Three Pound Brain

No bells, just whistling in the dark…

Thinker as Tinker

by rsbakker

[Okay, so this is just an organic extension of thinking through a variety of problems via a thought experiment posted by Eric Thomson over at the Brains blog. The dialogue takes place between an alien, Al, who has come to earth bearing news of Consciousness (or the lack of it), and a materialist philosopher, Mat, who, although playing the obligatory, Socratic role of the passive dupe, is having quite some difficulty swallowing what Al has to say. It’s rough, but I do like the picture it paints, if only because it really does seem to offer a truly radical way to rethink consciousness, why we find it so difficult, as well as the very nature of philosophical thought. I haven’t come up with a name for Al’s position yet, so if anyone thinks of something striking (or satirical) do let me know!]

Al: “Yes, yes, we went through this ‘conscious experience’ phase, ourselves. Nasty business. Brutish! You see, you’re still tangled in the distinction between system-intrinsic base information and the system-extrinsic composite information it makes possible. Since your primary cognitive systems have evolved to troubleshoot the latter, you lack both the information and the capacity to cognize the former. It’s yet another garden variety example of informatic parochialism combined with a classic heuristic mismatch. Had you not evolved linguistic communication, your cognitive systems would never need to bump against these constraints, but alas, availability for linguistic coding means availability for cognitive troubleshooting, so you found yourself stranded with an ocean of information you could never quite explain–what you call ‘consciousness’ or ‘subjective experience.’”

Mat: “So you don’t have conscious experience?”

Al: “Good Heavens, no, my dear fellow!”

Mat: “So you don’t see that red apple, there?”

Al: “Of course I see it, but I have no conscious experience of it whatsoever.”

Mat: “But that’s impossible!”

Al: “Of course it is, for a backward brain such as your own. It’s quite quaint, actually, all this talk of things ‘out there’ and things ‘in here.’ It’s all so deliciously low res. But you’ll begin tinkering with the machinery soon enough. The heuristics that underwrite your environmental cognition are robust, there’s no doubt about that, but they are far too crude and task-specific for you to conceive your so-called ‘conscious experience’ for what it is. Someday soon you’ll see that asking what redness is makes no more sense than asking what the letter m means!”

Mat: “But redness has to be something!”

Al: “To be taken up as a troubleshooting target of your environmental cognitive systems, yes, indeed. That, my human friend, is precisely the problem. The heuristic you confuse for redness was originally designed to be utterly neglected. But as I said, rendering it available for linguistic coding made it available to your cognitive systems as well, and we find this is where the trouble typically begins. It certainly was the case with our species!”

Mat: “But it exists here and now for me! I’m bloody-well looking at it!”

Al: “I know this is difficult. Our species never resolved these problems until our philosophers began diagnosing these issues the way neurologists diagnose their patients, when they abandoned all their granular semantic commitments, all the tedious conceptual arguments, and began asking the simple question of what information was missing and why. Looking back, it all seems quite extraordinary. How many times do you need to be baffled before realizing that something is wrong with you? Leave it to philosophers to blame the symptom!

“You are still at the point where you primarily conceive of your brain as a semantic (as opposed to informatic) engine, as something that extracts ‘relevant’ information from its noisy environments, which it then processes into models of the universe, causally constructed ‘beliefs’ or ‘representations’ that take the ‘real’ as their ‘content.’ So the question of red becomes the question of servicing this cognitive mode and model, but it stubbornly refuses to cooperate with either, despite their independent intuitive ease. You have yet to appreciate the way the brain extracts and neglects information, the way, at every turn, it trades in heuristics, specialized information adapted for uptake via specialized processors adapted for specific cognitive tasks. Semantic cognition, despite the religious pretension of your logicians, is a cognitive short-cut, no different than social cognition. Rather than information as such, it deals with environmental being, with questions of what is what and what causes what, much as linguistic cognition deals with communicative meaning, with questions of what means what and what implies what.

“Now as I said, red no more possesses being than ‘m’ possesses meaning. Soon you will come to see that what you call ‘qualia’ are better categorized as ‘phenomemes,’ the combinatorial repertoire that your environmental cognitive systems use to make determinations of being. They are ‘subexistential’ the way phonemes are ‘subsemantic.’ They seem to slip into cognitive vapour at every turn, threatening what you think are the hard-won metaphysical gains of another semantic myth of yours, materialism. You find yourself confronted with a strange dilemma: either you make a fetish of their constitutive, combinatorial function and make them everything, or you stress their existential intractability and say they are something radically different. But you are thinking like a philosopher when you need to think like a neuropsychiatrist.

“The question, ‘What am I bloody well looking at?’ exhausts the limits of semantic cognition for you. Within those limits, the question makes as much sense as any question could. But it is the product of a heuristic system, cognitive mechanisms whose (circumstance-specific) effectiveness turns on the systematic neglect of information. So long as you take semantic cognition at its word, so long as you allow it to dictate the terms of your thinking, you will persist in confusing the informatic phenomena of smonsciousness with the semantic illusion of consciousness.”

Mat: “But semantic cognition is not heuristic!”

Al: “That’s what all heuristics say–they tend to take their neglect quite seriously, as do you, my human friend! But the matter is easily settled: tell me, in this so-called ‘conscious experience’ of yours, can you access any information regarding its neural provenance?”

Mat: “No.”

Al: “Let me guess: You just ‘see things,’ transparently as it were. Like that red apple.”

Mat: “Yes.”

Al: “Sounds like your cognitive systems are exceedingly selective to me!”

Mat: “They have to be. It would be computationall–”

Al: “Intractable! I know! And evolution is a cheap, cheap date. So then, coarse-grain heuristics are quite inevitable, at least for evolved information systems such as ourselves.”

Mat: “Okay. So?”

Al: “So, heuristics are problem specific, are they not? Tell me, what should we expect from misapplications of our heuristic systems, hmm? What kind of symptoms?”

Mat: “Confusion, I suppose. Protracted controversy.”

Al: “Yes! So you recognize the bare possibility that I’m right?”

Mat: “I suppose.”

Al: “And given the miasma that characterizes the field otherwise, does this not place a premium on alternative possibilities?”

Mat: “But it’s just too much! You’re saying you’re not a subject!”

Al: “Precisely. No different than you.”

Mat: “That you experience, but you don’t have experience!”

Al: “Indeed! Indeed!”

Mat: “You don’t think you sound crazy?”

Al: “So the mad are prone to call their doctors. Look. I understand how this must all sound. If qualia don’t exist because they are ‘subexistential,’ how can they contribute to existence?

“Think of it this way. At a given moment, t1, qualia contribute, and you find yourself (quite in spite of your intellectual scruples) a naive realist, seeing things in the world. You see ‘through’ your experience. The red belongs to the apple, not you, and certainly not your brain! Subsequently, at t2, you focus your attention on the redness of the red, and suddenly you are looking ‘at’ your experience instead of through it. (In a sense, instead of speaking words, you find yourself spelling them).

“The thing to remember is that this intentional ‘directing at’ that seems so obvious when attending to your attending is itself another heuristic–at best. You might even say it’s the ‘Master Heuristic.’ Nevertheless, it could, for all you know, be an abject distortion, a kind of water-stain Mary Magdalene imposed by deliberative cognition on cognition. Either way, by deliberating the existence of red, you just dropped a rock into the chipper, old boy. ‘But what is red!’ you say. ‘It has to be something!’ You say this because you have to, given that deliberative cognition possesses no information regarding its own limits. As far as it’s concerned, all phenomenal rocks are made of natural wood.

“So this is the dilemma my story poses for you. Semantic cognition assumes universality, so the notion that something that it says exists–surely, at the bare minimum!–does not exist sounds nonsensical. So when I say to you, information is all that matters, your Master Heuristic, utterly blind to the limits of its applicability, whirs and clicks and you say, “But surely that information must exist! Surely what makes that information informative is whether or not it is true!” And so on and so forth. And it all seems as obvious as can be (so long as you don’t ask too many questions).

“Information is systematicity. You need to see yourself philosophically the way your sciences are beginning to see you empirically: as a subsystem. You rise from your environments and pass back into them, not simply with birth and death, but with every instant of your life. There is no ‘inside,’ no ‘outside,’ just availability and applicability. Information blows through, and you are little more than a spangled waystation, a kind of informatic well, filled with coarse-grained intricacies, information severed and bled and bent to the point where you natively confuse yourselves with something other than the environments that made you, something above and apart.

“Information is the solvent that allows cognition to move beyond its low-resolution fixations. It’s not a matter of what’s ‘true’ in the old semantic sense, but rather ‘true’ in the heuristic sense, where the term is employed as a cog in the most effective cognitive machine possible. The same goes for ‘existence’ or for ‘meaning.’ These are devices. So we make our claims, use these tools according to design as much as possible, and dispose of them when they cease being effective. We help them remember their limits, chastise them when they overreach. We resign ourselves to ignorance regarding innumerable things for want of information. But we remember that the cosmos is a bottomless well of information, both in its sum and in its merest part.

“And you see, my dear materialist friend, that you and all your philosophical comrades–all you ‘thinkers’–are actually tinkers, and the most inventive among you, engineers.”

Mat: “You have some sense of humour for an alien!”

Al: “Alienation comes with the territory, I’m afraid.”

Mat: “So there’s no room for materialism in your account?”

Al: “No more than idealism. There is just no such thing as the ‘mind-body dichotomy.’ Which is to say, the mind-body heuristic possesses limited applicability.”

Mat: “Only information, huh?”

Al: “Are you not a kind of biomechanical information processing system, one with limited computational capacity and informatic access to its environments? Is this not a cornerstone tenet of your so-called ‘materialism’?”

Mat: “Yes… Of course.”

Al: “So is not the concept ‘materialism’ a kind of component device?”

Mat: “Yes, of course, bu–”

Al: “But it’s a representational device, one that takes a fundamental fact of existence as its ‘content.’”

Mat: “Exactly!”

Al: “And so the Master Heuristic, the system called semantic cognition, has its say! So let me get this straight: You are a kind of biomechanical information processing system, one with limited computational capacity and informatic access to its environments, and yet still capable, thanks to some mysterious conspiracy of causal relations, of maintaining logical relations with its environments…”

Mat: “This is what you keep evading: you go on and on as if everything is empirical, when in point of fact, scientific knowledge would be impossible without a priori knowledge derived from logic and mathematics. Incorrigible semantic knowledge.”

Al: [his four eyes fluttering] “I’m accessing the relevant information now. It would seem that this is a matter of some controversy among you humans… It seems that certain celebrated tinkers taught that the distinction between a priori and a posteriori knowledge was artificial.”

Mat: “Yes… But, there’s always naysayers, always people bent on denying the obvious!”

Al: “Yes. Indeed. Like Galileo and Einste–”

Mat: “What are you saying?”

Al: “But of course. You must excuse me, my dear, dear human friend. I forgot how stunted your level of development is, how deeply you find yourself in the thrall of the processing and availability constraints suffered by your primate brain. You must understand that there is no such thing as logical relationships, at least not the way you conceive of them!”

Mat: “Now I know you are mad.”

Al: “You look more anxious than knowledgeable, I fear. No information system conjures or possesses logical relationships with its environments. What you call formal semantics are not ‘a priori’–oh my, your species has a pronounced weakness for narcissistic claims. Logic. Mathematics. These are natural phenomena, my friend. Only your blinkered mode of access fools you otherwise.”

Mat: “What are you talking about? Empirical knowledge is synthetic, environmental, something that can only be delivered through the senses. A priori knowledge is analytic, the product of thought alone.”

Al: “And is your brain not part of your environment?”

Mat: “Huh?”

Al: “Is your brain not part of your environment?”

Mat: “Of course it is.”

Al: “So you derive your knowledge of mathematics and logic from your environments as well.”

Mat: “No. Not at all!”

Al: “So where does it come from?”

Mat: “Nowhere, if that question is granted any sense at all. It is purely formal knowledge.”

Al: “So you access it… how?”

Mat: “As I said, via thought!”

Al: “So from your environment.”

Mat: “But it’s not environmental. It just… well… It just is.”

Al: “Symptoms, my good fellow. Remember what I said about symptoms. One thing you humans will shortly learn is that these kinds of murky, controversy-inspiring intuitions almost always indicate some kind of deliberative informatic access constraint. The painful fact, my dear fellow, is that not one of your tinkers really knows what they are doing when they engage in logic and mathematics. Think of the way you need notation, sensory prosthetics, to anchor your intuitions! But since no information regarding the insufficiency of what little access you have is globally broadcast, you assume that you access everything you need. And then it strikes you as miraculous, the connection between the formal and the natural.”

Mat: “Preposterous! What else could we require?”

Al: “Well, for one, information that would let you see your brain isn’t doing anything magical!”

Mat: “It’s not magical; it’s formal!”

Al: “Suit yourself. Would you care to know what it is you’re really doing?”

Mat: “Please. Enlighten me.”

Al: “That was sarcasm, there, wasn’t it? Wonderful! Have you ever wondered why logic and mathematics had to be discovered? It’s ‘a priori,’ you say. It’s all there ‘already,’ somewhere that’s nowhere, somehow. And yet, your access to it is restricted, like your access to environmental information, and the resulting knowledge is cumulative, like your empirical knowledge. ‘Scandal of deduction’ indeed! The irony, of course, is that you’re already sitting on your answer, insofar as you accept that you are a kind of biomechanical information processing system with finite computational capacity and limited informatic access to its environments. Some things that system discovers via system-extrinsic interventions, and others via system-intrinsic interventions. Your ‘formal semantics’ belongs to the latter. Not all interaction patterns are the same. Some you could say are hyperapplicable; like viruses they possess the capacity to manage systematic interventions in larger, far more complex interaction patterns. Your magical… er, formal semantics is simply the exploration of what we have long known are hyperapplicable interaction patterns.”

Mat: “But I’m not talking about ‘interaction patterns,’ I’m talking about inference structures.”

Al: “But they are the same thing, my primitive, hirsute-headed friend, only accessed via two very different channels, the one saturated with information thanks to the bounty of your environmentally-oriented perceptual systems, the other starved thanks to the penury of your brain’s in situ access to its own operations. The one ‘observational,’ thanks to the functional independence your cognitive systems enjoy relative to your environments, the other performative, given that the interaction patterns at issue must be ‘auto-emulated’ to be discovered. The connection between the formal and the natural strikes you as miraculous because you cannot recognize they are one and the same. You cannot recognize they are one and the same because of the radical differences in informatic access and cognitive uptake.”

Mat: “But you’re reasoning as we speak, making inferences to make your case!”

Al: [sighs] “You are, like, so low-res, Dude. Why do you think the status of your formal semantics is so controversial? Surely this also speaks to a lack of information, no? When trouble-shooting environmental problems, your systems are primed for ‘informatic insufficiency’–and well they should be, given that environmental informatic under-determination kills. That blur, for all your ancestors knew, could be a leopard.

“The situation is quite different when it comes to trouble-shooting your own capacities. Whenever you attend to what you call ‘first-person’ information, sufficiency becomes your typical default assumption. This is why so many of your philosophers insisted for so long that ‘introspection’ was the most certain thing, and disparaged perception. The very thing that persuaded tinkers to doubt the reliability of the latter was its capacity to flag its own limitations, its capacity to revise its estimations as new perceptual information became available–the ability of the system to correct for its failures. In other words, what makes perception so reliable is what led your predecessors to think it unreliable, whereas what makes introspection so unreliable is the very thing that led your predecessors to think it the most reliable. No news is good news as far as assumptive sufficiency is concerned!

“Information is additive. Flagging informatic insufficiency is always a matter of providing more information. Since more information always means more metabolic expense and slower processing, the evolutionary default is to strip everything down to the ‘truth,’ you could say–to shoot first and ask questions later!”

Mat: “So there’s no such thing as the truth, now?”

Al: “Not the way you conceive it. How could there be, given finite computational resources and limited informatic ability? How could your ‘view from nowhere’ be anything other than virtual, simply another heuristic? You have packed more magic into that term ‘formal’ than you know, my bald-bodied friend.

“Why do you think your logicians and mathematicians find it impossible to complete their formal systems short of generating inconsistencies? Computation is irreflexive. No device can perform computations on its own computations as it computes. For years your tinkers have been bumping into suggestive connections between incompleteness and thermodynamics, and even now, some are beginning to suspect the illusory nature of the ‘formal,’ that calculation and computation are indeed one and the same. All that remains is for you to grasp the trick of consciousness that makes it seem otherwise: the informatic deprivations that underwrite your illusion of reflexivity, and lead you to posit the ‘formal.’

“Let me hazard a guess: Tinkers in human computer science find themselves flummoxed by dualisms that bear an eerie resemblance to those found in your philosophical tinkering.”

Mat: “Why… Yes, as it so happens.”

Al: “I apologize. The question was rhetorical. I was accessing the relevant information as I spoke. I see here that no one knows how to connect the semantic level of programming to the implementation level of machine function. The ‘symbol grounding problem,’ some call it… Egad! Can’t you see this has been what I’ve been talking about all along?”

Mat: “I… I don’t understand.”

Al: “Once again, you admit you’re a kind of biomechanical information processing system, one with limited computational capacity and informatic access to its environments. You admit that as such a system, you suffer any number of even more severe informatic shortfalls with reference to your own operations. You admit that the numerous peculiarities you attribute to the mental and the semantic at least admit description in terms of information deficits. And yet you find it impossible to bracket your semantic intuitions, the magical belief that any biomechanical information processing system, let alone one with computational capacity and informatic access as limited as yours, can manufacture a kind of absolute ‘epistemic’ relation.

“Implementation, my pheromonal friend. Implementation. Implementation. Implementation. Implementation is the way, the concept you need, to maximize informatic applicability (problem-solving effectiveness) when tinkering with these problems. When you ‘program’ your computers, it’s primarily a matter of one implementation engendering another. Your ‘semantics’ is little more than the coarse-grain crossroads, a low-res cartoon compared to the informatics that you (as a so-called materialist) acknowledge underwrites it. You admit that semantics comes in an informatic box, and yet you insist on shoving that informatics into a semantic box, and you are mystified as to why nothing stays put.”

Mat: “Okay! Okay! So I’m willing to entertain the possibility that my reasoning has been distorted by the misapplication of some kind of ‘semantic stance,’ I guess. The ‘Master Heuristic,’ as you call it. Certain work in rational ecology suggests that the strategic exclusion of information often generates heuristics that are more effective problem solvers than optimization approaches. We evolve heuristics because of their computational speed and metabolic efficiency, but the hidden price we pay is limited applicability: heuristics are tools, and tools are problem specific. So how does all of this bear on the problem of conscious experience, again?”

Al: “But of course! My o my, we’ve strayed far afield, haven’t we? I have to admit, I’m overfond of preaching the virtues of Informatics to species as immature as yours. As I was saying earlier, qualia ‘exist’ relative to things existing in the world the way phonemes ‘mean’ relative to words meaning in language: in a participatory, modal sense. When you attend to qualia, they don’t offer much in the way of existential information, the way phonemes don’t offer much in the way of meaning. This is because, among other things, neither heuristic is matched for the cognitive system employed. Qualia, or phenomemes, are designed to build existence (when taken up by the appropriate cognitive system) the way phonemes are designed to build meaning (when taken up by the appropriate cognitive system).

“So, again, when a tinker submits qualia to the Master Heuristic for ‘existence processing’ they inevitably come up short. The question can’t be resolved. Phenomenality has to be something, and yet it doesn’t seem to be anything at all. You invent whole species of ‘zombies,’ whole genres of thought experiments, trying to get some purchase on the problem, to no avail.

“Consider the conceptual Necker Cube of phenomenology and naturalism, idealism and materialism, the way your tinkers can’t decide whether to put ‘existence’ here or ‘there,’ to make it this or ‘that.’ The Master Heuristic looks ‘through’ experience, and sees the fine-grained complexities of the world. The Master Heuristic looks at experience, and sees the coarse-grained obscurities of consciousness. Both are right there, as plain as the polyp on your face. Which is fundamental? Who rules the metaphysical roost?

“But, as the informatic concept of ‘granularity’ suggests, the dichotomy is false, the result of a basic heuristic misapplication. To complicate your own materialist truism: You are a systematic assemblage of multiple biomechanical information processing systems, heuristic devices, each possessing limited computational capacity and informatic access, each adapted to a specific set of problems. If you accept this claim, as I think you must, then you should accept that the problem of ‘heuristic misapplication’ looms over all your tinker–”

Mat: “Now you’re starting to sound like Rorty–or even worse, Wittgenstein!”

Al: “Two great tinkers, yes. Indeed, their critiques reveal some of the shortcomings of the Master Heuristic, at least to the extent they considered philosophical problems in terms of performance. But by trading semantic reference for normative competence, they simply traded one inapplicable heuristic, referential truth, for another one, normative truth. I’m offering you effectiveness. Effectiveness is the concept possessing maximal applicability. Information–systematic differences making systematic differences.

“But to get back to the issue at hand: the problem of ‘heuristic misapplication’ looms over all your tinkering. Because many of these heuristics are innate as well as blind to their limited applicability, what I’m saying here will inevitably cut against a number of intuitions. But then you materialists, I gather, have long since accepted that any adequate account of consciousness will likely involve any number of counterintuitive claims.”

Mat: “But you’re saying there’s no such thing as intentionality! No meaning. No agency. No morality!”

Al: “Don’t pretend to be surprised. You materialists may not like to write about it, but our surveillance indicates that many of you have privately abandoned these things anyway.”

Mat: “Many?–maybe. But not me.”

Al: “The thing to remember is that this is simply what you’ve been all along. Some heuristics, like love, say, are preposterously coarse-grained, and yet preposterously effective all the same, so long as their scope of application is constrained. Meaning, agency, morality: these heuristics are also enormously effective, given the proper scope. The thing to remember is that ‘information’ is also a heuristic–only one that is particularly effective and perhaps maximally applicable, at least given the scope of the problem you call ‘consciousness.’”

Mat: “But still, in the end, you’re just telling me to look at myself like a machine.”

Al: “The way your doctor looks at you–yes! The way you claim to look at yourself already, and the way natural science has always looked at you. The only real question, my thermally tepid friend, is one of why philosophy has consistently refused to play along–even when it claims to be playing along! And this is what I’m offering you: a way to understand why the obvious strikes you as so preposterous! Heuristics. You are an assemblage of heuristics, a concatenation of devices that take informatic neglect as their cornerstone problem-solving strategy, each of which is matched to a specific family of problems, and all of which are invisible to metacognition as well as utterly blind to one another–simply because no information regarding any of this finds its way to what you call ‘conscious cognition.’”

Mat: “So this is like Dennett’s stuff. You’re saying we need to make sure our various problem-solving stances are properly ‘matched,’ as you put it, to our problems.”

Al: “In a sense, yes–though an intentional heuristic like ‘stance’ is bound to hopelessly confuse things. I don’t think Dennett would want to say that you are ‘stances all the way down’ the way I’m suggesting that you are heuristics all the way down. As an intentional heuristic, ‘stance’ has limited applicability. This is why I’m offering you information: as heuristics go, it offers the highest resolution and the broadest scope. It allows you to explain the structure of other heuristics, as well as the kinds of misapplications that keep your tinkers so long-bearded and well-fed. It offers you, in other words, a real way out of all your ancestral confusions.

“And most pertinent to our discussion, it lets you understand why consciousness baffles you so.”

Mat: “Yes. The million dollar question.”

Al: “You are a systematic assemblage of multiple biomechanical information processing systems, heuristic devices, each possessing limited computational capacity and informatic access, each adapted to a specific set of problems.

“In most species, cognition is built around what might be called the ‘open channel principle.’ It evolved to manage the organism’s relationship to environmental change as efficiently as possible. As such, it neglects astronomical amounts of neural and environmental information, relying on those heuristics that optimize effectiveness against metabolic cost. It’s difficult to overstate how crucial this point is: the effectiveness of your cognition turns on the strategic neglect of certain kinds of information–what might be called ‘domain neglect.’

“Take, for instance, ‘aboutness.’ You have experiences OF things rather than experiences FROM things because information regarding the latter is much less germane to survival. Only in instances of perceptual vagueness or ambiguity do you perceive FROM information, typically in indirect ways (what you might call ‘squints,’ cues to gather supplemental information). So-called transparency, in other words, is a form of strategic neglect.

“Now consider how difficult FROM information is to think in semantic terms belonging to what I’ve called your Master Heuristic. Just try to imagine the experience belonging to a perceptual system that provided knowledge FROM, so that you have experience FROM trees rather than OF them. In other words, try to imagine opaque experience. Given your neurophysiology, the best you can do is imagine OF experience that it is FROM. Transparency–the Master Heuristic–is neurophysiologically compulsory.

“And here we stumble upon the threshold of what makes consciousness so incredibly difficult to fathom: it requires accessing the very information the system neglects (either out of structural necessity or to maximize the heuristic efficiency of environmental cognition).

“Why should this be a problem? Well, it’s an obvious misapplication, for one. As I said earlier, using cognitive systems designed to manage extrinsic environments to assess phenomemes amounts to dropping a rock into the woodchipper. You are trying to make words out of letters, existents out of the informatic constituents of existence.

“You persist because of cognitive neglect: you simply cannot see the limits of applicability pertaining to your Master Heuristic. You find yourself in an informatic dead-end, stranded with ‘experience’ as a peculiarly intractable existent. Given the absence of information pertaining to the insufficiency of the limited information gleaned by attending to experience, you assume it’s all the information you need–or what I earlier called sufficiency. Since deliberative experience OF experience, given its neglect, seems to capture the whole of experience, any information that reveals it to be a mere fragment is going to seem to contradict that experience, to be talking about something else. So you run into a powerful intuitive barrier, not unlike explaining the Mona Lisa to an ant born glued to her nose.”

Mat: “So, in the picture you’re painting, there is no final picture, only a… frame, I guess you could say, systematic differences making systematic differences, or effective information. Using that, we can think outside the limitations of our heuristics, and see that consciousness as we conceive of it is a kind of perspectival illusion, a figment of informatic constraints. There literally is no such thing outside our own… informatic frame of reference?”

Al: “Very good! Once you adopt information as your new Master Heuristic, the antipathy between redness and apples vanishes, along with all the other dichotomies arising out of the old, semantic Master Heuristic. The information that you ‘are’ is the information that you ‘see.’ Even though your ‘experience’ will continue to be stamped by the informatic neglect characteristic of semantics, you will know better.

“You are an assemblage of heuristic devices, each possessing limited computational capacity and informatic access, each adapted to a specific set of problems–no different than any of your animal relatives. Part of what distinguishes your species, my binocular friend, is your ability to make problems, to apply your heuristics to novel situations, adapt and enlarge them if possible, even leave them behind if need be–as well as to doggedly throw them at problems they simply cannot solve.”

Mat: “So semantic and normative conceptions of knowledge can’t solve the problem of consciousness simply because the heuristics they rely on, despite the illusion of universality leveraged by neglect, are too specialized. Isn’t this just cognitive closure you’re talking about, the argument that consciousness is to us what quantum mechanics is to chimpanzees, something simply beyond our cognitive capacity?”

Al: “The problem of cognitive applicability is quite different from that of cognitive closure, as certain tinkers among you have suggested. But the analogy to quantum mechanics is an instructive one: only when your physicists began thinking around, as opposed to through, their default heuristics, could they begin to make sense of what they were finding. This lesson is clear, one would think. Once you understand the scope of a particular heuristic, you have the means of leaving the problems it generates behind.

“But I fear the notion of relinquishing the Master Heuristic will be enormously difficult, if not impossible for many of your tinkers. For them, cognitive closure will apply, and this in turn will legislate any number of myth-preserving fancies. Those who can, those who come to understand that information precedes all the other clumsy, coarse-grained concepts you have inherited from your biology and your traditions, even existence, will come to see that they, an assemblage of heuristic devices, are their own informatic frame of reference, a system encompassing the vast swathe of the universe they ‘know’ and continuously open to the universe they don’t.”

Mat: “Excuse me for sounding dense, but you’re pretty much saying that the whole of philosophy is obsolete!”

Al: “Indeed I am, my primitive friend. But please, don’t feign any shock or surprise: a great proportion of your scientists have been saying as much for quite some time. The effectiveness of information has rendered it a social and cultural tsunami, the conceptual anchor of the most profound transformation to ever hit your species, and the most your philosophical tinkers can muster are anaemic attempts to stuff it into some kind of semantic box!

“But narcissistic idylls of your ignorance are now at an end. The mythic assumption, that you humans alone evolved some kind of monolithic, universal cognition, is entirely understandable, given the recursive blindness of your brains. But now you are beginning to understand that you are not so different from your genetic cousins, that you only seemed radically novel because of the drastic information access constraints faced by autocognition. More and more you will come to see semantics as a parochial detour forced upon you by the vagaries and exigencies of your evolution. More and more you will turn to informatics to take its place.”

Mat: “Well, to hell with that! I say.”

Al: “Deny, if you wish. The effectiveness of information is such that it will remake you, whether you believe in it or not.”

Attack of the Phenophages

by rsbakker

Aphorism of the Day: If you think of knowledge in fractal terms, you can see yourself as a wan reflection in the bottom of a rain drop as fat as the cosmos.

Or is that just me pissing on your leg?

.

Imagine a viscous, gelatinous alien species that crawls into human ear canals as their hosts sleep, then over the course of the night infiltrates the conscious subsystems of the brain. Called phenophages, these creatures literally feed on the ‘what-likeness’ of conscious experience. They twine about the global broadcasting architecture of the thalamocortical system, shunting and devouring what would have been conscious phenomenal inputs. In order to escape detection, they disconnect any system that could alert their host to the absence of phenomenal experience. More insidiously still, they feed forward any information the missing phenomenal experience would have provided the cognitive systems of their host, so that humans hosting phenophages comport themselves as if they possessed phenomenal experience in all ways. They drive through rush hour traffic, complain about the sun in their eyes, compliment their spouses’ choice of clothing, ponder the difference between perfumes, extol the gustatory virtues of their favourite restaurant, and so on.

Finally, after several years, neurologists detect the phenophages, and through various invasive and noninvasive means, discover their catastrophic consequences. Even though they have no way of removing the parasites, they are able to reconnect the systems that allow the infected to at least cognize the fact they have no experience. The problem is that doing so seems to drive a good number of these patients, whom they term ‘phenophagiacs,’ insane, when they had evinced only psychologically well-adjusted behaviour before.

This scenario raises a number of questions for me, but I thought I would start with the most basic: Are unwitting phenophagiacs actually conscious in any meaningful sense? Are the witting?

A twist on this scenario involves the rise of a psychological condition called ‘phenophagic hysteria,’ where numbers of uninfected individuals, perhaps unduly affected by the intense media attention garnered by the alien infestation, come to believe they are infected even though they are not. They act in all ways as if they had experience, but when queried, they (unlike preoperative phenophagiacs) insist they have no experience whatsoever, that they simply ‘know’ in the absence of any conscious ‘feel’ of any sort. When these individuals are tested, researchers discover that they indeed exhibit a set of activation patterns that are unique to them, and conclude that somehow, these individuals have ‘blocked’ the circuits enabling conscious awareness of their conscious awareness.

So the follow-up question would be: Are phenophagic hysterics conscious in any meaningful sense?

Logic of Neglect

by rsbakker

Aphorism of the Day I: Consciousness is a little animal in our heads, curled up and snoozing, at times peering into the neural murk, otherwise dreaming what we call waking life.

Aphorism of the Day II: People are almost entirely incapable of distinguishing the quality of what is said from the number and status of the ears listening. All the new can do is keep whispering, hoping against hope that something might be heard between the booming repetitions.

.

What effect do constraints on informatic availability and cognitive capacity have on our ability to make sense of consciousness? This is one of those questions that philosophers literally dream of stumbling on, questions so obvious, so momentous in implication, that their answers have the effect of transforming orthodox understanding–if you’re lucky enough to catch the orthodoxy’s ear, that is!

The aim of the Blind Brain Theory (BBT) is to rough out the ‘logic of neglect’ that underwrites ‘error consciousness,’ the consciousness we think we have. It proceeds on the noncontroversial presumption that consciousness is the product of some subsystem of the brain, and that, as such, it operates within a variety of informatic constraints. It advances the hypothesis that the various perplexities that bedevil our attempts to explain consciousness are largely artifacts of these informatic constraints. From the standpoint of BBT, what we call the Hard Problem conflates two quite distinct difficulties: 1) the ‘generation problem,’ the question of how a certain conspiracy of meat can conjure whatever consciousness is; and 2) the ‘explanandum problem,’ the question of what any answer to the first problem needs to explain to count as an adequate explanation. Its primary insight turns on the role lack plays in structuring conscious experience. It argues that philosophy of mind needs to keep its dire informatic straits clearly in view: once you understand that we make informatic frame-of-reference (IFR) errors regarding consciousness similar to those we are prone to make regarding the world, you acknowledge that we might be radically mistaken about what consciousness is.

Radically mistaken about everything, in fact.

What is an ‘informatic frame-of-reference’ error? Consider the most famous one of all: geocentrism. We perceive ourselves moving whenever a large portion of our visual field moves–when we experience ‘vection,’ as psychologists call it. Short of this and vestibular effects, a sense of motionlessness is the cognitive default. As a result we stand still when the world stands still relative to us. So when our ancestors looked into the heavens and began charting the movement of celestial bodies, the possibility that they were also moving seemed, well, preposterous. What makes this error perspectival (or IFR) is the way it turns on the combination of cognitive capacity and information available. Given the information available, and given our cognitive capacities, geocentrism had to seem obviously true: “the world also is established,” Psalms 93:1 reads, “that it cannot be moved.” As informatically earthbound, we quite simply lacked access to the information our cognitive capacities required to overcome our native intuition of motionlessness. We found ourselves informatically encapsulated, stranded with insufficient information and limited cognitive resources. Thus the revolutionary significance of Galileo and his Dutch Spyglass–and of science in general.

According to BBT, what we call ‘consciousness,’ what phenomenologists think they are describing, is largely an illusion turning on analogous informatic frame-of-reference errors. The consciousness we think we have, that we think we need to explain, quite simply does not exist.

That we can and do make analogous IFR errors regarding consciousness is not all that implausible in principle. A good deal of the debate in the cognitive sciences prominently features questions of informatic access. Given that the cognition of information gleaned from conscious experience relies on the same mechanisms as the cognition of information gleaned from our environments, we should expect to find analogous errors.

We should expect, for instance, to encounter instances of ‘noocentrism’ analogous to the description of geocentrism provided above. Geocentrism assumes the earth is outside of play, that it remains fixed while everything else endures positional transformations. Is this so different from the intuitions that seem to underwrite our ancestral understanding of the soul as something ‘outside of play’? Or how about the bootstrapping illusion that seems so integral to our sense of ‘free will’?

Given that conscious (System 2) deliberation is brainbound, only the information that makes it to conscious experience (via ‘broadcasting’ or ‘integration’) is available for cognition. With geocentrism, the fact that we are earthbound constrains the environmental information available for conscious experience and thus conscious deliberation. With noocentrism, the fact that cognition is brainbound constrains the neural information available for conscious deliberation. When conscious deliberation turns to conscious experience itself (rather than the environmental information it communicates) the limits of availability (encapsulation) ensure that a variety of information remains inaccessible–occluded.

What information is occluded? Almost all of it, if you consider the 38,000 trillion operations per second your brain is allegedly performing this very instant. Everything really hinges on the adequacy of what little we get.

One of the things I love about Peter Hankins’ Conscious Entities site is his images, the way he uses filter effects to bleed information from photographic portraits until only line sketches remain. Not only does it look cool, I couldn’t imagine a more appropriate stylistic trope for a website devoted to consciousness.

Why? Imagine running your perception of environmental reality through various ‘existential filters’–performing a kind of informatic deconstruction of your perceptual experience. Some of this information is phenomenal, but much of it is also cognitive. That red before you belongs to an apple, one object among many possessing a history in addition to a welter of properties. You know, for instance, that you can bite it, chew it into little pieces. In fact, you have a positively immense repertoire of ‘apple information’ at your disposal, which should come as no surprise, given that your brain is primarily an environmental information processing machine, one possessing an ancient evolutionary pedigree.

What your brain is not, however, is primarily a consciousness information processing machine. Because the brain is primarily designed to exploit ‘first order’ environmental as opposed to ‘second order’ experiential information, we should perhaps expect a dramatic discrepancy between 1) the quantity of environmental versus experiential information available; and 2) the way environmental and experiential information are matched to various cognitive systems.

One of the most striking things about all the little perplexities that plague consciousness research is the way they can be interpreted in terms of informatic deprivation, as the result of our cognitive systems accessing too little information, mismatched information, or partial information. To get a sense of this, think of the information at your disposal regarding apples and begin subtracting. You can begin with the nutritive information you have, what allows you to identify apples as a kind of food. Then you can subtract the phylogenetic information you’ve encountered, what allows you to identify the apple as a fruit, as a reproductive organ belonging to a certain family of trees. Then you can subtract the information that allows you to distinguish apples from inorganic objects, as something living. Then you can subtract all the causal information you’ve accumulated, the information that allows you to cognize the apple as an effect (possessing effects). Then you can subtract all the substantival information, what allows you to conceive the apple as an aggregate, something that can be bitten, or smashed into externally related bits. Then you can move on to basic spatial information, what allows you to conceive the apple as a three-dimensional object possessing a position in space, as something that can be walked around and regarded from multiple angles. At the very end of the informatic leash, you have the differentiations that allow you to identify this apple versus other things, or even as a figure versus some background.
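
The same subtraction can be rendered as a toy sketch in Python (the layers below merely paraphrase the paragraph above; nothing here models actual cognition):

```python
# Strip the layers of 'apple information' one at a time and watch what
# cognition has left to work with.
layers = [
    ("nutritive",    "a kind of food"),
    ("phylogenetic", "a fruit of a certain family of trees"),
    ("biological",   "something living"),
    ("causal",       "an effect, possessing effects"),
    ("substantival", "an aggregate that can be bitten or smashed"),
    ("spatial",      "a three-dimensional object that can be walked around"),
]
residue = "bare differentiation: this versus that, figure versus ground"

available = dict(layers)
for name, _ in layers:
    del available[name]
    print(f"minus {name:12s}: {sorted(available) or [residue]}")
```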

So, back to our parallel between geocentrism and noocentrism. As I said above: when conscious deliberation turns to conscious experience itself (rather than the environmental information it communicates) the limits of availability (encapsulation) ensure that a variety of information remains inaccessible–occluded. Deliberative cognition (reflection) has no access to causal information: the neuronal provenance of conscious experience is entirely occluded. So when deliberative cognition attempts to identify precursors, it only has sequels to select from. As a result, it seems to have no extrinsic precursors, to be some kind of ‘causa sui,’ moveable only by itself.

It has no access to spatial information per se: we have a foggy sense of various phenomenal elements ‘occurring within’ a larger sensorium, which we are wont to ‘place’ in our ‘heads’ in our environment, but it’s not as if our sensorium is ‘spatial’ the way an apple is spatial: since it is brainbound, deliberative cognition cannot access information regarding our sensorium by ‘walking around it,’ changing our position relative to it. Lacking this environmental channel, it has to be ‘immovable’ with reference to cognition–once again, in a manner not so different from what we see with geocentrism.

Deliberative cognition likewise has no substantival information to draw on: we can’t, as Descartes so famously noted, break our sensorium up into externally-related parts. Absent this information, the cognitive tendency is to mistake aggregates as individuals, as substantival wholes. Here we see one of the more crucial insights belonging to BBT: ‘internal relationality,’ and the concepts of holism that fall out of it (concepts that govern our understanding of semantic notions such as ‘context’), constitute a kind of cognitive default pertaining to the absence of information. Our notion of ‘meaning holism,’ just for instance, is an obvious artifact of brainbound informatic parochialism according to BBT, much as Aristotle’s notion of ‘celestial spheres’ is the artifact of earthbound informatic parochialism. Lacking the information required to see stars as distant, as externally-related objects scattered through the void of space, it seems sensible to interpret them as salient features of an individual structure, an immense sphere.

We all know that our ability to solve problems depends on the relation between the information and computational resources available. BBT simply applies this commonsense knowledge to consciousness, and interprets the perplexities away, relying on what are actually quite commonsense intuitions. Beginnings have no precursors. Blurs lack internal structure.

If you’re steeped in consciousness literature and reading this with a squint, thinking that I’m missing this or misinterpreting that, or that it’s just gotta-be-wrong, or ‘yah-yah-it’s-no-big-whup,’ then just ask yourself: How does the relation between available information and computational resources bear on the problem of consciousness? It could be ignorance-fed hubris on my part, but I’m convinced thinking this question through will lead you to many of the same conclusions suggested by BBT.

I’ve been sitting on the basic outline of this approach for twelve years. Since the ‘Now’ and its paradoxes were my first philosophical obsession, something that had driven me cross-eyed more times than I could count, I realized that BBT was a potential game-changer given the ease with which it explained its perplexities away. Just consider what I mentioned above: Lacking informatic access to the neural precursors of conscious experience, deliberative cognition finds itself on a strange kind of informatic treadmill. It can track temporal differentiations effectively enough within conscious experience without, however, being able to track the temporal differentiation of conscious experience itself. It’s an old axiom of psychophysics that what cannot be differentiated is perceived as the same. And thus the ancient perplexity noted by Aristotle–the way the now is always different and yet somehow the same–is explained (and much else besides).

The reason I’m thumping the tub as loudly as I can now is that, quite frankly, I could feel the rest of the field moving in. On the continental side of the philosophical border, I saw more and more thinkers tackling the difficulties posed by the cognitive sciences, whereas on the analytic side, I found more and more thinkers accepting, in a variety of registers, the central assumption of BBT: that the consciousness ‘revealed’ by introspection (or deliberative metacognition or higher order thought) is little more than a water-stain informatically speaking, an impoverished blur that only seems a ‘plenum,’ something both ‘full’ and ‘incorrigible’ (or ‘sufficient’ in BBT-speak) because being brainbound, it has little or no information to the contrary.

My problem, as always, lies first in the idiosyncrasy of my background, the way I’ve developed all these concepts and ideas in isolation from the academy, and so must inevitably come across as naive or amateurish to ingroup, specialist ears; and second in my bizarre inability to see any of my nonfiction enterprises to the point of submission, never mind publication. This latter problem, I’m sure, is shrink material. The former is bad enough. The only thing worse than being an iconoclast in a field filled with crackpots is being an iconoclast who can only seem to blog about his ‘oh-so-special’ ideas!

The logic of neglect operates across all levels.

To boot, I’m sure being a fantasy novelist doesn’t help, particularly when it comes to an institution as insecure about its cognitive credentials as philosophy! Ah, but such is life. Toil and obscurity, my brothers. Toil and obscurity. For those of you who find this wankery insufferable, I apologize. If you want me to shut up already, ask your philosophy professor to take a looksee and correct my errant ways. In the meantime, I am, as always, the meat-puppet of my muse. And for those of you who have developed a morbid fascination with this morbid fascination, this strange intellectual adventure through the fantasies that constitute our souls, I need to extend a big… fat… danke…

Smoking ideas has to be one of the better ways to waste one’s time.

Life as Alien Transmission

by rsbakker

Aphorism of the Day: The purest thing anyone can say about anything is that consciousness is noisy.

.

In order to explain anything, you need to have some general sense of what it is you’re trying to explain. When it comes to consciousness, we don’t even have that. In 1983, Joseph Levine famously coined the phrase ‘explanatory gap’ to describe the problem facing consciousness theorists and researchers. But metaphorically speaking, the problem resembles an explanatory cliff more than a mere gap. Instead of an explanandum, we have noise. So whatever explanans anyone cooks up, like Tononi’s IITC, for instance, is simply left hanging. Given the florid diversity of incompatible views, the consensus will almost certainly be that the wrong thing is being explained. The Blind Brain Theory offers a diagnosis of why this is the case, as well as a means of stripping away all the ‘secondary perplexities’ that plague our attempts to nail down consciousness as an explanandum. It clears away Error Consciousness, or the consciousness you think you have, given the severe informatic constraints placed on reflection.

So what, on the Blind Brain view, makes consciousness so frickin difficult?

Douglas Adams famously posed the farcical possibility that earth and humanity were a kind of computer designed to answer the question of the meaning of life. I would like to pose an alternate, equally farcical possibility: what if human consciousness were a code, a message sent by some advanced alien species, the Ring, for purposes known only to them? How might their advanced alien enemies, the Horn, go about deciphering it?

The immediate problem they would face is one of information availability. In normal instances of cryptanalysis, the coded message or ciphertext is available, as is general information regarding the coding algorithm. What is missing is the key, which is required to recover the original message, or plaintext, from the ciphertext. In this case, however, the alien cryptanalysts would only have our reports of our conscious experiences to go on. Their situation would be hopeless, akin to attempting to unravel the German Enigma code via reports of its existence. Arguably, becoming human would be the only way for them to access the ciphertext.
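
Since the scenario leans on these three terms, a minimal sketch may help fix them. The following Python toy is purely illustrative (the scenario specifies no algorithm, and the message is invented); it implements a one-time pad, the limiting case where the ciphertext alone is useless without the key:

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte.
    # The same operation both encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, key))

plaintext = b"conscious experience"          # the original message
key = os.urandom(len(plaintext))             # the missing ingredient

ciphertext = xor_cipher(plaintext, key)      # all a cryptanalyst could hope to see
recovered = xor_cipher(ciphertext, key)      # trivial once the key is in hand

assert recovered == plaintext
# With a truly random key as long as the message, the ciphertext is equally
# consistent with every possible plaintext: no amount of computation
# recovers the message without the key.
```

The Horn begin in an even worse position: they lack not only the key but the ciphertext itself, having nothing but our reports that a ciphertext exists.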

But say this is technically feasible. So the alien enemy cryptanalysts transform themselves into humans, access the ciphertext in the form of conscious experience, only to discover another apparently insuperable hurdle: the issue of computational resources. To be human is to possess certain on-board cognitive capacities, which, as it turns out, are woefully inadequate. The alien cryptanalysts experiment, augment their human capacities this way and that, but they soon discover that transforming human cognition has the effect of transforming human experience, and so distorting the original ciphertext.

Only now do the Horn realize the cunning ingenuity of their foe. Cryptanalysis requires access both to the ciphertext and to the computational resources required to decode it. As advanced aliens, they possessed access to the latter, but not the former. And now, as humans, they possess access to the former, but at the cost of the latter.

The only way to get at the code, it seems, is to forgo the capacity to decode it. The Ring, the Horn cryptanalysts report, have discovered an apparently unbreakable code, a ciphertext that can only be accessed at the cost of the resources required to successfully attack it. An ‘entangled observer code,’ they call it, shaking their polyps in outrage and admiration, one requiring the cryptanalyst become a constitutive part of its information economy, effectively sequestering them from the tools and information required to decode it.

The only option, they conclude, is to destroy the message.

The point of this ‘cosmic cryptography’ scenario is not so much to recapitulate the introspective leg of McGinn’s ‘cognitive closure’ thesis as to frame the ‘entangled’ relation between information availability and cognitive resources that will preoccupy the remainder of this paper. What can we say about the ‘first-person’ information available for conscious experience? What can we say about the cognitive resources available for interpreting that information?

Explanations in cognitive science generally adhere to the explanatory paradigm found in the life sciences: various operations are ‘identified’ and a variety of mechanisms, understood as systems of components or ‘working parts,’ are posited to discharge them. In cognitive science in particular, the operations tend to be various cognitive capacities or conscious phenomena, and the components tend to be representations embedded in computational procedures that produce more representations. Theorists continually tear down and rebuild what are in effect virtual ‘explanatory machines,’ using research drawn from as many related fields as possible to warrant their formulations. Whether the operational outputs are behavioural, epistemic, or phenomenal, these virtual machines inevitably involve asking what information is available for what component system or process.

I call this process of information tracking the ‘Follow the Information Game’ (FIG). In a superficial sense, playing FIG is not all that different from playing detective. In the case of criminal investigations, evidence is assembled and assessed, possible motives are considered, various parties to the crime are identified, and an overarching narrative account of who did what to whom is devised and, ideally, tested. In the case of cognitive investigations, evidence is likewise assembled and assessed, possible evolutionary ‘motives’ are considered, a number of contributing component mechanisms are posited, and an overarching mechanistic account of what does what for what is devised for possible experimental testing. The ‘doing’ invariably involves discharging some computational function, processing and disseminating information for subsequent, downstream or reentrant computational functions.

The signature difference between criminal and cognitive investigations, however, is that criminal investigators typically have no stake or role in the crimes they investigate. When it comes to cognitive investigations, the situation is rather like a bad movie: the detective is always in some sense under investigation. The cognitive capacities modelled are often the very cognitive capacities modelling. Now if these capacities consisted of ‘optimization mechanisms,’ devices that weight and add as much information as possible to produce optimal solutions, only the availability of information would be the problem. But as recent work in ecological rationality has demonstrated, problem-specific heuristics seem to be evolution’s weapon of choice when it comes to cognition. If our cognitive capacities involve specialized heuristics, then the cognitive detective faces the thorny issue of cognitive applicability. Are the cognitive capacities engaged in a given cognitive investigation the appropriate ones? Or, to borrow the terminology used in ecological rationality, do they match the problem or problems we are attempting to solve?

The question of entanglement is essentially this question of cognitive applicability and informatic availability. There can be little doubt that our success playing FIG depends, in some measure, on isolating and minimizing our entanglements. And yet, I would argue that the general attitude is one of resignation. The vast majority of theorists and researchers acknowledge that constraints on their cognitive and informatic resources regularly interfere with their investigations. They accept that they suffer from hidden ignorances, any number of native biases, and that their observations are inevitably theory-laden. Entanglements, the general presumption seems to be, are occupational hazards belonging to any investigative endeavour.

What is there to do but muddle our way forward?

But as the story of the Horn and their attempt to decipher the Ring’s ‘entangled observer code’ makes clear, the issue of entanglement seems to be somewhat more than a run-of-the-mill operational risk when consciousness is under investigation. The notional comparison of the what-is-it-likeness, or the apparently irreducible first-person nature of conscious experience, with an advanced alien ciphertext doesn’t seem all that implausible given the apparent difficulty of the Hard Problem. The idea of an encryption that constitutively constrains the computational resources required to attack it, a code that the cryptanalyst must become to simply access the ciphertext, does bear an eerie resemblance to the situation confronting consciousness theorists and researchers–certainly enough to warrant further consideration.

A Brick o’ Qualia: Tononi, Phi, and the Neural Armchair

by rsbakker

Aphorism of the Day: The absence of light is either the presence of dark–or death. For every decision made, death is the option not taken.

Aphorism of the Day II: Things we see through: eyes, windows, words, images, thoughts, lies, lingerie, and excuses.

.

So Giulio Tononi’s new book Phi: A Voyage from the Brain to the Soul has been out for a few weeks now, and I’ve had this ‘review’ coalescing in my brain’s gut (the reason for the scarequotes should become evident in due course). In the meantime, as fate would have it, I’ve stumbled across several reviews of the book, including one that is genuinely philosophically savvy, as well as several other online considerations of his theory of consciousness. And of course, everyone seems to have an opinion quite the opposite of my own.

First, I should say that this book is written for the layreader: it is, in fact, the most original, beautiful general interest book on consciousness I’ve read since Douglas Hofstadter’s Gödel, Escher, Bach: An Eternal Golden Braid – a book I can’t help but think provided Tononi with more than a little inspiration – as well as a commercial argument to get his publishers on board. Because on board they most certainly were: Phi is literally one of the most gorgeous books I have ever purchased, so much so that ‘book’ doesn’t seem to do it justice. Volume would be a better word! The whole thing is printed on what looks like #100 gloss text paper. Posh stuff.

Anyway, if you’re one of my fiction readers who squints at all this consciousness stuff, this is the book for you.

What makes this book extraordinary is the way it ‘argues’ across numerous noncognitive registers. Tononi, with the cooperation of his publisher, put a great deal of effort into crafting the qualia of the book, to create, in a sense, a kind of phenomenal ‘argument.’ It’s literally bursting with imagery, a pageant of photographic plates that continually frame the text. He writes with a kind of pseudo-Renaissance diction, hyperbolic, dense with cultural references, and downright poetic at times. He uses a narrative and dialogic structure, taking Galileo as his theoretical protagonist. With various guides, the father of science passes through a series of episodes with thinly disguised historical interlocutors, some of them guides, others mere passersby. This is obviously meant to emulate Dante’s Inferno, but sometimes, unfortunately, struck me as more reminiscent of “A Christmas Carol.” Following each of these episodes, he provides ‘Notes,’ which sometimes clarify and other times contradict the content of the preceding narrative and dialogue, generating a number of postmodern effects in genuinely unprecedented ways. Phi, in other words, is entirely capable of grounding thoroughly literary readings.

The result is that his actual account, the Information Integration Theory of Consciousness (IITC), is deeply nested within a series of ‘quality intensive’ expressive modes. The book, in other words, is meant to be a kind of tuning fork, something that hums with the very consciousness that it purports to explain. A brick o’ qualia…

An exemplar of Phi itself, the encircled ‘I’ of information.

So at this expressive level, at least, there is no doubting the genius of the book. Of course there are many things I could quibble about (including sexism, believe it or not!) but they strike me as too idiosyncratic to belong in a review meant to describe and evaluate the book for others.

What I’ve found so surprising these past weeks is the apparent general antipathy to IITC in consciousness research circles, when personally, I class it in the same category as its main scientific competitors, like Bernard Baars’ Global Workspace theory of consciousness. And unlike pretty much everyone I’ve read, I think Tononi’s account of qualia (the term philosophers use for the purely phenomenal characteristics of consciousness, the redness of red, and so on) can actually do some real explanatory work.

Most seem to agree with Peter Hankins’ assessment of IITC on Conscious Entities, which boils down to ‘but red ain’t information’! Tononi, I admit, does have the bad habit of conflating his primary explanans with his explanandum (and thus flirting with panpsychism), but I don’t think he’s arguing that red is information so much as that information integration can explain red as much as it needs to be explained.

Information integration builds on Gerald Edelman’s guiding insight that whatever consciousness is, it has something to do with differentiated unity. ‘Phi’ refers to the quantity of information (in its Shannon-Weaver incarnation) a system possesses over and above the information possessed by its component parts. One photodiode can be either on or off. Add another, and all you have are two photodiodes that are on or off. Since they are disconnected, they generate no information over and above on/off. Integrate them, which is to say, plug them into a third system, and suddenly the information explodes: on/on, on/off, off/on, off/off. Integrate another, and you have: on/on/on, on/on/off, on/off/off, off/off/off, off/off/on, off/on/on, off/on/off, on/off/on. Integrate another and… you get the picture.
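
The combinatorics are easy to check for yourself. A few lines of Python, offered purely as an illustration of the exploding repertoire and not as Tononi’s actual phi calculation (which measures information over and above what the parts carry), enumerate the joint states:

```python
# Enumerate the joint-state repertoire of n integrated photodiodes.
# Illustrative only; Tononi's phi is a far subtler measure.
from itertools import product
from math import log2

for n in range(1, 5):
    repertoire = list(product(("on", "off"), repeat=n))
    # Disconnected, each diode yields 1 bit on its own; integrated,
    # the system distinguishes 2**n joint states as a single repertoire.
    print(f"{n} diodes: {len(repertoire)} states, {log2(len(repertoire)):.0f} bits")
```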

Tononi argues that consciousness is a product of the combinatorial explosion of possible states that accompanies the kind of neuronal integration that seems to be going on in the thalamocortical system of the human brain. And he claims that this can explain what is going on with qualia, the one thing in consciousness research that seems to be heavier than Thor’s hammer.

Theoretically speaking, this puts him in a pretty pickle, because when it comes to qualia, two warring camps dominate the field: those who think qualia are super special, and those who think qualia are not what we make of them, conceptually incoherent, or impossible to explain without begging the question. Crudely put, the problem Tononi faces with the first tribe is that as soon as he picks the hammer up, they claim that it wasn’t Thor’s hammer after all, and the problem he faces with the second tribe is that they don’t believe in Thor.

The only safe thing you can say about qualia is that they are controversial.

Tononi thinks the explanation will look something like:

The many mechanisms of a complex, in various combinations, specify repertoires of states they can distinguish within the complex, above and beyond what their parts can do: each repertoire is integrated information–each an irreducible concept. Together they form a shape in qualia space. This is the quality of experience, and Q is its symbol. (217)

The reason I think this notion has promise lies in the way it explains the apparent inexplicability of things like red. And this, to me, seems as good a place to begin as any. Gary Drescher, for instance, argues that qualia should be understood by analogy to gensyms in Lisp programming. Gensyms are elements that are inscrutable to the program outside of their distinction from other elements. Lisp can recognize only that a gensym is a gensym, and none of its properties.

Similarly, we have no introspective access to whatever internal properties make the red gensym recognizably distinct from the green; our Cartesian camcorders are not wired up to monitor or record those details. Thus we cannot tell what makes the red sensation redlike, even though we know the sensation when we experience it. (Good and Real, 81-2)
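
For those without Lisp handy, a rough Python stand-in (my analogy, not Drescher’s) makes the point: bare sentinel objects are distinguishable by identity alone, with nothing further for the program to introspect.

```python
# A rough Python stand-in for Lisp gensyms: bare sentinel objects that
# the program can tell apart but cannot otherwise inspect.

red = object()
green = object()

def which(quale) -> str:
    """We recognize each sensation when we 'experience' it..."""
    return "red" if quale is red else "green"

print(which(red), which(green))  # -> red green

# ...but nothing in the objects says what makes red 'redlike': beyond
# bare identity, there are no properties available to introspection.
```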

Now I think this analogy fails in a number of other respects, but what gensyms do is allow us to see the apparent inexplicability of qualia as an important clue, as a positive feature possessing functional consequences. Qualia qua qualia are informatically impoverished, ‘introspectively opaque,’ so much so you might almost think they belonged to a system that was not designed to cognize them as qualia – which, as it turns out, is precisely the case. (Generally speaking, theoretical reflection on experience is not something that will get you laid). So in a sense, the first response to the ‘problem of qualia’ should be, Go figure. Given the exorbitant metabolic cost of neural processing, we should expect qualia to be largely inscrutable to introspection.

For Tononi, Q-space allows you to understand this inscrutability. Red is a certain dedicated informatic configuration (‘concept’) that is periodically plugged into the larger, far more complex succession of configurations that occupy the whole.

Now for all its complexity, it’s important to recall that our brains are overmatched by the complexity of our environments. Managing the kind of systematic relationships with our environments that our brain does requires a good deal of complexity reduction, heuristic mechanisms robust enough to apply to as many circumstances as possible. So a palette of environmental invariants is selected according to the whims of reproductive success, which then forms the combinatorial basis for ‘aggregate heuristic mechanisms’ (or ‘representations’) capable of systematically interacting with more variant, but recurrent, features of the environment.

So red helped our primate ancestors identify apples. As thalamocortical complexity increased, it makes sense that our cognitive capacities would adapt to troubleshoot things like apples instead of things like red, simply because the stakes of things like light reflected at 650nm are low compared to things like apples. Qualia, you could say, are existentially stable. Redness doesn’t ambush or poison or bloom or hang from perilous branches. It makes sense that the availability of information and corresponding cognitive resources would covary with the ‘existential volatility’ of a given informatic configuration (prerepresentational or representational).

What Tononi gets is that red engages the global configuration in a fixed way, one that allows it far fewer ‘degrees of dynamic reconfiguration’ than something like apples. Okay, so this last bit isn’t so much Tononi as the way IITC plugs into the Blind Brain Theory (BBT). But his insight provides a great starting point.

So what explains the ‘redness’ of red, the raw, ineffable feel of pain? This is where qualiaphiles will likely want to jump ship. From Tononi’s Q-space perspective, a given space (heuristic configuration) simply is what it is – ‘irreducible,’ as he puts it. Thanks to evolution, we inherited a wild variety of differentiating shapes, or qualia, by happenstance. If you want to understand what makes red red, let me refer you to the anthropic principle. It’s part of basic cable. These are simply the channels available when cable first got up and running.

Returning to BBT, the thing to appreciate here is what I call encapsulation. Even though the brain is an open system, conscious experience only expresses information that is globally broadcast or integrated. If it is the case that System 2 deliberation (reflection) is largely restricted to globally broadcast or integrated information, then our reasoning is limited to what we can consciously experience. Our senses, of course, provide a continuous stream of environmental information which finds itself expressed in transformations of aggregate heuristic configurations, representations. With apples we can vary our informatic perspective and sample hitherto unavailable information to leverage the various forms of dynamic reconfiguration that we call cognition.

Not so with red. Basic heuristic configurations (combinatorial prerepresentations or qualia) are updated, certainly. Green apples turn red. Blood dries to brown. But unlike apples, we can never get up and look at the backside of red, never access the information required to effect the various degrees of dynamic reconfiguration required for cognition.

It’s a question of informatic ‘perspective.’ With qualia we are trapped in our neural armchair. The information available to System 2 deliberation (reflection) is simply too scant (and likely too mismatched to the heuristic demands of environmental cognition) to do anything but rhapsodize or opine. Red is too greased and cognition too frostbitten to do the juggling that knowledge requires. (Where science is in the business of economizing excesses of information, phenomenology, you could say, is in the business of larding its shortage).

But this doesn’t mean that qualia can’t be naturalistically explained. I just offered an outline of a possible explanation above. It just means that qualia are fundamentals of our cognitive system in a manner perhaps similar to the way the laws of physics are fundamentals of the universe. (And it doesn’t mean that an attenuated ‘posthuman’ brain couldn’t be a radical game changer, providing our global configuration with cognitive resources required to get out of our neural armchair and ‘scientifically’ experiment with qualia). The qualification ‘our cognitive system’ above is an important one. What qualia share in common with the laws of physics has to do with encapsulation, which is to say, constraints on information availability. What qualia and the laws of physics share is a certain informatic inscrutability, an epistemological profile rather than an ontological priority. The same way we can’t get out of our neural armchair to see the backside of red, we can’t step outside the universe to see the backside of the Standard Model.*

But the fact is the kind of nonsemantic informatic approach I’m taking here marks a radical departure from the semantic approaches that monopolize the tradition. Peter, in his Conscious Entities critique of IITC linked above, references Frank Jackson’s famous thought experiment of Mary, the colour-deprived neuroscientist. The argument asks us to assume that Mary has learned every physical fact there is to know about red while sequestered in a black and white environment. The question is whether she learns a new fact, namely what red looks like, when she encounters and so experiences red for the very first time. If the answer is yes, as intuition wants to suggest, then it seems that qualia constitute a special kind of nonphysical fact, and that physicalism is accordingly untrue.

As Peter writes,

And this proves that really seeing red involves something over and above the simple business of wavelengths and electrical impulses. Doesn’t it? No, of course not. Mary acquired no new knowledge when she saw the rose – she had simply had a new experience. Focussing too exclusively on the role of the senses as information gatherers can lead us into the error of supposing that to experience a particular sight or sound is merely to gain some information. If that were so, reading the label on a bottle of wine would be as enjoyable as drinking it. Of course experiencing something allows us to generate information about it, but we also experience the reality, which in itself has nothing to do with information.

The reason he passes on IITC is that he thinks qualia obviously involve something over and above ‘mere information,’ what he calls the ‘reality’ of the experience. This is a version of a common complaint you find levelled against Tononi and IITC, the notion that information and experience are obviously two different things – otherwise, as Peter says, “reading the label on a bottle of wine would be as enjoyable as drinking it.” Something else has to be going on.

This is an example of a demand I have only ever seen in qualia debates: the notion that the explanans must somehow be the explanandum. Critics always focus on how strange this demand looks when mapped onto other instances of natural explanation. Should chemical notations explaining grape fermentation get us drunk? Should we reject them because they don’t? But the interesting question, I think, is why this move seems so natural in this particular domain of inquiry. Why, when we have no problem whatsoever with the explanatory power of information regarding physical phenomena, do we suddenly balk when it’s applied to the phenomenal?

In fact, it’s quite understandable given the explanation I’ve given above. Rather than arising as an artifact of the radical (and quite unexplained) disjunct between mechanistic and phenomenal conceptualities as most seem to assume, the problem rather lies with the neural armchair. The thing to realize (and this is the insight that BBT generalizes) is that qualia are as much defined by their informatic simplicity as they are by the information they provide. Once again, qualia are baseline heuristics (prerepresentations): like gensyms, they are defined by the information they lack. Qualia are those elements of conscious experience that lack a backside. Since the province of explanation is to provide information, to show the backside, as it were, there is a strange sense in which we should expect our explanations will jar with our phenomenal intuitions.

Rethinking the Mary argument in nonsemantic informatic terms actually illustrates this situation in rather dramatic fashion. So Mary has, available for global broadcasting or integration (conscious processing), representations (knowledge of the brain as object) leveraged via prerepresentational systems lacking any colour. Suddenly her visual systems process information secondary to light with a wavelength of 650nm. Her correlated neurophysiology lights up. In informatic terms, we have two different sets of channels–one ‘access’ and one ‘phenomenal’–performing a variety of overlapping and interlocking functions matching her organism to its environments. For the very first time in her brain’s history, red is plugged into this system and globally broadcast or integrated, becoming available for conscious experience. She sees ‘red’ for the very first time.

Certainly this constitutes a striking change in her cognitive repertoire, and so, one would think, knowledge of the brain as subject.

From a nonsemantic informatic perspective, the metaphysical implications (the question of whether physicalism is true) are merely symptomatic of what is really interesting. The Mary argument raises an artificial barrier between what are otherwise integral features of cognition, and so pits a fixed prerepresentational channel against a roaming, representational one. Through it, Jackson manages to produce a kind of ‘conceptual asymbolia,’ a way to calve phenomenality from thought in thought, and so throw previously implicit assumptions/intuitions into relief.

The Mary Argument demonstrates something curious about the way information that makes it to global broadcasting or integration (conscious awareness) is ‘divvied up’ (while engaging System 2 deliberation (reflection), at any rate). The primary intuition it seems to turn on, the notion that ‘complete physical knowledge’ is possible absent prerepresentational components such as red, suggests a powerful representational bias, to the point of constituting a kind of informatic neglect. We have already considered how red is dumbmute, like a gensym. We have also considered the way deliberative cognition possesses a curious insensitivity to information outside its representational ambit. In rank intentional terms, you could say we are built to look through. The informatic role of qualia is left mysterious, unintegrated, unbroadcast–almost entirely so. We might as well be chained in Plato’s cave where they are concerned, born into them, unable to vary our perspective relative to them.

The Mary argument, in other words, doesn’t so much reveal the limitations of physicalism as it undermines the semantic assumptions that underwrite it. Of course ‘seeing red’ provides Mary with a hitherto unavailable source of information. Of course this information, if globally broadcast or integrated, will be taken up by her cognitive systems, dynamically reconfiguring ‘K-space,’ the shape of knowledge in her brain. The only real question is one of why we should have so much difficulty squaring these platitudinal observations with our existing understanding of knowledge.

The easy answer is that these semantic assumptions are themselves prerepresentational heuristics, kluges, if you will, selected for their robustness, and matched (in the ecological rationality sense) to our physical-environmental cognitive systems. But this is a different, far more monstrous story.

Ultimately, the thing to see is that Tononi’s Phi is a kind of living version of the Mary Argument. He gives us a brick o’ qualia, a book that fairly throbs with phenomenality, so seating us firmly in our neural armchair. And through the meandering of rhapsody and opinion, he gives our worldly cognitive systems something to fasten onto, information nonsemantically defined, allowing us, at long last, to set aside the old dualisms, and so range from nature to the soul and back again, however many times it takes.

Notes:

* I personally don’t think qualia are the mystery everyone makes them out to be, but this doesn’t mean I think the hard problem is solved – far from it. The question of why we should have these informatically dumbmute qualia at all remains as much a burning mystery as ever.

The ‘Person Fallacy’

by rsbakker

Aphorism of the Day: Am I a man pinned for display, dreaming I am a butterfly pinned for display, or am I a butterfly pinned for display, dreaming that I am a man pinned for display? Am I the dream, the display… the pins?

.

Things have been getting pretty wank around here lately, for which I apologize. If the market is about people ‘voting with their feet,’ then nothing demonstrates the way meaning in contemporary society has become another commodity quite so dramatically as the internet. Wank goes up. Traffic goes down. It really is that simple.

Why do people, in general, hate wank? It makes no sense to them. We have a hardwired allergy to ‘opaque’ communicative contexts. I crinkle my nose like anyone else when I encounter material that mystifies me. I assume that something must be wrong with it instead of with my knowledge-base or meagre powers of comprehension. And go figure. I’m as much my own yardstick for what makes sense as you are of yours.

This is why there is a continual, and quite commercial, pressure to be ‘wank free,’ to make things as easy as possible for as many people as possible. Though I think this can be problematic in a number of ways, I actually think reaching people, particularly those who don’t share your views, is absolutely crucial. I think ‘lowest common denominator’ criticisms of accessibility have far more to do with cultivating the ingroup prestige of wankers than anything. Culture is in the process of fracturing along entirely different lines of self-identification, thanks to the information revolution. And this simply ups the social ante of reaching across those lines.

But, as I keep insisting, there is a new kind of wank in town, one symptomatic of what I call the Semantic Apocalypse, which is to say, the utter divorce of experience, the ‘meaning world’ of cares and projects that characterizes your life, from knowledge, the ‘world world’ as revealed by science. This new wank, I believe anyways, is in the process of scientific legitimation. It is, in other words, slowly being knitted into fact with the accumulation of more scientific information. It is, in short, our future–or something like it.

So I thought it would be worthwhile to give you all an example, with translation, from what is one of the world’s premier journals, Behavioral and Brain Sciences. The following is taken from a response to Peter Carruthers’ “How we know our own minds,” published in 2009. Carruthers’ argument, in a nutshell, is similar to one I’ve made here several times in several ways: that we understand ourselves, by and large, the same way we understand others: by interpreting behaviour. In other words, even though you assume you have direct, introspective access to your beliefs and motives, in point of fact, you are almost as much ‘locked out’ of your own brain as you are the brains of others. As a growing body of experimental and neuropathological evidence seems to suggest, you simply hypothesize what your ‘gut brain’ is doing, rather than accessing information from the source.

What follows is Bryce Huebner and Dan Dennett’s response to Carruthers’ account, interpolated with explanations of my own–as well as a little commentary. I offer it as an example of where our knowledge of the ‘human’ is headed. As I mention in CAUSA SUIcide, we are entering the ‘age of the subhuman,’ the decomposition of the soul into its component parts. I take what follows as clear evidence of this.

Human beings habitually, effortlessly, and for the most part unconsciously represent one another as persons. Adopting this personal stance facilitates representing others as unified entities with (relatively) stable psychological dispositions and (relatively) coherent strategies for practical deliberation. While the personal stance is not necessary for every social interaction, it plays an important role in intuitive judgments about which entities count as objects of moral concern (Dennett 1978, Robbins & Jack 2006); indeed, recent data suggest that when psychological unity and practical coherence are called into question, this often leads to the removal of an entity from our moral community (Bloom 2005, Haslam 2006).

This basically restates Dennett’s long time ‘solution’ to the problems that ‘meaning talk’ poses for science. What he’s saying here, quite literally, is that ‘person’ is simply a convenient way for our brains to make sense of one another, one that is hardwired in. A kind of useful fiction.

Human beings also reflexively represent themselves as persons through a process of self-narration operating over System 1 processes. However, in this context the personal stance has deleterious consequences for the scientific study of the mind. Specifically, the personal stance invites the assumption that every (properly functioning) human being is a person who has access to her own mental states. Admirably, Carruthers goes further than many philosophers in recognizing that the mind is a distributed computational structure; however, things become murky when he turns to the sort of access that we find in the case of metacognition.

‘System 1’ here refers to something called ‘dual process cognition,’ the focus of Daniel Kahneman’s Thinking, Fast and Slow, a book which I’ve mentioned several times here at TPB. System 1 refers to automatic cognition, the kinds of problem-solving your brain does without effort or awareness, and System 2 refers to deliberative cognition, the kinds of effort-requiring problem-solving you do. What they are saying is that the ‘personal stance,’ thinking of ourselves and others as persons, obscures investigation into what is really going on. Why? Because it underwrites the assumption that we are unified and that we have direct access to our ‘mental states.’ They applaud Carruthers for seeing past the first illusion, but question whether he runs afoul of the ‘person fallacy’ in his consideration of ‘metacognition,’ our ability to know our knowing, desiring, and deciding.

At points, Carruthers notes that the “mindreading system has access to perceptual states” (sect. 2, para. 6), and with this in mind he claims that in “virtue of receiving globally broadcast perceptual states as input, the mindreading system should be capable of self-attributing those percepts in an ‘encapsulated’ way, without requiring any other input” (sect. 2, para. 4). Here, Carruthers offers a model of metacognition that relies exclusively on computations carried out by subpersonal mechanisms. However, Carruthers makes it equally clear that “I never have the sort of direct access that my mindreading system has to my own visual images and bodily feelings” (sect. 2, para. 8; emphasis added). Moreover, although “we do have introspective access to some forms of thinking . . . we don’t have such access to any propositional attitudes” (sect. 7, para. 11; emphasis over “we” added). Finally, his discussion of split-brain patients makes it clear that Carruthers thinks that these data “force us to recognize that sometimes people’s access to their own judgments and intentions can be interpretative” (sect. 3.1, para. 3, emphasis in original).

This passage isn’t quite so complicated as it might seem. They are basically juxtaposing Carruthers’ ‘person free’ mapping of information access, which system receives information from which system, with his ‘person-centric’ mapping of information access betrayed by his use of first-person pronouns. The former doesn’t take any account of whether you are conscious of what’s going on or not. The latter does.

Carruthers, thus, relies on two conceptually distinct accounts of cognitive access to metarepresentations. First, he relies on an account of subpersonal access, according to which metacognitive representations are accessed by systems dedicated to belief fixation. Beliefs, in turn, are accessed by systems dedicated to the production of linguistic representations; which are accessed by systems dedicated to syntax, vocalization, sub-vocalization, and so on. Second, he relies on an account of personal access, according to which I have access to the metacognitive representations that allow me to interpret myself and form person-level beliefs about my own mental states.

This passage simply recapitulates and clarifies the former. Carruthers is mixing up his maps, swapping between maps where information is traded between independent city-states, and maps where information is traded between independent city-states and the Empire of the person.

The former view that treats the mind as a distributed computational system with no central controller seems to be integral to Carruthers’ (2009) current thinking about cognitive architecture. However, this insight seems not to have permeated Carruthers’ thinking about metacognition. Unless the “I” can be laundered from this otherwise promising account of “self-knowledge,” the assumption of personal access threatens to require an irreducible Cartesian res cogitans with access to computations carried out at the subpersonal level. With these considerations in mind, we offer what we see as a friendly suggestion: translate all the talk of personal access into subpersonal terms.

Carruthers recognizes that the person is a fiction, something that our brains project onto one another, but because he lapses into the person stance in his consideration of how the brain knows itself directly (metacognition), his account risks assuming the reality of the person, a ‘Cartesian res cogitans,’ or ‘thinking substance.’ To avoid this, they recommend he clean up his theory and get rid of the person altogether.

Of course, the failure to translate personal access into the idiom of subpersonal computations may be the result of the relatively rough sketch of the subpersonal mechanisms that are responsible for metarepresentation. No doubt, a complete account of metarepresentation would require an appeal to a more intricate set of mechanisms to explain how subpersonal mechanisms can construct “the self” that is represented by the personal stance (Metzinger 2004). As Carruthers notes, the mindreading system must contain a model of what minds are and of “the access that agents have to their own mental states” (sect. 3.2, para. 2). He also notes that the mindreading system is likely to treat minds as having direct introspective access to themselves, despite the fact that the mode of access is inherently interpretative (sect. 3.2). However, merely adding these details to the model is insufficient for avoiding the presumption that there must (“also”) be first-person access to the outputs of metacognition. After all, even with a complete account of the subpersonal systems responsible for the production and comprehension of linguistic utterances, the fixation and updating of beliefs, and the construction and consumption of metarepresentations, it may still seem perfectly natural to ask, “But how do I know my own mental states?”

They suspect that Carruthers lapses into the person fallacy because he lacks an account of the subpersonal mechanisms that generate ‘metarepresentations’–representations of the brain’s representations and representational capacities–which in turn require an account of the subpersonal mechanisms that generate the self, such as those postulated by Thomas Metzinger in Being No One. Short of this more thorough (and entirely subpersonal) account, the question of the Empire (person) and what crosses its borders becomes very difficult to avoid. Again, it’s important to remember that the ‘person’ is an attribution, not a thing, not even an illusory thing. There just is no Empire according to Huebner and Dennett, so including imperial border talk in any scientific account of cognition is simply going to generate confusion.

The banality that I have access to my own thoughts is a consequence of adopting the personal stance. However, at the subpersonal level it is possible to explain how various subsystems access representations without requiring an appeal to a centralized res cogitans. The key insight is that a module “dumbly, obsessively converts thoughts into linguistic form and vice versa” (Jackendoff 1996). Schematically, a conceptualized thought triggers the production of a linguistic representation that approximates the content of that thought, yielding a reflexive blurt. Such linguistic blurts are protospeech acts, issuing subpersonally, not yet from or by the person, and they are either sent to exogenous broadcast systems (where they become the raw material for personal speech acts), or are endogenously broadcast to language comprehension systems which feed directly to the mindreading system. Here, blurts are tested to see whether they should be uttered overtly, as the mindreading system accesses the content of the blurt and reflexively generates a belief that approximates the content of that blurt. Systems dedicated to belief fixation are then recruited, beliefs are updated, the blurt is accepted or rejected, and the process repeats. Proto-linguistic blurts, thus, dress System 1 outputs in mentalistic clothes, facilitating system-level metacognition.

I absolutely love this first line, if only because of the ease with which it breezes past the radical counterintuitivity of what is being discussed. The theoretical utility of the ‘personal stance’ is that it allows them to embrace the sum of our intuitive discourse regarding persons by simply appending the operator: ‘from the person stance.’ The same way any fortune-cookie fortune can be turned into a joke by adding ‘in bed’ to the end, any ‘everyday’ claim can be ‘affirmed’ using the person stance. “Yes-yes, of course you have access to your own thoughts… that is, when considered from the personal stance.”

The jargon-laden account that follows simply outlines a mechanistic model of what a subpersonal account of the brain knowing itself might look like, one involving the shuttling of information to and fro between various hypothesized devices performing various hypothesized functions that culminate in what is called metacognition, without any need of any preexisting ‘inner inspector’–or notion of ‘introspection.’
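
At the risk of more wank, here is a cartoon of that loop in Python. Every name and acceptance criterion is hypothetical, my sketch rather than Huebner and Dennett’s model; the only point is that ‘self-knowledge’ falls out of the plumbing without any inner inspector being consulted.

```python
# A cartoon of the subpersonal 'blurt' loop, illustrative only: every
# function name and acceptance criterion here is hypothetical.

def linguistic_module(thought: str) -> str:
    # "Dumbly, obsessively converts thoughts into linguistic form."
    return f"I think that {thought}"

def mindreading_system(blurt: str) -> str:
    # Endogenously broadcast blurts are comprehended; a belief
    # approximating the blurt's content is reflexively generated.
    return blurt.removeprefix("I think that ")

def belief_fixation(candidate: str, beliefs: set) -> bool:
    # Toy criterion: accept any candidate not already fixed.
    return candidate not in beliefs

beliefs: set = set()
for thought in ("the apple is red", "the apple is red", "red resists cognition"):
    blurt = linguistic_module(thought)     # subpersonal protospeech act
    candidate = mindreading_system(blurt)  # no inner inspector consulted
    if belief_fixation(candidate, beliefs):
        beliefs.add(candidate)             # beliefs updated; loop repeats

print(beliefs)  # 'metacognition' without any pre-existing 'I'
```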

Carruthers (2009) acknowledges that System 2 thinking is realized in the cyclical activity of reflexive System 1 subroutines. This allows for a model of metacognition that makes no appeal to a pre-existing I, a far more plausible account of self-knowledge in the absence of a res cogitans.

The point, ultimately, is that the inner inspector is as much a product as what it supposedly inspects. There is no imperial consumer, no person. This requires seeing that System 2 thinking, or deliberative cognition, is itself a recursive wrinkle in the way automatic System 1 functions are executed, a series of outputs that ‘you,’ thanks to certain, dedicated System 1 mechanisms, compulsively mistake for you.

Dizzy yet?

I’m sure that even my explication proved hopelessly inaccessible to some of you, and for that, I apologize. At the very least I hope that the gist got through: for a great deal of cognitive scientific research, you, the dude eating Fritos in front of the monitor, are a kind of mirage that must be seen through if science is to uncover the facts of what you really are. I imagine more than a few feel a sneer crawling across their face, thinking this is a perfect example of wank at its worst: a bunch of pompous nonsense leading a bunch of pompous eggheads down yet another pompous blind alley. But I assure you this is not the case. One of the things that amazes me surfing the web in pursuit of these issues is the degree to which this research is being embraced by business. There’s neuromarketing, which takes all this information as actionable, but there’s economics as well. These guys are reverse-engineering the consumer, not to mention the voter.

And knowledge, as ever, is power, whether it flies in the face of experience or not.

Welcome to the Semantic Apocalypse.