
Zahavi, Dennett, and the End of Being*

by rsbakker

 

We are led back to these perceptions in all questions regarding origins, but they themselves exclude any further question as to origin. It is clear that the much-talked-of certainty of internal perception, the evidence of the cogito, would lose all meaning and significance if we excluded temporal extension from the sphere of self-evidence and true givenness.

–Husserl, The Phenomenology of Internal Time-Consciousness

So recall this list, marvel at how it continues to grow, and remember that the catalogue is just getting started. The real tsunami of information is rumbling just over the horizon. And lest you think your training or education renders you exempt, pause and consider the latest of Eric Schwitzgebel’s empirical investigations of how susceptible professional philosophers are to the various biases and effects on that list. I ask you to consider what we know regarding human cognitive shortcomings to put you in a skeptical frame of mind. I want you in a skeptical frame of mind because of a paper by Dan Zahavi, the Director of the Center for Subjectivity Research at the University of Copenhagen, that came up on my academia.edu feed the other day.

Zahavi has always struck me as unusual as far as ‘continental’ philosophers go, at once a Husserlian ‘purist’ and determined to reach out, to “make phenomenology a powerful and systematically convincing voice in contemporary philosophical discussion” (“Husserl, self, and others: an interview with Dan Zahavi”). I applaud him for this, for braving genuine criticism, genuine scientific research, rather than allowing narrow ingroup interpretative squabbles to swallow him whole. In “Killing the straw man: Dennett and phenomenology,” he undertakes a survey of Dennett’s many comments regarding phenomenology, and a critical evaluation of his alternative to phenomenology, heterophenomenology. Since I happen to be a former phenomenologist, I’ve had occasion to argue both sides of the fence. I spent a good portion of my late twenties and early thirties defending my phenomenological commitments from my skeptical, analytically inclined friends using precisely the arguments and assumptions that Zahavi deploys against Dennett. And I’ve spent the decade following arguing a position even more radically eliminativistic than Dennett’s. I’ve walked a mile in both shoes, I suppose. I’ve gone from agreeing with pretty much everything Zahavi argues in this piece (with a handful of deconstructive caveats) to agreeing with almost nothing.

So what I would like to do is use Zahavi’s position and critique as a foil to explain how and why I’ve abandoned the continental alliance and joined the scientific empire. I gave up on what I call the Apples-and-Oranges Argument because I realized there was no reliable, a priori way to discursively circumscribe domains, to say science can only go so far and no further. I gave up on what I call the Ontological Pre-emption Argument because I realized arguing ‘conditions of possibility,’ far from rationally securing my discourse, simply multiplied my epistemic liabilities. Ultimately, I found myself stranded with what I call the Abductive Argument, an argument based on the putative reality of the consensual structures that seem to genuinely anchor phenomenological disputation. Phenomenology not only offered the best way to describe that structure, it offered the only way, or so I thought. Since Zahavi provides us with examples of all three arguments in the course of castigating Dennett, and since Dennett occupies a position similar to my own, “Killing the straw man” affords an excellent opportunity to demonstrate how phenomenology fares when considered in terms of brain science and heuristic neglect.

As the title of the paper suggests, Zahavi thinks Dennett never moves past critiquing a caricature of phenomenology. For Dennett, Zahavi claims, phenomenology is merely a variant of Introspectionism, and thus suffers all the liabilities that caused Introspectionism to die as a branch of empirical psychology almost a century ago now. To rebut this conflation, Zahavi turns to that old stalwart of continental cognitive self-respect, the ‘Apples-and-Oranges Argument’:

To start with, it is important to realize that classical phenomenology is not just another name for a kind of psychological self-observation; rather it must be appreciated as a special form of transcendental philosophy that seeks to reflect on the conditions of possibility of experience and cognition. Phenomenology is a philosophical enterprise; it is not an empirical discipline. This doesn’t rule out, of course, that its analyses might have ramifications for and be of pertinence to an empirical study of consciousness, but this is not its primary aim.

By conflating phenomenology and introspective psychology, Dennett is conflating introspection with the phenomenological attitude, the theoretically attuned orientation to experience that allows the transcendental structure of experience to be interpreted. Titchener’s psychological structuralism, for instance, was invested in empirical investigations into the structure and dynamics of the conscious mind. As descriptive psychology, it could not, by definition, disclose what Zahavi terms the ‘nonpsychological dimension of consciousness,’ those structures that make experience possible.

What makes phenomenology different, in other words, is also what makes phenomenology better. And so we find the grounds for the Ontological Pre-emption Argument in the Apples-and-Oranges Argument:

Phenomenology is not concerned with establishing what a given individual might currently be experiencing. Phenomenology is not interested in qualia in the sense of purely individual data that are incorrigible, ineffable, and incomparable. Phenomenology is not interested in psychological processes (in contrast to behavioral processes or physical processes). Phenomenology is interested in the very dimension of givenness or appearance and seeks to explore its essential structures and conditions of possibility. Such an investigation of the field of presence is beyond any divide between psychical interiority and physical exteriority, since it is an investigation of the dimension in which any object—be it external or internal—manifests itself. Phenomenology aims to disclose structures that are intersubjectively accessible, and its analyses are consequently open for corrections and control by any (phenomenologically tuned) subject.

The strategy is as old as phenomenology itself. First you extricate phenomenology from the bailiwick of the sciences, then you position phenomenology prior to the sciences as the discipline responsible for cognizing the conditions of possibility of science. First you argue that it is fundamentally different, and then you argue that this difference is fundamental.

Of course, Zahavi omits any consideration of the ways Dennett could respond to either of these claims. (This is one among several clues to the institutionally defensive nature of the paper, the fact that it is pitched more to those seeking theoretical reaffirmation than to institutional outsiders—let alone lapsarians.) Dennett need only ask Zahavi why anyone should believe that his domain possesses ontological priority over the myriad domains of science. The fact that Zahavi can pluck certain concepts from Dennett’s discourse, drop them in his interpretative machinery, and derive results friendly to that machinery should come as no surprise. The question pertains to the cognitive legitimacy of the machinery: any answer presuming that legitimacy simply begs the question. Does Zahavi not see this?

Even if we granted the possible existence of ‘conditions of possibility,’ the most Zahavi or anyone else could do is intuit them from the conditioned, which just happen to be first-person phenomena. So if generalizing from first-person phenomena proved impossible because of third-person inaccessibility—because genuine first-person data were simply too difficult to come by—why should we think those phenomena can nevertheless anchor a priori claims once phenomenologically construed? The fact is, phenomenology suffers all the same problems of conceptual controversy and theoretical underdetermination as structuralist psychology. Zahavi is actually quite right: phenomenology is most certainly not a science! There’s no need for him to stamp his feet and declare, “Oranges!” Everybody already knows.

The question is why anyone should take his Oranges seriously as a cognitive enterprise. Why should anyone believe his domain comes first? What makes phenomenologically disclosed structures ontologically prior to or constitutive of conscious experience? Blood flow, neural function—the life or death priority of these things can be handily demonstrated with a coat-hanger! Claims like Zahavi’s regarding the nature of some ontologically constitutive beyond, on the other hand, abound in philosophy. Certainly powerful assurances are needed to take them seriously, especially when we reject them outright for good reason elsewhere. Why shouldn’t we just side with the folk, chalk phenomenology up as just another hothouse excess of higher education? Because you stack your guesswork on the basis of your guesswork in a way you’re guessing is right?

Seriously?

As I learned, neither the Apples-and-Oranges nor the Ontological Pre-emption Argument draws much water outside the company of the likeminded. I felt their force, felt reaffirmed the way many phenomenologists, I’m sure, feel reaffirmed reading Zahavi’s exposition now. But every time I laid them on nonphenomenologists I found myself fenced in by questions that were far too easy to ask—and far easier to avoid than answer.

So I switched up my tactics. When my old grad school poker buddies started hacking on Heidegger, making fun of the neologisms, bitching about the lack of consensus, I would say something very similar to what Zahavi claims above—even more powerful, I think, since it concretizes his claims regarding structure and intersubjectivity. Look, I would tell them, once you comport yourself properly (with a tremendous amount of specialized training, bear in mind), you can actually anticipate the kinds of things Husserl or Heidegger or Merleau-Ponty or Sartre might say on this or that subject. Something more than introspective whimsy is being tracked—surely! And if that ‘something more’ isn’t the transcendental structure of experience, what could it be? Little did I know how critical this shift in the way I saw the dialectical landscape would prove.

Basically I had retreated to the Abductive Argument—the only real argument, I now think, that Zahavi or any phenomenologist ultimately has outside the company of their confreres. A priori arguments for phenomenological apriority simply have no traction unless you already buy into some heavily theorized account of the a priori. No one’s going to find the distinction between introspectionism and phenomenology convincing so long as first-person phenomena remain the evidential foundation of both. If empirical psychology couldn’t generalize from phenomena, then why should we think phenomenology can reason to their origins, particularly given the way it so discursively resembles introspectionism? Why should a phenomenological attitude adjustment make any difference at all?

One can actually see Zahavi shift to abductive warrant in the last block quote above, in the way he appeals to the intersubjectively accessible nature of the ‘structures’ comprising the domain of the phenomenological attitude. I suspect this is why Zahavi is so keen on the eliminativist Dennett (whom I generally agree with) at the expense of the intentionalist Dennett (whom I generally disagree with)—so keen on setting up his own straw man, in effect. The more he can accuse Dennett of eliminating various verities of experience, the spicier the abductive stew becomes. If phenomenology is bunk, then why does it exhibit the systematicity that it does? How else could we make sense of the genuine discursivity that (despite all the divergent interpretations) unquestionably animates the field? If phenomenological reflection is so puny, so weak, then how has any kind of consensus arisen at all?

The easy reply, of course, is to argue that the systematicity evinced by phenomenology is no different than the systematicity evinced by intelligent design, psychoanalysis, climate-change skepticism, or what have you. One might claim that rational systematicity, the kind of ‘intersubjectivity’ that Zahavi evokes several times in “Killing the straw man,” is actually cheap as dirt. Why else would we find ourselves so convincing, no matter what we happen to believe? Thus the importance of genuine first-person data: ‘structure’ or no ‘structure,’ short of empirical evidence, we quite simply have no way of arbitrating between theories, and thus no way of moving forward. Think of the list of our cognitive shortcomings! We humans have an ingrown genius for duping both ourselves and one another given the mere appearance of systematicity.

Now abductive arguments for intentionalism more generally have the advantage of taking intentional phenomena broadly construed as their domain. So in his Sources of Intentionality, for instance, Uriah Kriegel argues ‘observational contact with the intentional structure of experience’ best explains our understanding of intentionality. Given the general consensus that intentional phenomena are real, this argument has real dialectical traction. You can disagree with Kriegel, but until you provide a better explanation, his remains the only game in town.

In contrast to this general, Intentional Abductive Argument, the Phenomenological Abductive Argument takes intentional phenomena peculiar to the phenomenological attitude as its anchoring explananda. Zahavi, recall, accuses Dennett of conflating phenomenology with introspectionism because of a faulty understanding of the phenomenological attitude. As a result he confuses the ontic with the ontological, ‘a mere sector of being’ with the problem of Being as such. And you know what? From the phenomenological attitude, his criticism is entirely on the mark. Zahavi accuses Dennett of a number of ontological sins that he simply does not commit, even given the phenomenological attitude, but this accusation, that Dennett has run afoul of the ‘metaphysics of presence,’ is entirely correct—once again, from the phenomenological attitude.

Zahavi’s whole case hangs on the deliverances of the phenomenological attitude. Refuse him this, and he quite simply has no case at all. This was why, back in my grad school days, I would always urge my buddies to read phenomenology with an open mind, to understand it on its own terms. ‘I’m not hallucinating! The structures are there! You just have to look with the right eyes!’

Of course, no one was convinced. I quickly came to realize that phenomenologists occupied a position analogous to that of born-again Christians, party to a kind of undeniable, self-validating experience. Once you grasp the ontological difference, it truly seems like there’s no going back. The problem is that no matter how much you argue, no one who has yet to grasp the phenomenological attitude can possibly credit your claims. You’re talking Jesus, son of God, and they think you’re referring to Heyzoos down at the 7-11.

To be clear, I’m not suggesting that phenomenology is religious, only that it shares this dialectical feature with religious discourses. The phenomenological attitude, like the evangelical attitude, requires what might be called a ‘buy-in moment.’ The only way to truly ‘get it’ is to believe. The only way to believe is to open your heart to Husserl, or Heidegger, or in this case, Zahavi. “Killing the straw man” is jam-packed with such inducements, elegant thumbnail recapitulations of various phenomenological interpretations made by various phenomenological giants over the years. All of these recapitulations beg the question against Dennett, obviously so, but they’re not dialectically toothless or merely rhetorical for it. By giving us examples of phenomenological understanding, Zahavi is demonstrating possibilities belonging to a different way of looking at the world, laying bare the very structure that organizes phenomenology into genuinely critical, consensus-driven discourse.

The structure that phenomenology best explains. For anyone who has spent long rainy afternoons poring through the phenomenological canon, alternately amused and amazed by this or that interpretation of lived life, the notion that phenomenology is ‘mere bunk’ can only sound like ignorance. If the structures revealed by the phenomenological attitude aren’t ontological, then what else could they be?

This is what I propose to show: a radically different way of conceiving the ‘structures’ that motivate phenomenology. I happen to be the global eliminativist that Zahavi mistakenly accuses Dennett of being, and I also happen to have a fairly intimate understanding of the phenomenological attitude. I came by my eliminativism in the course of discovering an entirely new way to describe the structures revealed by the phenomenological attitude. The Transcendental Interpretation is no longer the only game in town.

The thing is, every phenomenologist, whether they know it or not, is actually part of a vast, informal heterophenomenological experiment. The very systematicity of conscious access reports made regarding phenomenality via the phenomenological attitude is what makes them so interesting. Why do they orbit around the same sets of structures the way they do? Why do they lend themselves to reasoned argumentation? Zahavi wants you to think that his answer—because they track some kind of transcendental reality—is the only game in town, and thus the clear inference to the best explanation.

But this is simply not true.

So what alternatives are there? What kind of alternate interpretation could we give to what phenomenology contends is a transcendental structure?

In his excellent Posthuman Life, David Roden critiques transcendental phenomenology in terms of what he calls ‘dark phenomenology.’ We now know as a matter of empirical fact that our capacity to discriminate colours presented simultaneously outruns our capacity to discriminate sequentially, and that our memory severely constrains the determinacy of our concepts. This gap between the capacity to conceptualize and the capacity to discriminate means that a good deal of phenomenology is conceptually dark. The argument, as I see it, runs something like: 1) There is more than meets the phenomenological eye (dark phenomenology). 2) This ‘more’ is constitutive of what meets the phenomenological eye. 3) This ‘more’ is ontic. 4) Therefore the deliverances of the phenomenological eye cannot be ontological. The phenomenologist, he is arguing, has only a blinkered view. The very act of conceptualizing experience, no matter how angelic your attitude, covers experience over. We know this for a fact!

My guess is that Zahavi would concede (1) and (2) while vigorously denying (3), the claim that the content of dark phenomenology is ontic. He can do this simply by arguing that ‘dark phenomenology’ provides, at best, another way of delimiting horizons. After all, the drastic difference in our simultaneous and sequential discriminatory powers actually makes phenomenological sense: the once-present source impression evaporates into the now-present ‘reverberations,’ as Husserl might call them, fading along the dim gradient of retentional consciousness. It is a question entirely internal to phenomenology as to just where phenomenological interpretation lies on this ‘continuum of reverberations,’ and as it turns out, the problem of theoretically incorporating the absent-yet-constitutive backgrounds of phenomena is as old as phenomenology itself. In fact, the concept of horizons, the subjectively variable limits that circumscribe all phenomena, is an essential component of the phenomenological attitude. The world has meaning–everything we encounter resounds with the significance of past encounters, not to mention future plans. ‘Horizon talk’ simply allows us to make these constitutive backgrounds theoretically explicit. Even while implicit, they belong to the phenomena themselves no less; they simply belong as implicit. Consciousness is as much non-thematic consciousness as it is thematic consciousness. Zahavi could say the discovery that we cannot discriminate nearly as well sequentially as we can simultaneously simply recapitulates this old phenomenological insight.

Horizons, as it turns out, also provide a way to understand Zahavi’s criticism of the heterophenomenology Dennett proposes we use in place of phenomenology. The ontological difference is itself the keystone of a larger horizon argument involving what Heidegger called the ‘metaphysics of presence,’ how forgetting the horizon of Being, the fundamental background allowing beings to appear as beings, leads to investigations of Being under the auspices of beings, or as something ‘objectively present.’ More basic horizons of use, horizons of care, are all covered over as a result. And when horizons are overlooked—when they are ignored or, worse yet, entirely neglected—we run afoul of conceptual confusions. In this sense, it is the natural attitude of science that is most obviously culpable, considering beings, not against their horizons of use or care, but against the artificially contrived, parochial, metaphysically naive, horizon of natural knowledge. As Zahavi writes, “the one-sided focus of science on what is available from a third person perspective is both naive and dishonest, since the scientific practice constantly presupposes the scientist’s first-personal and pre-scientific experience of the world.”

As an ontic discourse, natural science can only examine beings from within the parochial horizon of objective presence. Any attempt to drag phenomenology into the natural scientific purview, therefore, will necessarily cover over the very horizon that is phenomenology’s domain. This is what I always considered a ‘basic truth’ of the phenomenological attitude. It certainly seems to be the primary dialectical defence mechanism: to entertain the phenomenological attitude is to recognize the axiomatic priority of the phenomenological attitude. If the intuitive obviousness of this escapes you, then the phenomenological attitude quite simply escapes you.

Dennett, in other words, is guilty of a colossal oversight. He is quite simply forgetting that lived life is the condition of possibility of science. “Dennett’s heterophenomenology,” Zahavi writes, “must be criticized not only for simply presupposing the availability of the third-person perspective without reflecting on and articulating its conditions of possibility, but also for failing to realize to what extent its own endeavour tacitly presupposes an intact first-person perspective.”

Dennett’s discursive sin, in other words, is the sin of neglect. He is quite literally blind to the ontological assumptions—the deep first person facts—that underwrite his empirical claims, his third person observations. As a result, none of these facts condition his discourse the way they should: in Heidegger’s idiom, he is doomed to interpret Being in terms of beings, to repeat the metaphysics of presence.

The interesting thing to note here, however, is that Roden is likewise accusing Zahavi of neglect. Unless phenomenologists accord themselves supernatural powers, it seems hard to believe that they are not every bit as conceptually blind to the full content of phenomenal experience as the rest of us are. The phenomenologist, in other words, must acknowledge the bare fact that they suffer neglect. And if they acknowledge the bare fact of neglect, then, given the role neglect plays in their own critique of scientism, they have to acknowledge the bare possibility that they, like Dennett and heterophenomenology, find themselves occupying a view whose coherence requires ignorance—or to use Zahavi’s preferred term, naivete—in a likewise theoretically pernicious way.

The question now becomes one of whether the phenomenological concept of horizons can actually allay this worry. The answer here has to be no. Why? Simply because the phenomenologist cannot deploy horizons to rationally immunize phenomenology against neglect without assuming that phenomenology is already so immunized. Or put differently: even if neglect were in fact the case, even if Zahavi’s phenomenology, like Dennett’s heterophenomenology, only made sense given a certain kind of neglect, we should still expect ‘horizons’ to continue playing a conceptually constitutive role—to contribute to phenomenology the way they always have. The concept cannot discriminate between the two scenarios.

Horizons cannot address the problem of neglect. The phenomenologist, then, is stranded with the bare possibility that their practice only appears to be coherent or cognitive. If neglect can cause such problems for Dennett, then it’s at least possible that it can do so for Zahavi. And how else could it be, given that phenomenology was not handed down to Moses by God, but rather elaborated by humans suffering all the cognitive foibles on the list linked above? In all our endeavours, it is always possible that our blindspots get the better of us. We can’t say anything about specific ‘unknown unknowns’ period, let alone anything regarding their relevance! Arguing that phenomenology constitutes a solitary exception to this amounts to withdrawing from the possibility of rational discourse altogether—becoming a secular religion, in effect.

So it has to be possible that Zahavi’s phenomenology runs afoul of theoretically pernicious neglect the way he accuses Dennett’s heterophenomenology of running afoul of theoretically pernicious neglect.

Fair is fair.

The question now becomes one of whether phenomenology is suffering from theoretically pernicious neglect. Given that magic mushrooms fuck up phenomenologists as much as the rest of us, it seems assured that the capacities involved in cognizing their transcendental domain pertain to the biological in some fundamental respect. Phenomenologists suffer strokes, just like the rest of us. Their neurobiological capacity to take the ‘phenomenological attitude’ can be stripped from them in a tragic instant.

But if the phenomenological attitude can be neurobiologically taken away, it can also be given back, and here’s the thing: it can be given back in attenuated forms, tweaked in innumerable different ways, fuzzier here, more precise there, truncated, snipped, or twisted.

This means there are myriad levels of phenomenological penetration, which is to say, varying degrees of phenomenological neglect. Insofar as we find ourselves on a biological continuum with other species, this should come as no surprise. Biologically speaking, we do not stand on the roof of the world, so it makes sense to suppose that the same is true of our phenomenology.

So bearing this all in mind, here’s an empirical alternative to what I termed the Transcendental Interpretation above.

On the Global Neuronal Workspace Theory, consciousness can be seen as a serial, broadcast conduit between a vast array of nonconscious parallel systems. Networks continually compete at the threshold of conscious ‘ignition,’ as it’s called; competition between nonconscious processes results in the selection of some information for broadcast. Stanislas Dehaene—using heterophenomenology exactly as Dennett advocates—claims on the basis of what is now extensive experimentation that consciousness, in addition to broadcasting information, also stabilizes it, slows it down (Consciousness and the Brain). Only information that is so broadcast can be accessed for verbal report. From this it follows that the ‘phenomenological attitude’ can only access information broadcast for verbal report, or conversely, that it neglects all information not selected for stabilization and broadcast.
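For readers who find architectures easier to grasp in code, here is a deliberately crude toy sketch of the winner-take-all dynamic the theory describes. To be clear, this is my own illustration, not Dehaene’s model: the module names, the random activations, and the ignition threshold are all hypothetical stand-ins.

```python
import random

# Toy sketch of global-workspace-style 'ignition' (illustration only).
# Many parallel nonconscious processors compete each cycle; at most one
# winner crosses threshold, is stabilized, and is broadcast to all.

IGNITION_THRESHOLD = 0.7  # arbitrary stand-in value

def step(processors):
    """One cycle: massively parallel competition, serial broadcast."""
    candidates = {name: random.random() for name in processors}
    winner, strength = max(candidates.items(), key=lambda kv: kv[1])
    if strength >= IGNITION_THRESHOLD:
        return winner, strength  # this content alone becomes reportable
    return None, None  # no ignition: all processing stays nonconscious

processors = ["vision", "audition", "memory", "interoception", "language"]
for t in range(5):
    content, strength = step(processors)
    if content is None:
        print(f"t={t}: no ignition, nothing available for report")
    else:
        print(f"t={t}: '{content}' ignites ({strength:.2f}), is broadcast")
```

The asymmetry is the point: verbal report only ever sees the winners, never the competition that produced them.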

Now the question becomes one of whether that information is all the information the phenomenologist, given their years of specialized training, needs to draw the conclusions they do regarding the ontological structure of experience. And the more one looks at the situation through a natural lens, the more difficult it becomes to see how this possibly could be the case. The GNW model sketched above actually maps quite well onto the dual-process cognitive models that now dominate the field in cognitive science. System 1 cognition applies to the nonconscious, massively parallel processing that both feeds, and feeds from, the information selected for stabilization and broadcast. System 2 cognition applies to the deliberative, conscious problem-solving that stabilization and broadcast somehow makes possible.

Now the phenomenological attitude, Zahavi claims, somehow enables deliberative cognition of the transcendental structure of experience. The phenomenological attitude, then, somehow involves a System 2 attempt to solve for consciousness in a particular way. It constitutes a paradigmatic example of deliberative, theoretical metacognition, something we are also learning more and more about on a daily basis. (The temptation here will be to beg the question and ‘go ontological,’ and then accuse me of begging the question against phenomenology, but insofar as neuropathologies have any kind of bearing on the ‘phenomenological attitude,’ insofar as phenomenologists are human, giving in to this temptation would be tendentious, more a dialectical dodge than an honest attempt to confront a real problem.)

The question of whether Zahavi has access to what he needs, then, calves into two related issues: the issue of what kind of information is available, and the issue of what kind of metacognitive resources are available.

On the metacognitive capacity front, the picture arising out of cognitive psychology and neuroscience is anything but flattering. As Fletcher and Carruthers have recently noted:

What the data show is that a disposition to reflect on one’s reasoning is highly contingent on features of individual personality, and that the control of reflective reasoning is heavily dependent on learning, and especially on explicit training in norms and procedures for reasoning. In addition, people exhibit widely varied abilities to manage their own decision-making, employing a range of idiosyncratic techniques. These data count powerfully against the claim that humans possess anything resembling a system designed for reflecting on their own reasoning and decision-making. Instead, they support a view of meta-reasoning abilities as a diverse hodge-podge of self-management strategies acquired through individual and cultural learning, which co-opt whatever cognitive resources are available to serve monitoring-and-control functions. (“Metacognition and Reasoning”)

We need to keep in mind that the transcendental deliverances of the phenomenological attitude are somehow the product of numerous exaptations of radically heuristic systems. As the most complicated system in its environment, and as the one pocket of its environment that it cannot physically explore, the brain can only cognize its own processes in disparate and radically heuristic ways. In terms of metacognitive capacity, then, we have reason to doubt the reliability of any form of reflection.

On the information front, we’ve already seen how much information slips between the conceptual cracks with Roden’s account of dark phenomenology. Now with the GNW model, we can actually see why this has to be the case. Consciousness provides a ‘workspace’ where a little information is plucked from many producers and made available to many consumers. The very process of selection, stabilization, and broadcasting, in other words, constitutes a radical bottleneck on the information available for deliberative metacognition. This actually allows us to make some rather striking predictions regarding the kinds of difficulties such a system might face attempting to cognize itself.

For one, we should expect such a system to suffer profound source neglect. Since all the neurobiological machinery preceding selection, stabilization, and broadcast is nonconscious, we should expect any metacognitive attempt to solve for the origins of consciousness to end in dismal failure. In fact, given that the larger cognitive system cognizes environments via predictive error minimization (I heartily recommend Hohwy’s The Predictive Mind), which is to say, via the ability to anticipate what follows from what, we could suppose it would need some radically different means of cognizing itself, one somehow compensating for, or otherwise accommodating, source neglect.
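If ‘predictive error minimization’ sounds abstract, a minimal sketch shows the bare logic of anticipating what follows from what. Again, this is my own toy illustration, not Hohwy’s model; the environment and the learning rate are invented for the purpose.

```python
# Toy prediction-error minimization: keep a guess about what follows
# each observed state, and nudge the guess toward what actually follows.

def learn_transitions(sequence, lr=0.1):
    prediction = {}  # state -> predicted successor value
    for prev, nxt in zip(sequence, sequence[1:]):
        guess = prediction.get(prev, 0.0)
        error = nxt - guess                    # prediction error
        prediction[prev] = guess + lr * error  # shrink it, step by step
    return prediction

env = [1, 2, 1, 2] * 50  # a boringly regular environment
print(learn_transitions(env))  # guesses converge: 1 -> ~2, 2 -> ~1

# Note what the learner never receives: error signals about its own
# updating machinery. It anticipates its environment, never itself.
```

A system whose entire epistemic traction consists in environmental error signals has no analogous feedback channel onto its own sources, which is just source neglect stated computationally.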

For another, we should expect such a system to suffer profound scope neglect. Once again, since all the neurobiological machinery bracketing the selection, stabilization, and broadcast is nonconscious, we should expect any metacognitive attempt to solve for the limits of consciousness to end in failure. Since the larger cognitive system functions via active environmental demarcations, consciousness would jam the gears, would be an ‘object without edges,’ if anything coherent at all.

We should expect to be baffled by our immediate sources and by our immediate scope, not because they comprise our transcendental limitations, but because such blind-spots are an inevitable by-product of the radical neurophysiological limits on our brain’s ability to cognize its own structure and dynamics. Thus Blind Brain Theory, the empirical thesis that we’re natural in such a way that we cannot cognize ourselves as natural, and so cognize ourselves otherwise. We’re a standalone solution-monger, one so astronomically complicated that we at best enjoy an ad hoc, heuristic relation to ourselves. The self-same fundamental first-person structure that phenomenology interprets transcendentally—as ontologically positive, naturalistically inscrutable, and inexplicably efficacious—Blind Brain Theory explains in terms of neglect, explains away, in effect. It provides a radical alternative to the Transcendental Interpretation discussed above—a Blind Brain interpretation. Insofar as Zahavi’s ‘phenomenological attitude’ amounts to anything at all, it can be seen as a radically blinkered, ‘inside view’ of source and scope neglect. Phenomenology, accordingly, can be diagnosed as the systematic adumbration of a wide variety of metacognitive illusions, all turning in predictable ways on neglect.

As a onetime phenomenologist I can appreciate how preposterous this must all sound, but I ask you to consider, as honestly as that list I linked above allows, the following passage:

This flow is something we speak of in conformity with what is constituted, but it is not ‘something in objective time.’ It is absolute subjectivity and has the absolute properties of something to be designated metaphorically as ‘flow’; of something that originates in a point of actuality, in a primal source-point and a continuity of moments of reverberation. For all this, we lack names. Husserl, Phenomenology of Internal Time-Consciousness, 79.

Now I think this sounds like a verbal report generated by a metacognitive system suffering source and scope neglect yet grappling with questions of source and scope all the same. Blind to our source blindness, our source appears to stand outside the order of the conditioned, to be ‘absolute’ or ‘transcendental.’ Blind to our scope blindness, this source seems to be a kind of ‘object without edges,’ more boundless container than content. And so a concatenation of absolute ignorances drives a powerful intuition of absolute or transcendental subjectivity at the very limit of what can be reported. Thus domesticated, further intuitive inferences abound, and the sourceless, scopeless arena of the phenomenological attitude is born, and with it, the famed ontological difference, the principled distinction of the problem of being from the problems of beings, or the priority of the sourceless and scopeless over the sourced and the scoped.

My point here is to simply provide a dramatic example of the way the transcendental structure revealed by the phenomenological attitude can be naturalistically turned inside out, how its most profound posits are more parsimoniously explained as artifacts of metacognitive neglect. Examples of how this approach can be extended in ways relevant to phenomenology can be found here, here, and here.

This is a blog post, so I can genuinely reach out. Everyone who practices phenomenology needs to consider the very live possibility that they’re actually trading in metacognitive illusions, that the first person they claim to be interpreting in the most fundamental terms possible is actually a figment of neglect. At the very least they need to recognize that the Abductive Argument is no longer open to them. They can no longer assume, the way Zahavi does, that the intersubjective features of their discourse evidence the reality of their transcendental posits exclusively. If anything, Blind Brain Theory offers a far better explanation for the discourse-organizing structure at issue, insofar as it lacks any supernatural posits, renders perspicuous a hitherto occult connection between brain and consciousness (as phenomenologically construed), and is empirically testable.

All of the phenomenological tradition is open to reinterpretation in its terms. I agree that this is disastrous… the very kind of disaster we should have expected science would deliver. Science is to be feared precisely because it monopolizes effective theoretical cognition, not because it seeks to, and philosophies so absurd as to play its ontological master manage only to anaesthetize themselves.

When asked what problems remain outstanding in his AVANT interview, Zahavi acknowledges that phenomenology, despite revealing the dialectical priority of the first person over the third person perspective on consciousness, has yet to elucidate the nature of the relationship between them. “What is still missing is a real theoretical integration of these different perspectives,” he admits. “Such integration is essential, if we are to do justice to the complexity of consciousness, but it is in no way obvious how natural science all by itself will be able to do so” (118). Blind Brain Theory possesses the conceptual resources required to achieve this integration. Via neglect and heuristics, it allows us to see the first-person in terms entirely continuous with the third, while allowing us to understand all the apories and conundrums that have prevented such integration until now. It provides the basis, in other words, for a wholesale naturalization of phenomenology.

Regardless, I think it’s safe to say that phenomenology is at a crossroads. The days when the traditional phenomenologist could go on the attack, actually force their interlocutors to revisit their assumptions, are quickly coming to a close. As the scientific picture of the human accumulates ever more detail—ever more data—the claim that these discoveries have no bearing whatsoever on phenomenological practice and doctrine becomes ever more difficult to credit. “Science is a specific theoretical stance towards the world,” Zahavi claims. “Science is performed by embodied and embedded subjects, and if we wish to comprehend the performance and limits of science, we have to investigate the forms of intentionality that are employed by cognizing subjects.”

Perhaps… But only if it turns out that ‘cognizing subjects’ possess the ‘intentionality’ phenomenology supposes. What if science is performed by natural beings who, quite naturally, cannot intuit themselves in natural terms? Phenomenology has no way of answering this question. So it waits the way all prescientific discourses have waited for the judgment of science on their respective domains. I have given but one possible example of a judgment that will inevitably come.

There will be others. My advice? Jump ship before the real neuroinformatic deluge comes. We live in a society morphing faster and more profoundly every year. There is much more pressing work to be done, especially when it comes to theorizing our everydayness in a more epistemically humble and empirically responsive manner. We lack names for what we are, in part because we have been wasting breath on terms that merely name our confusion.

 

*[Originally posted 2014/10/22]

Introspection Explained

by rsbakker

[Image: Las Meninas]

So I couldn’t get past the first paper in Thomas Metzinger’s excellent Open MIND offering without having to work up a long-winded blog post! Tim Bayne’s “Introspective Insecurity” offers a critique of Eric Schwitzgebel’s Perplexities of Consciousness, which is my runaway favourite book on introspection (and consciousness, for that matter). This alone might have sparked me to write a rebuttal, but what I find most extraordinary about the case Bayne lays out against introspective skepticism is the way it directly implicates Blind Brain Theory. His defence of introspective optimism, I want to show, actually vindicates an even more radical form of pessimism than the one he hopes to domesticate.

In the article, Bayne divides the philosophical field into two general camps, the introspective optimists, who think introspection provides reliable access to conscious experience, and introspective pessimists, who do not. Recent years have witnessed a sea change in philosophy of mind circles (one due in no small part to Schwitzgebel’s amiable assassination of assumptions). The case against introspective reliability has grown so prodigious that what Bayne now terms ‘optimism’–introspection as a possible source of metaphysically reliable information regarding the mental/phenomenal–would have been considered rank introspective pessimism not so long ago. The Cartesian presumption of ‘self-transparency’ (as Carruthers calls it in his excellent The Opacity of Mind) has died a sudden death at the hands of cognitive science.

Bayne identifies himself as one of these new optimists. What introspection needs, he claims, is a balanced account, one sensitive to the vulnerabilities of both positions. Where proponents of optimism have difficulty accounting for introspective error, proponents of pessimism have difficulty accounting for introspective success. Whatever it amounts to, introspection is characterized by perplexing failures and thoughtless successes. As he writes in his response piece, “The epistemology of introspection is that it is not flat but contains peaks of epistemic security alongside troughs of epistemic insecurity” (“Introspection and Intuition,” 1). Since any final theory of introspection will have to account for this mixed ‘epistemic profile,’ Bayne suggests that it provides a useful speculative constraint, a way to sort the metacognitive wheat from the chaff.

According to Bayne, introspective optimists motivate their faith in the deliverances of introspection on the basis of two different arguments: the Phenomenological Argument and the Conceptual Argument. He restricts his presentation of the phenomenological argument to a single quote from Brie Gertler’s “Renewed Acquaintance,” which he takes as representative of his own introspective sympathies. As Gertler writes of the experience of pinching oneself:

When I try this, I find it nearly impossible to doubt that my experience has a certain phenomenal quality—the phenomenal quality it epistemically seems to me to have, when I focus my attention on the experience. Since this is so difficult to doubt, my grasp of the phenomenal property seems not to derive from background assumptions that I could suspend: e.g., that the experience is caused by an act of pinching. It seems to derive entirely from the experience itself. If that is correct, my judgment registering the relevant aspect of how things epistemically seem to me (this phenomenal property is instantiated) is directly tied to the phenomenal reality that is its truthmaker. “Renewed Acquaintance,” Introspection and Consciousness, 111.

When attending to a given experience, it seems indubitable that the experience itself has distinctive qualities that allow us to categorize it in ways unique to first-person introspective, as opposed to third-person sensory, access. But if we agree that the phenomenal experience—as opposed to the object of experience—drives our understanding of that experience, then we agree that the phenomenal experience is what makes our introspective understanding true. “Introspection,” Bayne writes, “seems not merely to provide one with information about one’s experiences, it seems also to ‘say’ something about the quality of that information” (4). Introspection doesn’t just deliver information, it somehow represents these deliverances as true.

Of course, this doesn’t make them true: we need to trust introspection before we can trust our (introspective) feeling of introspective truth. Or do we? Bayne replies:

it seems to me not implausible to suppose that introspection could bear witness to its own epistemic credentials. After all, perceptual experience often contains clues about its epistemic status. Vision doesn’t just provide information about the objects and properties present in our immediate environment, it also contains information about the robustness of that information. Sometimes vision presents its take on the world as having only low-grade quality, as when objects are seen as blurry and indistinct or as surrounded by haze and fog. At other times visual experience represents itself as a highly trustworthy source of information about the world, such as when one takes oneself to have a clear and unobstructed view of the objects before one. In short, it seems not implausible to suppose that vision—and perceptual experience more generally—often contains clues about its own evidential value. As far as I can see there is no reason to dismiss the possibility that what holds of visual experience might also hold true of introspection: acts of introspection might contain within themselves information about the degree to which their content ought to be trusted. 5

Vision is replete with what might be called ‘information information,’ features that indicate the reliability of the information available. Darkness, for instance, is a great example, insofar as it provides visual information to the effect that visual information is missing. Our every glance is marbled with what might be called ‘more than meets the eye’ indicators. As we shall see, this analogy to vision will come back to haunt Bayne’s thesis. The thing to keep in mind is the fact that the cognition of missing information requires more information. For the nonce, however, his claim is modest enough to warrant acknowledging: as it stands, we cannot rule out the possibility that introspection, like exospection, reliably indicates its own reliability. As such, the door to introspective optimism remains open.

Here we see the ‘foot-in-the-door strategy’ that Bayne adopts throughout the article, where his intent isn’t so much to decisively warrant introspective optimism as it is to point out and elucidate the ways that introspective pessimism cannot decisively close the door on introspection.

The conceptual motivation for introspective optimism turns on the necessity of epistemic access implied in the very concept of ‘what is it likeness.’ The only way for something to be ‘like something’ is for it to be like something for somebody. “[I]f a phenomenal state is a state that there is something it is like to be in,” Bayne writes, “then the subject of that state must have epistemic access to its phenomenal character” (5). Introspection has to be doing some kind of cognitive work, otherwise “[a] state to which the subject had no epistemic access could not make a constitutive contribution to what it was like for that subject to be the subject that it was, and thus it could not qualify as a phenomenal state” (5-6).

The problem with this argument, of course, is that it says little about the epistemic access involved. Apart from some unspecified ability to access information, it really implies very little. Bayne convincingly argues that the capacity to cognize differences, make discriminations, follows from introspective access, even if the capacity to correctly categorize those discriminations does not. And in this respect, it places another foot in the introspective door.

Bayne then moves on to the case motivating pessimism, particularly as Eric presents it in his Perplexities of Consciousness. He mentions the privacy problems that plague scientific attempts to utilize introspective information (Irvine provides a thorough treatment of this in her Consciousness as a Scientific Concept), but since his goal is to secure introspective reliability for philosophical purposes, he bypasses these to consider three kinds of challenges posed by Schwitzgebel in Perplexities, the Dumbfounding, Dissociation, and Introspective Variation Arguments. Once again, he’s careful to state the balanced nature of his aim, the obvious fact that

any comprehensive account of the epistemic landscape of introspection must take both the hard and easy cases into consideration. Arguably, generalizing beyond the obviously easy and hard cases requires an account of what makes the hard cases hard and the easy cases easy. Only once we’ve made some progress with that question will we be in a position to make warranted claims about introspective access to consciousness in general. 8

His charge against Schwitzgebel, then, is that even conceding his examples of local introspective unreliability, we have no reason to generalize from these to the global unreliability of introspection as a philosophical tool. Since this inference from local unreliability to global unreliability is his primary discursive target, Bayne doesn’t so much need to problematize Schwitzgebel’s challenges as to reinterpret—‘quarantine’—their implications.

So in the case of ‘dumbfounding’ (or ‘uncertainty’) arguments, Schwitzgebel reveals the epistemic limitations of introspection via a barrage of what seem to be innocuous questions. Our apparent inability to answer these questions leaves us ‘dumbfounded,’ stranded on a cognitive limit we never knew existed. Bayne’s strategy, accordingly, is to blame the questions, to suggest that dumbfounding, rather than demonstrating any pervasive introspective unreliability, simply reveals that the questions being asked possess no determinate answers. He writes:

Without an account of why certain introspective questions leave us dumbfounded it is difficult to see why pessimism about a particular range of introspective questions should undermine the epistemic credentials of introspection more generally. So even if the threat posed by dumbfounding arguments were able to establish a form of local pessimism, that threat would appear to be easily quarantined. 11

Once again, local problems in introspection do not warrant global conclusions regarding introspective reliability.

Bayne takes a similar tack with Schwitzgebel’s dissociation arguments, examples where our naïve assumptions regarding introspective competence diverge from actual performance. He points out the ambiguity between the reliability of experience and the reliability of introspection: perhaps we’re accurately introspecting mistaken experiences. If there’s no way to distinguish between these, Bayne suggests, we’ve made room for introspective optimism. He writes: “If dissociations between a person’s introspective capacities and their first-order capacities can disconfirm their introspective judgments (as the dissociation argument assumes), then associations between a person’s introspective judgments and their first-order capacities ought to confirm them” (12). What makes Schwitzgebel’s examples so striking, he goes on to argue, is precisely the fact that introspective judgments are typically effective.

And when it comes to the introspective variation argument, the claim that the chronic underdetermination that characterizes introspective theoretical disputes attests to introspective incapacity, Bayne once again offers an epistemologically fractionate picture of introspection as a way of blocking any generalization from given instances of introspective failure. He thinks that such examples of introspective variation can be explained away, “[b]ut even if the argument from variation succeeds in establishing a local form of pessimism, it seems to me there is little reason to think that this pessimism generalizes” (14).

Ultimately, the entirety of his case hangs on the epistemologically fractionate nature of introspection. It’s worth noting at this point that, from a cognitive scientific point of view, the fractionate nature of introspection is all but guaranteed. Just think of the mad difference between Plato’s simple aviary, the famous metaphor he offers for memory in the Theaetetus, and the imposing complexity of memory as we understand it today. I raise this ‘mad difference’ for two reasons. First, it implies that any scientific understanding of introspection is bound to radically complicate our present understanding. Second, and even more importantly, it evidences the degree to which introspection is blind, not only to the fractionate complexity of memory, but to its own fractionate complexity as well.

For Bayne to suggest that introspection is fractionate, in other words, is for him to claim that introspection is almost entirely blind to its own nature (much as it is to the nature of memory). To the extent that Bayne has to argue the fractionate nature of introspection, we can conclude that introspection is not only blind to its own fractionate nature, it is also blind to the fact of this blindness. It is in this sense that we can assert that introspection neglects its own fractionate nature. The blindness of introspection to introspection is the implication that hangs over his entire case.

In the meantime, having posed an epistemologically plural account of introspection, he’s now on the hook to explain the details. “Why,” he now asks, “might certain types of phenomenal states be elusive in a way that other types of phenomenal states are not?” (15). Bayne does not pretend to possess any definitive answers, but he does hazard one possible wrinkle in the otherwise featureless face of introspection, the 2010 distinction that he and Maja Spener made in “Introspective Humility” between ‘scaffolded’ and ‘freestanding’ introspective judgments. He notes that those introspective judgments that seem to be the most reliable are those that seem to be ‘scaffolded’ by first-order experiences. These include the most anodyne metacognitive statements we make, where we reference our experiences of things to perspectivally situate them in the world, as in, ‘I see a tree over there.’ Those introspective judgments that seem the least reliable, on the other hand, have no such first-order scaffolding. Rather than piggy-back on first-order perceptual judgments, ‘freestanding’ judgments (the kind philosophers are fond of making) reference our experience of experiencing, as in, ‘My experience has a certain phenomenal quality.’

As that last example (cribbed from the Gertler quote above) makes plain, there’s a sense in which this distinction doesn’t do the philosophical introspective optimist any favours. (Max Engel exploits this consequence to great effect in his Open MIND reply to Bayne’s article, using it to extend pessimism into the intuition debate). But Bayne demurs, admitting that he lacks any substantive account. As it stands, he need only make the case that introspection is fractionate to convincingly block the ‘globalization’ of Schwitzgebel’s pessimism. As he writes:

perhaps the central lesson of this paper is that the epistemic landscape of introspection is far from flat but contains peaks of security alongside troughs of insecurity. Rather than asking whether or not introspective access to the phenomenal character of consciousness is trustworthy, we should perhaps focus on the task of identifying how secure our introspective access to various kinds of phenomenal states is, and why our access to some kinds of phenomenal states appears to be more secure than our access to other kinds of phenomenal states. 16

The general question of whether introspective cognition of conscious experience is possible is premature, he argues, so long as we have no clear idea of where and why introspection works and does not work.

This is where I most agree with Bayne—and where I’m most puzzled. Many things puzzle me about the analytic philosophy of mind, but nothing quite so much as the disinclination to ask what seem to me to be relatively obvious empirical questions.

In nature, accuracy and reliability are expensive achievements, not gifts from above. Short of magic, metacognition requires physical access and physical capacity. (Those who believe introspection is magic—and many do—need only be named magicians.) So when it comes to deliberative introspection, what kind of neurobiological access and capacity are we presuming? If everyone agrees that introspection, whatever it amounts to, requires that the brain do honest-to-goodness work, then we can begin advancing a number of empirical theses regarding access and capacity, and how we might find these expressed in experience.

So given what we presently know, what kind of metacognitive access and capacity should we expect our brains to possess? Should we, for instance, expect them to rival the resolution and behavioural integration of our environmental capacities? Clearly not. For one, environmental cognition coevolved with behaviour and so has the far greater evolutionary pedigree—by hundreds of millions of years, in fact! As it turns out, reproductive success requires that organisms solve their surroundings, not themselves. So long as environmental challenges are overcome, they can take themselves for granted, neglect their own structure and dynamics. Metacognition, in other words, is an evolutionary luxury. There’s no way of saying how long Homo sapiens has enjoyed the particular luxury of deliberative introspection (as an exaptation, the luxury of ‘philosophical reflection’ is no older than recorded history), but even if we grant our base capacity a million-year pedigree, we’re still talking about a very young, and very likely crude, system.

Another compelling reason to think metacognition cannot match the dimensionality of environmental cognition lies in the astronomical complexity of its target. As a matter of brute empirical fact, brains simply cannot track themselves the high-dimensional way they track their environments. Thus, once again, ‘Dehaene’s Law,’ the way “[w]e constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (Consciousness and the Brain, 79). The vast resources society is presently expending to cognize the brain attest to the degree to which our brain exceeds its own capacity to cognize itself in high-dimensional terms. However the brain cognizes its own operations, then, it can only do so in a radically low-dimensional way. We should expect, in other words, our brains to be relatively insensitive to their own operation—to be blind to themselves.

A third empirical reason to assume that metacognition falls short of environmental dimensionality is found in the way it belongs to the very system it tracks, and so lacks the functional independence as well as the passive and active information-seeking opportunities belonging to environmental cognition. The analogy I always like to use here is that of a primatologist sewn into a sack with a troop of chimpanzees versus one tracking them discreetly in the field. Metacognition, unlike environmental cognition, is structurally bound to its targets. It cannot move toward some puzzling item—an apple, say—peer at it, smell it, touch it, turn it over, crack it open, taste it, scrutinize its components. As embedded, metacognition is restricted to fixed channels of information that it could not possibly identify or source. The brain, you could say, is simply too close to itself to cognize itself as it is.

Viewed empirically, then, we should expect metacognitive access and capacity to be more specialized, more adventitious, and less flexible than those of environmental cognition. Given the youth of the system, the complexity of its target, and the proximity of its target, we should expect that human metacognition will consist of various kluges, crude heuristics that leverage specific information to solve some specific range of problems. As Gerd Gigerenzer and the Adaptive Behaviour and Cognition Research Group have established, simple heuristics are often far more effective than optimization methods at solving problems. “As the amount of data available to make predictions in an environment shrinks, the advantage of simple heuristics over complex algorithms grows” (Hertwig and Hoffrage, “The Research Agenda,” Simple Heuristics in a Social World, 23). With complicated problems yielding little data, adding parameters to a solution can compound the chances of making mistakes. Low dimensionality, in other words, need not be a bad thing, so long as the information consumed is information enabling the solution of some problem set. This is why evolution so regularly makes use of it.
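To see the trade-off at work, consider the following minimal sketch (my illustration, not Gigerenzer’s own simulations; the linear ‘environment’ and all the numbers are assumptions): with only a handful of noisy observations, a crude one-parameter heuristic typically out-predicts a flexible six-parameter model that fits the training data perfectly.

```python
# A minimal sketch, assuming a simple linear environment: with scarce,
# noisy data, a crude one-parameter heuristic often out-predicts a
# flexible many-parameter model that fits the noise along with the signal.
import numpy as np

rng = np.random.default_rng(0)

def environment(x):
    return 2.0 * x  # the regularity the organism must track

x_train = rng.uniform(0.2, 1.0, 6)                      # very little data
y_train = environment(x_train) + rng.normal(0, 0.5, 6)  # and noisy at that
x_test = np.linspace(0.2, 1.0, 200)
y_test = environment(x_test)

# Simple heuristic: estimate a single slope and neglect everything else.
slope = np.mean(y_train / x_train)
heuristic_mse = np.mean((slope * x_test - y_test) ** 2)

# 'Optimizing' alternative: a degree-5 polynomial that fits all six
# training points exactly, noise included.
coeffs = np.polyfit(x_train, y_train, 5)
complex_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

print(f"heuristic MSE:  {heuristic_mse:.3f}")   # typically the smaller
print(f"polynomial MSE: {complex_mse:.3f}")     # typically far larger
```

The point is Gigerenzer’s, not mine: fewer parameters means less noise mistaken for signal.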

Given this broad-stroke picture, human metacognition can be likened to a toolbox containing multiple, special-purpose tools, each possessing specific ‘problem-ecologies,’ narrow, but solvable domains that trigger their application frequently and decisively enough to have once assured the tool’s generational selection. The problem with heuristics, of course, lies in the narrowness of their respective domains. If we grant the brain any flexibility in the application of its metacognitive tools, then heuristic misapplication is always a possibility. If we deny the brain any decisive capacity to cognize these misapplications outside their consequences (if the brain suffers ‘tool agnosia’), then we can assume these misapplications will be indistinguishable from successful applications short of those consequences.

In other words, this picture of human metacognition (which is entirely consistent with contemporary research) provides an elegant (if sobering) recapitulation and explanation of what Bayne calls the ‘epistemic landscape of introspection.’ Metacognition is fractionate because of the heuristic specialization required to decant behaviourally relevant information from the brain. The ‘peaks of security’ correspond to the application of metacognitive heuristics to matching problem-ecologies, while the ‘troughs of insecurity’ correspond to the application of metacognitive heuristics to problem-ecologies they could never hope to solve.

Since those matching problem-ecologies are practical (as we might expect, given the cultural basis of regimented theoretical thinking), it makes sense that practical introspection is quite effective, whereas theoretical introspection, which attempts to intuit the general nature of experience, is anything but. The reason the latter strikes us as so convincing—to the point of seeming impossible to doubt, no less—is simply that doubt is expensive: there’s no reason to presume we should happily discover the required error-signalling machinery awaiting any exaptation of our deliberative introspective capacity, let alone one so unsuccessful as philosophy. As I mentioned above, the experience of epistemic insufficiency always requires more information. Sufficiency is the default simply because the system has no way of anticipating novel applications, no decisive way of suddenly flagging information that was entirely sufficient for ancestral problem-ecologies and so required no flagging.

Remember how Bayne offered what I termed ‘information information’ provided by vision as a possible analogue of introspection? Visual experience cues us to the unreliability or absence of information in a number of ways, such as darkness, blurring, faintness, and so on. Why shouldn’t we presume that deliberative introspection likewise flags what can and cannot be trusted? Because deliberative introspection exapts information sufficient for one kind of practical problem-solving (Did I leave my keys in the car? Am I being obnoxious? Did I read the test instructions carefully enough?) for the solution of utterly unprecedented ontological problems. Why should repurposing introspective deliverances in this way renovate the thoughtless assumption of ‘default sufficiency’ belonging to their original purposes?

This is the sense in which Blind Brain Theory, in the course of explaining the epistemic profile of introspection, also explodes Bayne’s case for introspective optimism. By tying the contemplative question of deliberative introspection to the empirical question of the brain’s metacognitive access and capacity, BBT makes plain the exorbitant biological cost of the optimistic case. Exhaustive, reliable intuition of anything involves a long evolutionary history, tractable targets, and flexible information access—that is, all the things that deliberative introspection does not possess.

Does this mean that deliberative introspection is a lost cause, something possessing no theoretical utility whatsoever? Not necessarily. Accidents happen. There’s always a chance that some instance of introspective deliberation could prove valuable in some way. But we should expect such solutions to be both adventitious and local, something that stubbornly resists systematic incorporation into any more global understanding.

But there’s another way, I think, in which deliberative introspection can play a genuine role in theoretical cognition—a way that involves looking at Schwitzgebel’s skeptical project as a constructive, rather than critical, theoretical exercise.

To show what I mean, it’s worth recapitulating one of the quotes Bayne selects from Perplexities of Consciousness for sustained attention:

How much of the scene are you able vividly to visualize at once? Can you keep the image of your chimney vividly in mind at the same time you vividly imagine (or “image”) your front door? Or does the image of your chimney fade as your attention shifts to the door? If there is a focal part of your image, how much detail does it have? How stable is it? Suppose that you are not able to image the entire front of your house with equal clarity at once, does your image gradually fade away towards the periphery, or does it do so abruptly? Is there any imagery at all outside the immediate region of focus? If the image fades gradually away toward the periphery, does one lose colours before shapes? Do the peripheral elements of the image have color at all before you think to assign color to them? Do any parts of the image? If some parts of the image have indeterminate colour before a colour is assigned, how is that indeterminacy experienced—as grey?—or is it not experienced at all? If images fade from the centre and it is not a matter of the color fading, what exactly are the half-faded images like? Perplexities, 36

Questions in general are powerful insofar as they allow us to cognize the yet-to-be-cognized. The slogan feels ancient to me now, but no less important: Questions are how we make ignorance visible, how we become conscious of cognitive incapacity. In effect, then, each and every question in this quote brings to light a specific inability to answer. Granting that this inability indicates either a lack of information access and/or metacognitive incapacity, we can presume these questions enumerate various cognitive dimensions missing from visual imagery. Each question functions as an interrogative ‘ping,’ you could say, showing us another direction that (for many people at least) introspective inquiry cannot go—another missing dimension.

So even though Bayne and Schwitzgebel draw negative conclusions from the ‘dumbfounding’ that generally accompanies these questions, each instance actually tells us something potentially important about the limits of our introspective capacities. If Schwitzgebel had been asking these questions of a painting—Las Meninas, say—then dumbfounding wouldn’t be a problem at all. The information available, given the cognitive capacity possessed, would make answering them relatively straightforward. But even though ‘visual imagery’ is apparently ‘visual’ in the same way a painting is, the selfsame questions stop us in our tracks. Each question, you could say, closes down a different ‘degree of cognitive freedom,’ reveals how few degrees of cognitive freedom human deliberative introspection possesses for the purposes of solving visual imagery. Not many at all, as it turns out.

Note this is precisely what we should expect on a ‘blind brain’ account. Once again, simply given the developmental and structural obstacles confronting metacognition, it almost certainly consists of an ‘adaptive toolbox’ (to use Gerd Gigerenzer’s phrase), a suite of heuristic devices adapted to solve a restricted set of problems given only low-dimensional information. The brain possesses a fixed set of metacognitive channels available for broadcast, but no real ‘channel channel,’ so that it systematically neglects metacognition’s own fractionate, heuristic structure.

And this clearly seems to be what Schwitzgebel’s interrogative barrage reveals: the low dimensionality of visual imagery (relative to vision), the specialized problem-solving nature of visual imagery, and our profound inability to simply intuit as much. For some mysterious reason we can ask visual questions that for some mysterious reason do not apply to visual imagery. The ability of language to retask cognitive resources for introspective purposes seems to catch the system as a whole by surprise, confronts us with what had been hitherto relegated to neglect. We find ourselves ‘dumbfounded.’

So long as we assume that cognition requires work, we must assume that metacognition trades in low dimensional information to solve specific kinds of problems. To the degree that introspection counts as metacognition, we should expect it to trade in low-dimensional information geared to solve particular kinds of practical problems. We should also expect it to be blind to introspection, to possess neither the access nor the capacity required to intuit its own structure. Short of interrogative exercises such as Schwitzgebel’s, deliberative introspection has no inkling of how many degrees of cognitive freedom it possesses in any given context. We have to figure out what information is for what inferentially.

And this provides the basis for a provocative diagnosis of a good many debates in contemporary psychology and philosophy of mind. So for instance, a blind brain account implies that our relation to something like ‘qualia’ is almost certainly one possessing relatively few degrees of cognitive freedom—a simple heuristic. Deliberative introspection neglects this, and at the same time, via questioning, allows other cognitive capacities to consume the low-dimensional information available. ‘Dumbfounding’ often follows—what the ancient Greeks liked to call thaumazein. The practically minded, sniffing a practical dead end, turn away, but the philosopher famously persists, mulling the questions, becoming accustomed to them, chasing this or that inkling, borrowing many others, all of which, given the absence of any real information information, cannot but suffer from some kind of ‘only game in town effect’ upon reflection. The dumbfounding boundary is trammelled to the point of imperceptibility, and neglect is confused with degrees of cognitive freedom that simply do not exist. We assume that a quale is something like an apple—we confuse a low-dimensional cognitive relationship with a high-dimensional one. What is obviously specialized, low-dimensional information becomes, for a good number of philosophers at least, a special ‘immediately self-evident’ order of reality.

Is this Adamic story really that implausible? After all, something has to explain our perpetual inability to even formulate the problem of our nature, let alone solve it. Blind Brain Theory, I would argue, offers a parsimonious and comprehensive way to extricate ourselves from the traditional mire. Not only does it explain Bayne’s ‘epistemic profile of introspection,’ it explains why this profile took so long to uncover. By reinterpreting the significance of Schwitzgebel’s ‘dumbfounding’ methods, it raises the possibility of ‘Interrogative Introspection’ as a scientific tool. And lastly, it suggests the problems that neglect foists on introspection can be generalized, that much of our inability to cognize ourselves turns on the cognitive short cuts evolution had to use to assure we could cognize ourselves at all.

BBT Creep…

by rsbakker

“Given the inability of SDT-based models to account for blind insight, our data suggest that a more radical revision of metacognition models is required. One potential direction for revision would take into account the evidence, mentioned in the Introduction, that neural dynamics underlying perceptual decisions involve counterflowing bottom-up and top-down neural signals (Bowman et al., 2006; Jaskowski & Verleger, 2007; Salin & Bullier, 1995). A framework for interpreting these countercurrent dynamics is provided by predictive processing, which proposes that top-down projections convey predictions (expectations) about the causes of sensory signals, with bottom-up projections communicating mismatches (prediction errors) between expected and observed signals across hierarchical levels, with their mutual dynamics unfolding according to the principles of Bayesian inference (Clark, 2013). Future models of metacognition could leverage this framework to propose that both first-order and metacognitive discriminations emerge from the interaction of top-down expectations and bottom-up prediction errors, for example by allowing top-down signals to reshape the probability distributions of evidence on which decision thresholds are imposed (Barrett et al., 2013). We can at this stage only speculate as to whether such a model might provide the means to account for the blind-insight phenomenon and recognize that predictive coding is just one among a variety of potential frameworks that could be applied to that challenge (Timmermans et al., 2012).” Ryan B. Scott et al., “Blind Insight: Metacognitive Discrimination Despite Chance Task Performance,” 8
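For readers unfamiliar with the framework the quote invokes, here is a toy gloss of a single predictive-processing loop (my sketch, not Scott and colleagues’ model; the numbers are arbitrary): a top-down prediction is revised, step by step, by the bottom-up error it fails to anticipate.

```python
# A toy gloss of the predictive-processing loop (my sketch, with arbitrary
# numbers): top-down predictions are revised by bottom-up prediction errors
# until the mismatch is 'explained away.'
sensory_signal = 10.0   # what the world actually delivers
prediction = 2.0        # the initial top-down expectation
error_weight = 0.2      # how strongly errors reshape the model

for _ in range(25):
    prediction_error = sensory_signal - prediction  # bottom-up mismatch
    prediction += error_weight * prediction_error   # top-down revision

print(round(prediction, 2))  # ~10.0: prediction converges on the signal
```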

Just thinking in these terms renders traditional assumptions regarding the character and capacity of philosophical reflection deeply suspect. Is it really just a coincidence that all the old riddles regarding the human remain just as confounding? You need only consider the challenge the brain poses to itself to realize the brain simply cannot track its own activities the way it tracks activities in its environments. The traditionalists would have you believe that reflection reveals an alternate order of efficacy, if not being. So far, the apparent obviousness of the intuitions and the absence of any credible account of the work they seem to do have allowed them to make an abductive case. Reflection, they argue, discriminates autonomous/irreducible/transcendental functions and/or phenomena. Of course, they don’t so much agree on the actual discriminations they make as they agree that such discriminations can and must be made.

My bet is that the brain does a lot of causal (Bayesian) predictive processing troubleshooting its environments and relies on some kind of noncausal predictive processing to troubleshoot itself and other brains. You only need to look at the dimensions missing in the ‘mental’ or the ‘normative’ or the ‘phenomenological’ to realize they’re precisely the kinds of information we should expect an overmatched metacognition to neglect. Where the brain is able to articulate efficacies into mechanistic (lateral) relationships in certain, typically natural environments, it must posit unarticulated efficacies in other, typically social environments. My hypothesis is that the countless naturalistically inscrutable, ontologically exceptional, alternate orders of efficacy posited by the traditionalist amount to nothing more than this.

Either way, this research is killing traditional philosophy as we speak.

The Crux

by rsbakker

Aphorism of the Day: Give me an eye blind enough, and I will transform guttering candles into exploding stars.

.

The Blind Brain Theory turns on the following four basic claims:

1) Cognition is heuristic all the way down.

2) Metacognition is continuous with cognition.

3) Metacognitive intuitions are the artifact of severe informatic and heuristic constraints. Metacognitive accuracy is impossible.

4) Metacognitive intuitions only loosely constrain neural fact. There are far more ways for neural facts to contradict our metacognitive intuitions than otherwise.

A good friend of mine, Dan Mellamphy, has agreed to go through a number of the posts from the past eighteen months with an eye to pulling them together into a book of some kind. I’m actually thinking of calling it Through the Brain Darkly: because of Neuropath, because the blog is called Three Pound Brain, and because of my apparent inability to abandon the tedious metaphorics of neural blindness. Either way, I thought boiling BBT down to its central commitments would be a worthwhile exercise. Like a picture taken on a rare, good hair day…

.

1) Cognition is heuristic all the way down.

I take this claim to be trivial. Heuristics are problem-solving mechanisms that minimize computational costs via the neglect of extraneous or inaccessible information. The human brain is itself a compound heuristic device, one possessing a plurality of cognitive tools (innate and learned component heuristics) adapted to a broad but finite range of environmental problems. The human brain, therefore, possesses a ‘compound problem ecology’ consisting of the range of those problems primarily responsible for driving its evolution, whatever they may be. Component heuristics likewise possess problem ecologies, or ‘scopes of application.’

.

2) Metacognition is continuous with cognition.

I also take this claim to be trivial. The most pervasive problem (or reproductive obstacle) faced by the human brain is the inverse problem. Inverse problems involve deriving effective information (i.e., mass and trajectory) from some unknown, distal phenomenon (i.e., a falling tree) via proximal information (i.e., retinal stimuli) possessing systematic causal relations (i.e., reflected light) to that phenomenon. Hearing, for instance, requires deriving distal causal structures, an approaching car, say, on the basis of proximal effects, the cochlear signals triggered by the sound emitted from the car. Numerous detection technologies (sonar, radar, fMRI, and so on) operate on this very principle, determining the properties of unknown objects from the properties of some signal connected to them.
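A minimal sketch may make the structure of an inverse problem concrete (entirely my illustration; the inverse-square setup and all the numbers are assumptions): given only a noisy proximal signal, the system must weigh hypotheses about the distal cause by how well each would predict that signal.

```python
# A minimal sketch of solving an inverse problem by Bayesian inference
# (my illustration, assuming an inverse-square signal law): infer the
# distance of a distal source from a single noisy proximal intensity.
import numpy as np

rng = np.random.default_rng(1)
true_distance = 3.0
noise_sd = 0.005
proximal = 1.0 / true_distance**2 + rng.normal(0.0, noise_sd)

# Hypotheses about the distal cause, scored by how well each predicts
# the proximal signal (flat prior, Gaussian likelihood).
candidates = np.linspace(0.5, 10.0, 2000)
predicted = 1.0 / candidates**2
likelihood = np.exp(-0.5 * ((proximal - predicted) / noise_sd) ** 2)
posterior = likelihood / likelihood.sum()

print(candidates[np.argmax(posterior)])  # close to 3.0, the distal truth
```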

The brain can mechanically engage its environment because it is mechanically embedded in its environment–because it is, quite literally, just more environment. The brain is that part of the environment that models/exploits the rest of the environment. Thus the crucial distinction between those medial environmental components involved in modelling/enacting (sensory media, neural mechanisms) and those lateral environmental components modelled. And thus, medial neglect, the general blindness of the human brain to its own structure and function, and its corollary, lateral sensitivity, the general responsiveness of the brain to the structure and function of its external environments–or in other words, the primary problem ecology of the heuristic brain.

Medial neglect and lateral sensitivity speak to a profound connection between ignorance and knowledge, how sensitivity to distal, lateral complexities necessitates insensitivity to proximal, medial complexities. Modelling environments necessarily exacts what might be called an ‘autoepistemic toll’ on the systems responsible. The greater the lateral fidelity, the more sophisticated the mechanisms, the greater the surplus of ‘blind,’ or medial, complexity. The brain, you could say, is an organ that transforms ‘risky complexity’ into ‘safe complexity,’ that solves distal unknowns that kill by accumulating proximal unknowns (neural mechanisms) that must be fed.

The parsing of the environment into medial and lateral components represents more a twist than a scission: the environment remains one environment. Information pertaining to brain function is environmental information, which is to say, information pertinent to the solution of potential environmental problems. Thus metacognition, heuristics that access information pertaining to the brain’s own operations.

Since metacognition is continuous with cognition, another part of the environment engaged in problem solving the environment, it amounts to the adaptation of neural mechanisms sensitive in effective ways to other neural mechanisms in the brain. The brain, in other words, poses an inverse problem for itself.

.

3) Metacognitive intuitions are the artifact of severe informatic and heuristic constraints. Metacognitive accuracy is impossible.

This claim, which is far more controversial than those above, directly follows from the continuity of metacognition and cognition–from the fact that the brain itself constitutes an inverse problem. This is because, as an inverse problem, the brain is quite clearly insoluble. Two considerations in particular bear this out:

1) Target complexity: The human brain is the most complicated mechanism known. Even as an external environmental problem, it has taken science centuries to accumulate the techniques, information, and technology required to merely begin the process of providing any comprehensive mechanistic explanation.

2) Target complicity: The continuity of metacognition and cognition allows us to see that the structural entanglement of metacognitive neural mechanisms with the neural mechanisms tracked, far from providing any cognitive advantage, thoroughly complicates the ability of the former to derive high-dimensional information from the latter. One might analogize the dilemma in terms of two biologists studying bonobos, the one by observing them in their natural habitat, the other by being sewn into a burlap sack with one. Relational distance and variability provide the biologist-in-the-habitat quantities and kinds (dimensions) of information simply not available to the biologist-in-the-sack. Perhaps more importantly, they allow the former to cognize the bonobos without the complication of observer effects. Neural mechanisms sensitive to other neural mechanisms access information via dedicated, as opposed to variable, channels, and as such are entirely ‘captive’: they cannot pursue the kinds of active environmental engagement that permit the kind of high-dimensional tracking/modelling characteristic of cognition proper.

Target complexity and complicity mean that metacognition is almost certainly restricted to partial, low-dimensional information. There is quite literally no way for the brain to cognize itself as a brain–which is to say, accurately. Thus the mind-body problem. And thus a good number of the perennial problems that have plagued philosophy of mind and philosophy more generally (which can be parsimoniously explained away as different consequences of informatic privation). Heuristic problem-solving does not require the high-dimensional fidelity that characterizes our sensory experience of the world, as simpler life forms show. The metacognitive capacities of the human brain turn on effective information, scraps gleaned via adventitious mutations that historically provided some indeterminate reproductive advantage in some indeterminate context. It mistakes these scraps for wholes–suffers the cognitive illusion of sufficiency–simply because it has no way of cognizing its informatic straits as such. Because of this, it perpetually mistakes what could be peripheral fragments in neurofunctional terms for the entirety and the crux.

.

4) Metacognitive intuitions only loosely constrain neural fact. There are far more ways for neural facts to contradict our metacognitive intuitions than otherwise.

Given the above, the degree to which the mind is dissimilar to the brain is the degree to which deliberative metacognition is simply mistaken. The futility of philosophy is no accident on this account. When we ‘reflect upon’ conscious cognition or experience, we are accessing low-dimensional information adapted to metacognitive heuristics adapted to narrow problem ecologies faced by our preliterate–prephilosophical–ancestors. Thanks to medial neglect, we are utterly blind to the actual neurofunctional context of the information expressed in experience. Likewise, we have no intuitive inkling of the metacognitive apparatuses at work, no idea whether they are many as opposed to one, let alone whether they are at all applicable to the problem they have been tasked to solve. Unless, that is, the task requires accuracy–getting some theoretical metacognitive account of mind or meaning or morality or phenomenology right–in which case we have good grounds (all our manifest intuitions to the contrary) to assume that such theoretical problem ecologies are hopelessly out of reach.

Experience, the very sum of significance, is a kind of cartoon that we are. Metacognition assumes the mythical accuracy (as opposed to the situation-specific efficacy) of the cartoon simply because that cartoon is all there is, all there ever has been. It assumes sufficiency because, in other words, cognizing its myriad limits and insufficiencies requires access to information that simply does not exist for metacognition.

The metacognitive illusion of sufficiency means that the dissociation between our metacognitive intuition of function and actual neural function can be near complete, that memory need not be veridical, the feeling of willing need not be efficacious, self-identity need not be a ‘condition of possibility,’ and so on, and so on. It means, in other words, that what we call ‘experience’ can be subreptive through and through, and still seem the very foundation of the possibility of knowledge.

It means that, all things being equal, the thoroughgoing neuroscientific overthrow of our manifest self-understanding is far, far more likely than even its marginal confirmation.

The Introspective Peepshow: Consciousness and the ‘Dreaded Unknown Unknowns’

by rsbakker

On February 12th, 2002, Secretary of Defense Donald Rumsfeld was famously asked in a DoD press conference about the American government’s failure to provide evidence regarding Iraq’s alleged provision of weapons of mass destruction to terrorist groups. His reply, which was lampooned in the media at the time, has since become something of a linguistic icon:

[T]here are known knowns; there are things we know that we know. There are known unknowns; that is to say there are things that we know we don’t know. But there are also unknown unknowns; there are things we don’t know we don’t know.

In 2003, this comment earned Rumsfeld the ‘Foot in Mouth Award’ from the British-based Plain English Campaign. Despite the scorn and hilarity it occasioned in mainstream culture at the time, the concept of unknown unknowns, or ‘unk-unk’ as it is sometimes called, has enjoyed long-standing currency in military and engineering circles. Only recently has it found its way to business and economics (in large part due to the work of Daniel Kahneman), where it is often referred to as the ‘dreaded unknown unknown.’ For enterprises involving risk, the reason for this dread is quite clear. Even in daily life, we speak of being ‘blind-sided,’ of things happening ‘out of the blue’ or coming ‘out of left field.’ Our institutions, like our brains, have evolved to manage and exploit environmental regularities. Since knowing everything is impossible, we have at our disposal any number of rehearsed responses, precooked ways to deal with ‘known unknowns,’ or irregularities that are regular enough to be anticipated. Unknown unknowns refer to those events that find us entirely unprepared–often with catastrophic consequences.

Given that few human activities are quite so sedate or ‘risk free,’ unk-unk might seem out of place in the context of consciousness research and the philosophy of mind. But as I hope to show, such is not the case. The unknown unknown, I want to argue, has a profound role to play in developing our understanding of consciousness. Unfortunately, since the unknown unknown itself constitutes an unknown unknown within cognitive science, let alone consciousness research, the route required to make my case is necessarily circuitous. As John Dewey (1958) observed, “We cannot lay hold of the new, we cannot even keep it before our minds, much less understand it, save by the use of ideas and knowledge we already possess” (viii-ix).

Blind-siding readers rarely pays. With this in mind, I begin with a critical consideration of Peter Carruthers’ (forthcoming, 2011, 2009a, 2009b, 2008) ‘innate self-transparency thesis,’ the account of introspection entailed by his more encompassing ‘mindreading first thesis’ (or as he calls it in The Opacity of the Mind (2011), Interpretative Sensory-Access Theory (ISA)). I hope to accomplish two things with this reading: 1) illustrate the way explanations in the cognitive sciences so often turn on issues of informatic tracking; and 2) elaborate an alternative to Carruthers’ innate self-transparency thesis that makes, in a preliminary fashion at least, the positive role played by the unknown unknown clear.

Since what I propose subsequent to this first leg of the article can only sound preposterous short of this preliminary, I will commit the essayistic sin (and rhetorical virtue) of leaving my final conclusions unstated–as a known unknown, worth mere curiosity, perhaps, but certainly not dread.

.

Follow the Information

Explanations in cognitive science generally adhere to the explanatory paradigm found in the life sciences: various operations are ‘identified’ and a variety of mechanisms, understood as systems of components or ‘working parts,’ are posited to discharge them (Bechtel and Abrahamsen 2005, Bechtel 2008). In cognitive science in particular, the operations tend to be various cognitive capacities or conscious phenomena, and the components tend to be representations embedded in computational procedures that produce more representations. Theorists continually tear down and rebuild what are in effect virtual ‘explanatory machines,’ using research drawn from as many related fields as possible to warrant their formulations. Whether the operational outputs are behavioural, epistemic, or phenomenal, these virtual machines inevitably involve asking what information is available for what component system or process.

Let’s call this process of information tracking the ‘Follow the Information Game’ (FIG).

In a superficial sense, playing FIG is not all that different from playing detective. In the case of criminal investigations, evidence is assembled and assessed, possible motives are considered, various parties to the crime are identified, and an overarching narrative account of who did what to whom is devised and, ideally, tested. In the case of cognitive investigations, evidence is likewise assembled and assessed, possible evolutionary ‘motives’ are considered, a number of contributing component mechanisms are posited, and an overarching mechanistic account of what does what for what is devised for possible experimental testing. The ‘doing’ invariably involves discharging some computational function, processing and disseminating information for subsequent computation. The theorist quite literally ‘follows the information’ from mechanism to mechanism, using a complex stew of evolutionary rationales, experimental results, and neuropathological case studies to warrant the various specifics of the resulting theoretical account.

We see this quite clearly in the mindreading versus metacognition debate, where the driving question is one of how we attribute propositional attitudes to ourselves as opposed to others. Do we have direct ‘metacognitive’ access to our beliefs and desires? Is mindreading a function of metacognition? Is metacognition a function of mindreading? Or are they simply different channels of a singular mechanism? Any answer to these questions requires mapping the flow of information, which is to say, playing FIG. This is why, for example, Peter Carruthers’ “How we know our own minds” and the following Open Peer Commentary read like transcripts of the diplomatic feuding behind the Treaty of Versailles. It’s an issue of mapping, but instead of arguing coal mines in Silesia and ports on the Baltic, the question is one of how the brain’s informatic spoils are divided.

Carruthers holds forth a ‘mindreading first’ account, arguing that our self-attributions of PAs rely on the same interpretative mechanisms we use to ‘mind read’ the PAs of others:

There is just a single metarepresentational faculty, which probably evolved in the first instance for purposes of mindreading… In order to do its work, it needs to have access to perceptions of the environment. For if it is to interpret the actions of others, it plainly requires access to perceptual representations of those actions. Indeed, I suggest that, like most other conceptual systems, the mindreading system can receive as input any sensory or quasi-sensory (eg., imagistic or somatosensory) state that gets “globally broadcast” to all judgment-forming, memory-forming, desire-forming, and decision-making systems. (2009b, 3-4)

In this article, he provides a preliminary draft of the informatic map he subsequently fleshes out in The Opacity of the Mind. He takes Baars’ (1988) Global Workspace Theory of Consciousness as a primary assumption, which requires him to distinguish between information that is and is not ‘globally broadcast.’ Consistent with the massive modularity endorsed in The Architecture of the Mind (2006), he posits a variety of informatically ‘encapsulated’ mechanisms operating ‘subpersonally’ or outside conscious access. The ‘mindreading system,’ not surprisingly, is accorded the most attention. Other mechanisms, when not directly recruited from preexisting cognitive scientific sources, are posited to explain various folk-psychological categories, such as belief. The tenability of these mechanisms turns on what might be called the ‘Accomplishment Assumption,’ the notion that all aspects of mental life that can be (or as in the case of folk psychology, already are) individuated are the accomplishments of various discrete neural mechanisms.

Given these mechanisms, Carruthers makes a number of ‘access inferences,’ each turning on the kinds of information required for each mechanism to discharge its function. To interpret the actions of others, the mindreading system needs access to information regarding those actions, which means it needs access to those systems dedicated to gathering that information. Given the apparently radical difference between self and other interpretation, Carruthers needs to delineate the kind of access characteristic of each:

Although the mindreading system has access to perceptual states, the proposal is that it lacks any access to the outputs of the belief-forming and decision-making mechanisms that feed off those states. Hence, self-attributions of propositional attitude events like judging and deciding are always the result of a swift (and unconscious) process of self-interpretation. However, it isn’t just the subject’s overt behavior and physical circumstances that provide the basis for the interpretation. Data about perceptions, visual and auditory imagery (including sentences rehearsed in “inner speech”), patterns of attention, and emotional feelings can all be grist for the self-interpretative view. (2009b, 4)

So the brain does possess belief mechanisms and the like, but they are informatically segregated from the suite of mechanisms responsible for generating the self-attribution of PAs. The former, it seems, do not ‘globally broadcast,’ and so their machinations must be gleaned the same way our brains glean the machinations of other brains, via their interpretative mindreading systems. Since, however, the mindreading system has no access to any information globally broadcast by other brains, he has to concede that the mindreading system is privy to additional information in instances of self-attribution, just not any involving direct access to the mechanisms responsible for PAs. So he lists what he presumes is available.

The problem, of course, is that it just doesn’t feel that way. Assumptions of unmediated access or self-transparency, Carruthers writes, “seem to be almost universal across times and cultures” (2011, 15), not to mention “widespread in philosophy.” If we are forced to rely on our environmentally-oriented mindreading systems to interpret, as opposed to intuit, the function of our own brains, then why should we have any notion of introspective access to our PAs, let alone the presumption of unmediated access? Why presume an incorrigible introspective access that we simply do not have?

Carruthers offers what might be called a ‘less is more account.’ The mindreading system, he proposes, represents its self-application as direct rather than interpretative. Our sense of self-transparency is the product of a mechanism. Once we have a mechanism, however, we require some kind of evolutionary story warranting its development. Carruthers argues that the presumption of incorrigible introspective access spares the brain a complicated series of computations pertaining to reliability without any real gain in reliability. “The transparency of our minds to ourselves,” he explains in an interview, “is a simplifying but false heuristic…” Citing Gigerenzer and Todd (1999), he points out that heuristics, even deceptive ones, regularly out-perform more fine-grained computational processes simply because of the relation between complexity and error. So long as self-interpretation via the mindreading system is generally reliable, this ‘Cartesian assumption’ or ‘self-transparency thesis’ (Carruthers 2008) possesses the advantage of simplicity to the extent that it relieves the need for computational estimations of interpretative reliability. The functional adequacy of a direct access model, in other words, more than compensates for its epistemic inadequacy, once one considers the metabolic cost and ‘robustness,’ as they say in ecological rationality circles, of the former versus the latter.

This explanation provides us with a clear-cut example of what I called the Accomplishment Assumption above. Given that ‘direct introspective access’ seems to be a discrete feature of mental life, it seems plausible to suppose that some discrete neural mechanism must be responsible for producing it. But there is a simpler explanation, one that draws out some of the problematic consequences of the ‘Follow the Information Game’ as it is presently played in cognitive science. A clue to this explanation can be found when Eric Schwitzgebel (2011) considers the selfsame problem:

Why, then, do people tend to be so confident in their introspective judgments, especially when queried in a casual and trusting way? Here is my guess: Because no one ever scolds us for getting it wrong about our experience and we never see decisive evidence of our error, we become cavalier. This lack of corrective feedback encourages a hypertrophy of confidence. [emphasis added] 130

Given his skepticism of ‘boxological’ mechanistic explanation (2011, 2012), Schwitzgebel can circumvent Carruthers’ dilemma (the mindreading system represents agent access either as direct or as interpretative) and simply pose the question in a far less structured way. Why do we possess unwarranted confidence in our introspective judgements? Well, no one tells us otherwise. But this simply begs the question of why. Why should we require ‘social scolding’ to ‘see decisive evidence of our error’? Why can’t we just see it on our own?

The easy answer is that, short of different perspectives, the requisite information is simply not available to us. The problem, in Schwitzgebel’s characterization, is that we have only a single perspective on our conscious experience, one lacking access to information regarding the limitations of introspection. In other words, the near universal presumption of self-transparency is an artifact of the near universal lack of any information otherwise. On this account, you could say the traditional, prescientific assumption of self-transparency is not so different from the traditional, prescientific assumption of geocentrism. We experience ‘vection,’ a sense of bodily displacement, whenever a large portion of our visual field moves. Short of that perceived motion (or other vestibular effects), a sense of motionlessness is the cognitive default. This was why the accumulation of so much (otherwise inaccessible) scientific knowledge was required to overturn geocentrism: not because we possessed an ‘innate representation’ of a motionless earth, but because of the interplay between our sensory limitations and our evolved capacity to detect motion.

The self-transparency assumption, on this account, is simply a kind of ‘noocentrism,’ the result of a certain limiting relationship between the information available and the cognitive systems utilized. The problem with geocentrism was that we were all earthbound, literally limited to what meagre extraterrestrial information our native senses could provide. That information, given our cognitive capacities, made geocentrism intuitively obvious. Thus the revolutionary significance of Galileo and his Dutch Spyglass. The problem with noocentrism, on the other hand, is that we are all brainbound, literally limited to what neural information our introspective ‘sense’ can provide. As it turns out that information, given our cognitive capacities, makes noocentrism intuitively obvious. Why? Because short of any Neural Spyglass, we lack any information regarding the insufficiency of the information at our disposal. We assume self-transparency because there is literally no other assumption to make.

One need only follow the information. Adopting a dual process perspective (Stanovich, 1999; Stanovich and Toplak, 2011), the globally broadcast information accessed for System 2 deliberation contains no information regarding its interpretative (and thus limited) status. Given that global broadcasting or integration operates within fixed bounds, System 2 has no way of testing, let alone sourcing, the information it provides. Thus, one cannot know whether the information available for introspection is insufficient in this or that respect. But since the information accessed is never flagged for insufficiencies (and why should it be, when it is generally reliable?), this suggests sufficiency will always be the assumptive default.
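The structural point can be put in computational terms with a toy sketch (entirely my illustration, not anything in Stanovich or Carruthers): a consumer process that receives content without any accompanying reliability metadata literally cannot distinguish degraded input from good input, so treating the input as sufficient is the only behaviour available to it.

```python
# A toy of 'default sufficiency' (my illustration): the broadcast channel
# carries content but no reliability metadata, so the consuming system
# cannot distinguish interpreted, degraded information from direct access.
def global_broadcast(report):
    return report["content"]  # fidelity is never part of the broadcast

direct = {"content": "I decided to leave", "fidelity": "high"}
interpreted = {"content": "I decided to leave", "fidelity": "low"}

# 'System 2' receives identical inputs either way; sufficiency is the
# only default available to it.
print(global_broadcast(direct) == global_broadcast(interpreted))  # True
```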

Given that Carruthers’ innate self-transparency account is one that he has developed with great care and ingenuity over the course of several years, a full rebuttal of the position would require an article in its own right. It’s worth noting, however, that many of the advantages that he attributes to his self-transparency mechanism also fall out of the default self-transparency account proposed here, with the added advantage of exacting no metabolic or computational cost whatsoever. You could say it’s a ‘more for even less’ account.

But despite its parsimony, there’s something decidedly strange about the notion of default self-transparency. Carruthers himself briefly entertains the possibility in The Opacity of the Mind, stating that “[a] universal or near-universal commitment to transparency may then result from nothing more than the basic principle or ‘law’ that when something appears to be the case one is disposed to form the belief that it is the case, in the absence of countervailing considerations or contrary evidence” (15). How might this ‘basic principle or law’ be characterized? Carruthers, I think, shies from pursuing this line of questioning simply because it presses FIG into hitherto unexplored territory.

Parsimony alone motivates a sustained consideration of what lies behind default self-transparency. Emily Pronin (2009), for instance, in her consideration of the ‘introspection illusion,’ draws an important connection between the assumption of self-transparency and the so-called ‘bias blind spot,’ the fact that biases we find obvious in others are almost entirely invisible to ourselves. She details a number of studies where subjects were even more prone to exhibit this ‘blindness’ when provided opportunities to introspect. Now why are these biases invisible to us? Should we assume, as Carruthers does in the case of self-transparency, that some mechanism or mechanisms are required to represent our intuitions as unbiased in each case? Or should we exercise thrift and suppose that something structural is implicit in each?

In what follows, I propose to pursue the latter possibility, to argue that what I called ‘default sufficiency’ above is an inevitable consequence of mechanistic explanation, or FIG, once we appreciate the systematic role informatic neglect plays in human cognition.

.

The Invisibility of Ignorance

Which brings us to Daniel Kahneman. In a New York Times (2011, October 19) piece entitled “Don’t Blink! The Hazards of Confidence,” he writes of his time in the Psychology Branch of the Israeli Army, where he was tasked with evaluating candidates for officer training by observing them in a variety of tests designed to isolate soldiers’ leadership skills. His evaluations, as it turned out, were almost entirely useless. But what surprised him was the way knowing this seemed to have little or no impact on the confidence with which he and his fellows submitted their subsequent evaluations, time and again. He was so struck by the phenomenon that he would go on to study it as the ‘illusion of validity,’ a specific instance of the general role the availability of information seems to play in human cognition–or as he later terms it, What-You-See-Is-All-There-Is, or WYSIATI.

The idea, quite simply, is that because you don’t know what you don’t know, you tend, in many contexts, to think you know all that you need to know. As he puts it in Thinking, Fast and Slow:

An essential design feature of the associative machine is that it represents only activated ideas. Information that is not retrieved (even unconsciously) from memory might as well not exist. [Our automatic cognitive system] excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have. (2011, 85)

As Kahneman shows, this leads to myriad errors in reasoning, including our peculiar tendency in certain contexts to be more certain about our interpretations the less information we have available. The idea is so simple as to be platitudinal: only the information available for cognition can be cognized. Other information, as Kahneman says, “might as well not exist” for the systems involved. Human cognition, it seems, abhors a vacuum.

The problem with platitudes, however, is that they are all too often overlooked, even when, as I shall argue in this case, their consequences are spectacularly profound. In the case of informatic availability, one need only look to clinical cases of anosognosia to see the impact of what might be called domain specific informatic neglect, the neuropathological loss of specific forms of information. Given a certain, complex pattern of neural damage, many patients suffering deficits as profound as lateralized paralysis, deafness, even complete blindness, appear to be entirely unaware of the deficit. Perhaps because of the informatic bandwidth of vision, visual anosognosia, or ‘Anton’s Syndrome,’ is generally regarded as the most dramatic instance of the malady. Prigatano (2010) enumerates the essential features of the syndrome as follows:

First, the patient is completely blind secondary to cortical damage in the occipital regions of the brain. Second, these lesions are bilateral. Third, the patient is not only unaware of her blindness; she rejects any objective evidence of her blindness. Fourth, the patient offers plausible, but at times confabulatory responses to explain away any possible evidence of her failure to see (e.g., “The room is dark,” or “I don’t have my glasses, therefore how can I see?”). Fifth, the patient has an apparent lack of concern (or anosodiaphoria) over her neurological condition. (456)

These symptoms are almost tailor-made for FIG. Obviously, the blindness stems from the occlusion of raw visual information. The second-order ‘blindness,’ the patient’s inability to ‘see’ that they cannot see, turns, one might suppose, on the unavailability of information regarding the unavailability of visual information. At some crucial juncture, the information required to process the lack of visual information has gone missing. As Kahneman might say, since System 1 is dedicated to the construction of ‘the best possible story’ given only the information it has, the patient confabulates, utterly convinced they can see even though they are quite blind.

Anton’s Syndrome, in other words, can be seen as a neuropathological instance of WYSIATI. And WYSIATI, conversely, can be seen as a non-neuropathological version of anosognosia. And both, I want to argue, are analogous to the default self-transparency thesis I offered in lieu of Carruthers’ innate self-transparency thesis above. Consider the following ‘translation’ of Prigatano’s symptoms, only applied to what might be called ‘Carruthers’ Syndrome’:

First, the philosopher is introspectively blind to his PAs secondary to various developmental and structural constraints. Second, the philosopher is not aware of his introspective blindness, and is prone to reject objective evidence of it. Third, the philosopher offers plausible, but at times confabulatory responses to explain away evidence of his inability to introspectively access his PAs. And fourth, the philosopher often exhibits an apparent lack of concern for his less than ideal neurological constitution.

Here we see how the default self-transparency thesis I offered above is capable of filling the explanatory shoes of Carruthers’ innate self-transparency thesis: it simply falls out of the structure of cognition. In FIG terms, what philosophers call ‘introspection’ possibly provides some combination of impoverished information, skewed information, or (what amounts to the same) information matched to cognitive systems other than those employed in deliberative cognition, without–and here’s the crucial twist–providing information to this effect. Our sense of self-transparency, in other words, is a kind of ‘unk-unk effect,’ what happens when we can’t see that we can’t see. In the absence of information to the contrary, what is globally broadcast (or integrated) for System 2 deliberative uptake, no matter how attenuated, seems to become everything there is to apprehend.

But what does it mean to say that default self-transparency ‘falls out of the structure of cognition’? Isn’t this, for instance, a version of ‘belief perseverance’? Prima facie, at least, something like Keith Stanovich’s (1999) ‘knowledge projection argument’ might seem to offer an explanation, the notion that “in a natural ecology where most of our prior beliefs are true, projecting our beliefs onto new data will lead to faster accumulation of knowledge” (Sá, 1999, 506). But as the analogy to Kahneman’s WYSIATI and Anton’s Syndrome should make clear, something considerably more profound than the ‘projection of prior beliefs’ seems to be at work here. The question is what.

Consider the following: On Carruthers’ innate self-transparency account, the assumption seems to be that short of the mindreading system telling us otherwise, we would know that something hinky is afoot. But how? To paraphrase Plato, how could we, having never seen otherwise, know that we were simply guessing at a parade of shadows? What kind of cognitive resources could we draw on? We couldn’t source the information back to the mindreading system. Neither could we compare it with some baseline–some introspective yardstick of informatic sufficiency. In fact, it’s actually difficult to imagine how we might come to doubt introspectively accessed information at all, short of regimented, deliberative inquiry.

So then why does Carruthers seem to make the opposite assumption? Why does he assume that we would know short of some representational device telling us otherwise?

To answer this question we first need to appreciate the ubiquity of ‘unk-unk effects’ in the natural world. The exploitation of cognitive scotomata or blind spots has shaped the evolution of entire species, including our own. Consider the apparently instinctive nature of human censoriousness, the implicit understanding that managing the behaviour of others requires managing the information they have available. Consider mimicry or camouflage. Or consider ‘obligate brood parasites’ such as the cuckoo, which lays its eggs in the nests of other birds to be raised to maturity by them. Looked at in purely biomechanical terms, these are all examples of certain organic systems exploiting (by operating outside) the detection/response thresholds of other organic systems. Certainly the details of these interactions remain a work in progress, but the principle is not at all mysterious. One might say the same of Anton’s syndrome or anosognosia more generally: disabling certain devices systematically impacts the capacities of the system in some dramatic ways, including deficit detection. The lack of information constrains computation, constrains cognition, period. It seems pretty straightforward, mechanically speaking.

So why, then, does Anton’s Syndrome jar against our epistemic intuitions the way it does? Why do we assume that somehow, even if we suffered the precise pattern of neural damage, we would be the magical exception, the one who would say, “Aha! I only think I see!”?

Because when we are blind to our blindnesses, we think we see, either actually or potentially, all that there is to be seen. Or as Kahneman would put it, because of WYSIATI. We think we would be the one Anton’s patient who would actually cognize their loss of sight, in other words, for the very same reason the Anton’s patient is convinced he can still see! The lack of information not only constrains cognition, it constrains cognition in ways that escape cognition. We possess, not a representational presumption of introspective omniscience, but a structural inability to cognize the limits of metacognition.

You might say introspection is a kind of anosognosiac.

So why does Carruthers assume the mindreading system needs an incorrigibility device? The Accomplishment Assumption forces his hand, certainly. He thinks he has an apparently discrete intuition–self-transparency–that has to be generated somehow. But in explaining away the intuition he is also paradoxically serving it, because even if we agree with Carruthers, we nonetheless assume we would know something is up if incorrigibility wasn’t somehow signalled. There’s a sense, in other words, in which Carruthers’ argument against self-transparency appeals to it!

Now this broaches the question of how informatic neglect bears on our epistemic intuitions more generally. My goal here, however, is simply to illustrate, through an account of the role it plays in introspection, that informatic neglect must play a pivotal role in our understanding of cognition. Suffice to say, the ‘basic principle or law’ that Carruthers considers in passing is actually more basic than the ‘disposition to believe in the absence of countervailing considerations.’ Our cognitive systems simply cannot allow, to use Kahneman’s terms, for information they do not have. This is a brute fact of natural information processing systems.

Sufficiency is the default because information, understood as systematic differences making systematic differences, is effective. This is why, for instance, unknowns must be known unknowns before they can effect changes in behaviour. And this is what makes research on cognitive biases and the neuropathologies of neglect so unsettling: they clearly show the way we are mere mechanisms, cognitive systems causally bound to the information available. If the informatic and cognitive limits of introspection are not available for introspection (and how could they be?), then introspection will seem, curiously, limitless, no matter how severe the actual limits may be.

The potential severity of those limits remains to be seen.

.

Introspection and the Bayesian Brain

Since unknown unknowns offer FIG nothing to follow, it should perhaps come as no surprise that the potential relevance of unk-unks has itself remained an unknown unknown in cognitive science. The idea proposed here is that ‘naive introspection’ be viewed as a kind of natural anosognosia, as a case where we think we see, even though we are largely blind. It stands, therefore, squarely in the ‘introspective unreliability’ camp most forcefully defended by Eric Schwitzgebel (2007, 2008, 2011a, 2011b, 2012). Jacob Hohwy (2011, 2012), however, has offered a novel defence of introspective reliability via a sustained consideration of Karl Friston’s (2006, 2012, for an overview) free energy elaboration of the Bayesian brain hypothesis, an approach which has recently been making inroads due to the apparent comprehensiveness of its explanatory power.

Hohwy (2011) argues that the introspective unreliability suggested by Schwitzgebel is in fact better explained by phenomenological variability. Introspection only appears as unreliable as it does on Schwitzgebel’s account, Hohwy argues, because that account assumes a relatively stable phenomenology. “The evidence,” Hohwy writes, “can be summarized like this: everyday or ‘naive’ introspection tells us that our phenomenology is stable and certain but, surprisingly, calm and attentive introspection tells us our phenomenology is not stable and certain, rather it is variable and uncertain” (265). In other words, either ‘attentive introspection’ is unreliable and phenomenology is stable, or ‘naive introspection’ is unreliable and phenomenology is in fact variable.

Hohwy identifies at least three sources of potential phenomenological variability on Friston’s free energy account: 1) attenuation of the ‘prediction error landscape’ through ‘inferences’ that cancel out predictive success and allow unpredicted input to ascend; 2) change through ‘agency’ and movement; and 3) increase in precision and gain via attention. Thus, he argues “[i]f the brain is this kind of inference-machine, then it is a fundamental expectation that there is variability in the phenomenology engendered by perceptual inferences, and to which introspection in turn has access” (270).

The problem with saving introspective reliability by arguing for phenomenal variability, however, is that it becomes difficult to understand what, in operational terms, is actually being saved. Is the target too quick? Or is the tracking too slow? Hohwy can adduce evidence and arguments for the variability of conscious experience, and Schwitzgebel can adduce evidence and arguments for the unreliability of introspection, but there is a curious sense in which their conclusions are the same: in a number of respects, conscious experience eludes introspective cognition.

Setting aside this argument, the real value in Hohwy’s account lies in his consideration of what might be called introspective applicability and introspective interference. Regarding the first, applicability, Hohwy is concerned with distinguishing those instances where the researcher’s request, ‘Please, introspect,’ is warranted from those where it is ‘suboptimal.’ He discusses the so-called ‘default mode network,’ the systems of the brain engaged when the subject’s thoughts and imagery are detached from the world, as opposed to the systems engaged when the subject is directly involved with his or her environment. He then argues that the variance in introspective reliability found across experiments can be explained by whether the mental tasks involved engage the default mode or the environmental mode. Tasks involving the default mode evince greater reliability than tasks involving the environmental mode, he suggests, simply because the request to introspect is profoundly artificial in the latter.

His argument, in other words, is that introspection, as an adaptive, evolutionary artifact, is not a universally applicable form of cognition, and that the apparent unreliability of introspection is potentially a product of researchers asking subjects to apply introspection ‘out of bounds,’ in ways that it simply was not designed to be used. In ecological rationality terms (Todd and Gigerenzer, 2012), one might say introspection is a specialized cognitive tool (or collection of tools), a heuristic like any other, and as such will only function properly to the degree that it is matched to its ‘ecology.’ This possibility raises a host of questions. If introspection, far from being the monolithic, information-maximizing faculty assumed by the tradition, is actually a kind of cognitive tool box, a collection of heuristics adapted to discharge specific functions, then we seem to be faced with the onerous task of identifying the tools and matching them to the appropriate tasks.

Regarding introspective interference, the question, to paraphrase Hohwy, is whether introspection changes phenomenal states or leaves them as they are (262). In the course of discussing the likelihood that introspection involves a plurality of processes pertaining to different domains, he provides the following footnote:

Another tier can potentially be added to this account, directed specifically at the cognitive mechanisms underpinning introspection itself. If introspection is itself a type of internal predictive inference taking phenomenal states as input, then introspective inference would be subject to the similar types of prediction error dynamics as perceptual inference itself. In this way introspective inference about phenomenality would add variability to the already variable phenomenality. This sketch of an approach to introspection is attractive because it treats introspection as also a type of unconscious inference; however, it remains to be seen if it can be worked out in satisfactory detail and I do not here want to defend introspection by subscribing to a particular theory about it. (270)

By subscribing to Friston’s free energy account, Hohwy is committed to an account that conceives the brain as a mechanism that extracts information regarding the causal structure of its environment via the sensory effects of that environment. As Hohwy (2012) puts it, a ‘problem of representation’ follows from this, since the brain is stranded with sensory effects and so has no direct access to causes. As a result it needs to establish causal relations de novo. Sensory input contains patterns as well as noise, the repetition of which allows the formation of predictions, which can be ‘tested’ against further repetitions. Prediction error minimization (PEM) allows the system to automatically adapt to real causal patterns in the environment, which can then be said to ‘supervise’ the system. The idea is that the brain contains a hierarchy of ascending PEM levels, beginning with basic sensory and causal regularities, and with the ‘harder to predict’ signals being passed upward, ultimately producing representations of the world possessing ‘causal depth.’ All these levels exhibit ‘lateral connectivity,’ allowing the refinement of prediction via ‘contextual information.’
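
A minimal sketch may help fix the picture, with the caveat that it is a cartoon of the hierarchical scheme just described, not Friston’s free energy formalism: the class, learning rate, and input signal are invented for illustration, and top-down prediction and the ‘lateral connectivity’ just mentioned are omitted. Each level simply learns to predict its input and passes only its residual error upward:

```python
import numpy as np

rng = np.random.default_rng(1)

class Level:
    """One level in a toy predictive hierarchy: it learns to predict
    its input and forwards only the residual (the prediction error)."""
    def __init__(self, lr=0.05):
        self.prediction = 0.0
        self.lr = lr

    def step(self, signal):
        error = signal - self.prediction    # prediction error
        self.prediction += self.lr * error  # nudge prediction to shrink it
        return error                        # only the error ascends

levels = [Level() for _ in range(3)]
for _ in range(2000):
    signal = 1.0 + 0.1 * rng.normal()      # a regular, predictable input
    for level in levels:
        signal = level.step(signal)        # harder-to-predict residue climbs

# The low-level regularity is absorbed early; little reaches the top.
print([round(level.prediction, 3) for level in levels])
```

The regularity is ‘explained away’ at the lowest level, so almost nothing ascends: this is the sense in which successfully predicted signals become invisible to the levels above.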

Although the free energy account is not an account of consciousness, it does seem to explain what Floridi (2011) calls the ‘one dimensionality of experience,’ the way, as he writes, “experience is experience, only experience, and nothing but experience” (296). If the brain is a certain kind of Bayesian causal inference engine, then one might expect the generative models it produces to be utterly lacking any explicit neurofunctional information, given the dedication of neural structure and function to minimizing environmental surprise. One might expect, in other words, that the causal structure of the brain will be utterly invisible to the brain, that it will remain, out of structural necessity, a dreaded unknown unknown–or unk-unk.

The brain, on this kind of prediction error minimization account, simply has to be ‘blind’ to itself. And this is where, far from ‘attractive’ as Hohwy suggests, the mere notion of ‘introspection’ modelled on prediction error minimization becomes exceedingly difficult to understand. Does introspection (or the plurality of processes we label as such) proceed via hierarchical prediction error minimization from sensory effects to build generative models of the causal structure of the human brain? Almost certainly not. Why? Because as a free energy minimizing mechanism (or suite of mechanisms), introspection would seem to be thoroughly hobbled for at least four different reasons:

  • 1) Functional dependence: On the free energy account, the human brain distills the causal structure of its environments from the sensory effects of that causal structure. One might, on this model, isolate two distinct vectors of causality, one, which might be called the ‘lateral,’ pertaining to the causal structure of the environment, and another, which might be called the ‘medial,’ pertaining to the causal structure of sensory inputs and the brain. As mentioned above, the brain can only model the lateral vector of environmental causal structure by neglecting the medial vector of its own causal structure. This neglect requires that the brain enjoy a certain degree of functional independence from the causal structure of its environment, simply because ‘medial interference’ will necessarily generate ‘lateral noise,’ thus rendering the causal structure of the environment more difficult, if not impossible, to model. The sheer interconnectivity of the brain, however, would likely render substantial medial interference difficult for any introspective device (or suite of devices) to avoid. (A toy illustration of this interference follows the list.)
  • 2) Structural immobility: Proximity complicates cognition. To get an idea of the kind of modelling constraints any neurally embedded introspective device would suffer, think of the difference between two anthropologists trying to understand a preliterate tribesman from the Amazon, the one ranging freely with her subject in the field, gathering information from a plurality of sources, the other locked with him in a coffin. Since it is functionally implicated–or brainbound–relative to its target, the ability of any introspective device (or suite of devices) to engage in ‘active inference’ would be severely restricted. On Friston’s free energy account the passive reception of sensory input is complemented by behavioural outputs geared to maximizing information from a variety of positions within the organism’s environment, thus minimizing the likelihood of ‘perspectival’ or angular illusions, false inferences due to the inability to test predictions from alternate angles and positions. Geocentrism is perhaps the most notorious example of such an illusion. Given structural immobility, one might suppose, any introspective device (or suite of devices) would suffer ‘phenomenal’ analogues to this and other illusions pertaining to limits placed on exploratory information-gathering.
  • 3) Cognitive resources: If we assume that human introspective capacity is a relatively recent evolutionary adaptation, we might expect any introspective device (or suite of devices) to exploit preexisting cognitive resources, which is to say, cognitive systems primarily adapted to environmental prediction error minimization. For instance, one might argue that both (1) and (2) fairly necessitate the truth of something like Carruthers’ mindreading account, particularly if (as seems to be the case) mindreading antedates introspection. Functional dependence and structural immobility suggest that we are actually in a better position mechanically to accurately predict the behaviour of others than ourselves, as indeed a growing body of evidence indicates (Carruthers (2009) provides an excellent overview). Otherwise, given our apparent ability to attend to the whole of experience, does it make sense, short of severe evolutionary pressure, to presume the evolution of entirely novel cognitive systems adapted to the accurate modelling of second-order, medial information? It seems far more likely that access to this information was incremental across generations, and that it was initially selected for the degree to which it proved advantageous given our preexisting suite of environmentally oriented cognitive abilities.
  • 4) Target complexity: Any introspective device (or suite of devices) modelled on the PEM (or, for that matter, any other mechanistic) account must also cope with the sheer functional complexity of the human brain. It is difficult to imagine, particularly given (1), (2), and (3) above, how the tracking that results could avoid suffering out-and-out astronomical ‘resolution deficits’ and distortions of various kinds.
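
As a toy illustration of the first of these constraints, functional dependence, consider the following sketch (everything in it is invented for illustration; it models no actual neural process). A latent ‘environmental’ process is tracked by a crude smoothing model; the ‘embedded’ observer differs from the ‘external’ one only in that its act of measurement feeds noise back into the process being tracked, medial interference generating lateral noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def tracking_error(medial_interference: float, n: int = 10_000) -> float:
    """Mean squared tracking error of a crude smoothing 'model'
    following a latent AR(1) process. If medial_interference > 0,
    each act of measurement perturbs the very process measured."""
    x, estimate, total = 0.0, 0.0, 0.0
    for _ in range(n):
        x = 0.9 * x + rng.normal()               # latent causal process
        obs = x + rng.normal(scale=0.5)          # sensory effect
        estimate += 0.5 * (obs - estimate)       # update the model
        x += medial_interference * rng.normal()  # measurement back-reaction
        total += (estimate - x) ** 2
    return total / n

print("external observer:", round(tracking_error(0.0), 2))
print("embedded observer:", round(tracking_error(2.0), 2))
```

The numbers matter less than the structure: an observer that cannot decouple itself from its target pays for every act of observation in noise added to the very thing it is trying to model.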

The picture these complicating factors paint is sobering. Any introspective device (or suite of devices) modelled on free energy Bayesian principles would be almost fantastically crippled: neurofunctionally embedded (which is to say, functionally entangled and structurally imprisoned) in the most complicated machinery known, accessing information for environmentally biased cognitive systems. Far from what Hohwy supposes, the problems of applicability and interference, when pursued through a free energy lens, at least, would seem to preclude introspection as a possibility.

But there is another option, one that would be unthinkable were it not for the pervasiveness and profundity of the unk-unk effect: that this is simply what introspection is, a kind of near blindness that we confuse for brilliant vision, simply because it’s the only vision we know.

The problem facing any mechanistic account of introspection can be generalized as the question of information rendered and cognitive system applied: to what extent is the information rendered insufficient, and to what extent is the cognitive system activated misapplied? This, I would argue, is the great fork in the FIG road. On the ‘information rendered’ side of the issue, informatic neglect means the assumption of sufficiency. We have no idea, as a rule, whether we have the information we need for effective deliberation or not. One need only consider the staggering complexity of the brain–complex enough to stymie a science that has puzzled through the origins of the universe in the meantime–to realize the astronomical amounts of information occluded by metacognition. On the ‘cognitive system applied’ side, informatic neglect means the assumption of universality. We have no idea, as a rule, whether we’re misapplying ‘introspection’ or not. One need only consider the heuristic nature of human cognition, the fact that heuristics are adaptive and so matched to specific sets of problems, to realize that introspective misapplications, such as those argued by Hohwy, are likely an inevitability.

This is the turn where unknown unknowns earn their reputation for dread. Given the informatic straits of introspection, what are the chances that we, blind as we are, have anything approaching the kind of information we require to make accurate introspective judgments regarding the ‘nature’ of mind and consciousness? Given the heuristic limitations of introspection, what are the chances that we, blind as we are, somehow manage to avoid colouring far outside the cognitive lines? Is it fair to assume that the answer is, ‘Not good’?

Before continuing to consider this question in more detail, it’s worth noting how this issue of informatic availability and cognitive applicability becomes out-and-out unavoidable once you acknowledge the problem of the ‘dreaded unknown unknowns.’ If the primary symptom of patients suffering neuropathological neglect is the inability to cognize their cognitive deficits, then how do we know that we don’t suffer from any number of ‘natural’ forms of metacognitive neglect? The obvious answer is, We don’t. Could what we call ‘philosophical introspection’ simply be a kind of mitigated version of Anton’s Syndrome? Could this be the reason why we find consciousness so stupendously difficult to understand? Given millennia of assuming the best of introspection and finding only perplexity, perhaps, finally, the time has come to assume the worst, and to reconceptualize the problematic of consciousness in terms of privation, distortion, and neglect.

.

Conclusion: Introspection, Tangled and Blind

Cognitive science and philosophy of mind suffer from a profound scotoma, a blindness to the structural role blindness plays in our intuitive assumptions. As we saw in passing, FIG actually plays into this blindness, encouraging theorists and researchers to conceive the relationship between information and experience exclusively in what I called Accomplishment terms. If self-transparency is the ubiquitous assumption, then it follows that some mechanism possessing some ‘self-transparency representation’ must be responsible. Informatic neglect, however, allows us to see it in more parsimonious, structural terms, as a positive, discrete feature of human cognition possessing no discrete neurofunctional correlate. And this, I would argue, counts as a game-changer as far as FIG is concerned. The possibility that various discrete features of cognition and consciousness could be structural expressions of various kinds of informatic neglect not only rewrites the rules of FIG, it drastically changes the field of play.

That FIG needs to be sensitive to informatic neglect I take as uncontroversial. Informatic neglect seems to be one of those peculiar issues that everyone acknowledges but never quite sees, one that goes without saying because it goes unseen. Schwitzgebel (2012), for instance, provides a number of examples of the complications and ambiguities attending ‘acts of introspection’ to call attention to the artificial division of introspective and non-introspective processes, and in particular, to what might be called the ‘transparency problem,’ the way judgments about experience effortlessly slip into judgments about the objects/contents of experience. Given this welter of obscurities, complicating factors, not to mention the “massive interconnection of the brain,” he advocates what might be called a ‘tangled’ account of introspective cognitive processes:

What we have, or seem to have, is a cognitive confluence of crazy spaghetti, with aspects of self-detection, self-shaping, self-fulfilment, spontaneous expression, priming and association, categorical assumptions, outward perception, memory, inference, hypothesis testing, bodily activity, and who only knows what else, all feeding into our judgments about current states of mind. To attempt to isolate a piece of this confluence as the introspective process – the one true introspective process, though influenced by, interfered with, supported by, launched or halted by, all the others – is, I suggest, like trying to find the one way in which a person makes her parenting decisions… (19)

If you accept his conclusion as a mere possibility (or, as I would argue, a distinct probability), then you implicitly accept much of what I’m saying here regarding informatic neglect. You accept that introspection could be massively plural while appearing to be unitary. You accept that introspection could be skewed and distorted while appearing to be the very rule. How could this be, short of informatic neglect? Recall Pronin’s (2009) ‘bias blind spots,’ or Hohwy’s (2011) mismatched ‘plurality of processes.’ How could it be that we swap between cognitive systems oblivious, with nothing, no intuition, no feel, to demarcate any transitions, let alone their applicability? As I hope should be clear, this question is simply a version of Carruthers’ question from above: How could it be that we once unanimously thought introspection incorrigible? Both questions ask the same thing of introspection, namely, To what extent are the various limits of introspection available to introspection?

The answer, quite simply, is that they are not. Introspection is out-and-out blind to its internal structure, its cognitive applicability, and its informatic insufficiencies–let alone to its neurofunctionality. To the extent that we fail to recognize these blindnesses, we are effectively introspective anosognosiacs, simply hoping that things are ‘just so.’ And this is just to say that informatic neglect, once acknowledged, constitutes a genuine theoretical crisis, for philosophy of mind as well as for cognitive science, insofar as their operational assumptions turn on interpretations of information gleaned, by hook or by crook, from ‘introspection.’

Of course, the ‘problem of introspection’ is nothing new (in certain circles, at least). The literature abounds with attempts to ‘sanitize’ introspective data for scientific consumption. Given this, one might wonder what distinguishes informatic neglect from the growing army of experimental confounds already identified. Perhaps the appropriate methodological precautions will allow us to quarantine the problem. Schooler and Schreiber (2004), for instance, offer one such attempt to ‘massage’ FIG in such a way as to preserve the empirical utility of introspection. After considering a variety of ‘introspective failures,’ they pin the bulk of the blame on what they call ‘translation dissociations’ between consciousness and meta-consciousness, the idea being that the researcher’s demand, ‘Please, introspect,’ forces the subject to translate information available for introspection into action. They categorize three kinds of translation dissociations: 1) detection, where the ‘signal’ to be introspected is too weak or ambiguous; 2) transformation, where tasks “require intervening operations for which the system is ill-equipped” (32); and 3) substitution, where the information rendered has no connection to the information experimentally targeted. Once these ‘myopias’ are identified, the assumption is, methodologies can be designed to act as corrective lenses.

The problem that informatic neglect poses for FIG, however, is far and away more profound. To see this, one need only consider the dichotomy of ‘consciousness versus metaconsciousness,’ and the assumption that there is some fact of the matter pertaining to the first that is in principle accessible to the latter. The point isn’t that no principled distinction can be made between the two, but rather that even if it can, the putative target, consciousness, is every bit as susceptible to informatic neglect as any metaconscious attempt to cognize it. The assumption is simply this: Information that finds itself globally broadcast or integrated will not, as a rule, include information regarding its ‘limits.’ Insofar as we can assume this, we can assume that informatic neglect isn’t so much a ‘problem of introspection’ as a problem afflicting consciousness as a whole.

Our sketch of Friston’s Bayesian brain above demonstrated why this must be the case. Simply ask: What would the brain require to accurately model itself from within itself? On the PEM account, the brain is a dedicated causal inference engine, as it must be, given the difficulties of isolating the causal structure of its environment from sensory effects. This means that the brain has no means of modelling its own causal structure, short of either 1) analogizing from brains found in its environment, or 2) developing some kind of onboard ‘secondary inference’ system, one which, as was argued above, we should expect would face a number of dramatic informatic and cognitive obstacles. Functionally entangled with, structurally immured in, and heuristically mismatched to the most complicated machinery known, such a secondary inference system, one might expect, would suffer any number of deficits, all the while assuming itself incorrigible simply because it lacks any direct means of detecting otherwise.

Consciousness could very well be a cuckoo, an imposter with ends or functions all its own, and we would never be able to intuit otherwise. As we have seen, from the mechanistic standpoint this has to be a possibility. And given this possibility, informatic neglect plainly threatens all our assumptions. Once again: What would the brain require to model itself from within itself? What evolutionary demands were answered how? Bracket, as best you can, your introspective assumptions, and ask yourself how many ways these questions can be cogently answered. Far more than is friendly to our intuitive assumptions–these little blind men who wander out of the darkness telling fantastic and incomprehensible tales.

Even apparent boilerplate intuitions like efficacy become moot. The argument that the brain is generally efficacious is trivial. Given that the targets of introspective tracking are systematically related to the function of the brain, informatic neglect (and the illusion of sufficiency in particular) suggests that what we introspect or intuit will evince practical efficacy no matter how drastically its actual neural functions differ from or even contradict our manifest assumptions. Neurofunctional dissociations, as unknown unknowns, simply do not exist for metacognition. “[T]he absence of representation,” as Dennett (1991) famously writes, “is not the same as the representation of absence” (359). Since the ‘unk-unk effect’ has no effect, cognition is stranded with assumptive sufficiency on the one hand, and the efficacy of our practices on the other. Informatic neglect, in other words, means that our manifest intuitions (not to mention our traditional assumptions) of efficacy are all but worthless. The question of the efficacy of what philosophers think they intuit or introspect is what it has always been: a question that only a mature neuroscience can resolve. And given that nothing biases intuition or introspection toward ‘friendly’ outcomes over unfriendly outcomes, we need to grapple with the fact that any future neuroscience is far more likely to be antagonistic to our intuitive, introspective assumptions than otherwise. There are far more ways for neurofunctionality to contradict our manifest and traditional assumptions than to rescue them. And perhaps this is precisely what we should expect, given the dismal history of traditional discourses once science colonizes their domain.
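
Dennett’s distinction is easy to state in computational terms. In the toy self-model below (the keys and values are invented for illustration), an explicitly represented deficit can drive a report, while a channel that simply never appears in the model behaves exactly as if there were nothing to report:

```python
# Representation of absence: the deficit is explicitly modelled.
self_model_a = {"vision": "absent"}

# Absence of representation: the channel never appears at all.
self_model_b = {}

def report(model: dict) -> str:
    # A query cannot distinguish 'never modelled' from 'nothing
    # wrong' -- only an explicit entry can surface as a deficit.
    return model.get("vision", "nothing to report")

print(report(self_model_a))  # -> absent
print(report(self_model_b))  # -> nothing to report
```

Only the first model can ever surprise its owner; the second is ‘incorrigible’ by default, for free.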

It is worth noting that a priori arguments simply beg the question here, since it is entirely possible (indeed, given the free energy account, probable) that evolution stranded us with suboptimal metacognitive capacities. One might simply ask, for instance, from where do our intuitions regarding the a priori come?

Evolutionary arguments, on the other hand, cut both ways. Everyone agrees that our general metacognitive capacities are adaptations of some kind, but adaptations for what? The accurate second-order appraisals of cognitive structure or ‘mind’ more generally? Seems unlikely. As far as we know, our introspective capacities could be the result of very specific evolutionary demands that required only gross distortions to be discharged. What need did our ancestors have for ‘theoretical descriptions of the mental’? Given informatic neglect (and the spectre of ‘Carruthers’ Syndrome’), evolutionary appeals would actually seem to count against the introspectionist, insofar as any story told would count as ‘just so,’ and thus serve to underscore the improbability of that story.

Again, the two questions to be asked are: What would the brain require to model itself from within itself? What evolutionary demands were answered how? Informatic neglect, the dreaded unknown unknown, allows us to see how many ways these questions can be answered. In doing so, it makes plain the dramatic extent of our anosognosia in thinking we had won the magical introspection lottery.

Short of default self-transparency, why would anyone trust in any intuitions incompatible with those that underwrite the life sciences? If it is the case that evolution stranded us with just enough second-order information and cognitive resources to discharge a relatively limited repertoire of processes, then perhaps the last two millennia of second-order philosophical perplexity should not surprise us. Maybe we should expect that science, when it finally provides a detailed picture of informatic availability and cognitive applicability, will be able to diagnose most traditional philosophical problematics as the result of various, unavoidable cognitive illusions pertaining to informatic depletion, distortion and neglect. Then, perhaps, we will at last be able to see the terrain of perennial philosophical problems as a kind of ‘free energy landscape’ sustained by the misapplication of various, parochial cognitive systems to insufficient information. Perhaps noocentrism, like biocentrism and geocentrism before it, will become the purview of historians, a third and final ‘narcissistic wound.’

.

References

Armor, D., Taylor, S. (1998). Situated optimism: specific outcome expectancies and self-regulation. In M. P. Zanna (ed.), Advances in Experimental Social Psychology. 30. 309-379. New York, NY: Academic Press.

Baars, B. (1988). A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press.

Bakker, S. (2012). The last magic show: a blind brain theory of the appearance of consciousness. Retrieved from http://www.academia.edu/1502945/The_Last_Magic_Show_A_Blind_Brain_Theory_of_the_Appearance_of_Consciousness

Bechtel, W., and Abrahamsen, A. (2005). Explanation: a mechanist alternative. Studies in History and Philosophy of Biological and Biomedical Sciences. 36. 421-441.

Bechtel, W. (2008). Mental Mechanisms: Philosophical Perspectives on Cognitive Neuroscience. New York, NY: Psychology Press.

Carruthers, P. (forthcoming). On knowing your own beliefs: a representationalist account. Retrieved from http://www.philosophy.umd.edu/Faculty/pcarruthers/On%20knowing%20your%20own%20beliefs.pdf * [In Nottelman (ed.). New Essays on Belief: Structure, Constitution and Content. Palgrave MacMillan]

Carruthers, P. (2011). The Opacity of Mind: An Integrative Theory of Self-Knowledge. Oxford: Oxford University Press.

Carruthers, P. (2009a). Introspection: divided and partly eliminated. Philosophy and Phenomenological Research. 80(1). 76-111.

Carruthers, P. (2009b). How we know our own minds: the relationship between mindreading and metacognition. Behavioral and Brain Sciences. 1-65. doi:10.1017/S0140525X09000545

Carruthers, P. (2008). Cartesian epistemology: is the theory of the self-transparent mind innate? Journal of Consciousness Studies. 15(4). 28-53.

Carruthers, P. (2006). The Architecture of the Mind: Massive Modularity and the Flexibility of Thought. Oxford: Clarendon Press.

Dennett, D. C. (2002). How could I be wrong? How wrong could I be? Journal of Consciousness Studies. 9. 1-4.

Dennett, D. C. (1991). Consciousness Explained. Boston, MA: Little Brown.

Dewey, J. (1958). Experience and Nature. New York, NY: Dover Publications.

Ehrlinger, J., Gilovich, T., and Ross, L. (2005). Peering into the bias blind spot: people’s assessments of bias in themselves and others. Personality and Social Psychology Bulletin, 31. 680-692.

Floridi, L. (2011). The Philosophy of Information. Oxford: Oxford University Press.

Friston, K. (2012). A free energy principle for biological systems. Entropy, 14. doi: 10.3390/e14112100.

Friston, K., Kilner, J., and Harrison, L. (2006). A free energy principle for the brain. Journal of Physiology – Paris, 100(1-3). 70-87.

Gigerenzer, G., Todd, P. and the ABC Research Group. (1999). Simple Heuristics that Make Us Smart. Oxford: Oxford University Press.

Heilman, K. and Harciarek, M. (2010). Anosognosia and anosodiaphoria of weakness. In G. P. Prigatano (ed.), The Study of Anosognosia. 89-112. Oxford: Oxford University Press.

Helweg-Larsen, M. and Shepperd, J. (2001). Do moderators of the optimistic bias affect personal or target risk estimates? A review of the literature. Personality and Social Psychology Review, 5. 74-95.

Hohwy, J. (2012). Attention and conscious perception in the hypothesis testing brain. Frontiers in Psychology, 3(96). 1-14. doi: 10.3389/fpsyg.2012.00096

Hohwy, J. (2011). Phenomenal variability and introspective reliability. Mind & Language, 26(3). 261-286.

Huang, G. T. (2008). Is this a unified theory of the brain? The New Scientist. (2658). 30-33.

Hurlburt, R. T. and Schwitzgebel, E. (2007). Describing Inner Experience? Proponent Meets Skeptic. Cambridge, MA: MIT Press.

Irvine, E. (2012). Consciousness as a Scientific Concept: A Philosophy of Science Perspective. New York, NY: Springer.

Kahneman, D. (2011, October 19). Don’t blink! The hazards of confidence. The New York Times. Retrieved from http://www.nytimes.com/2011/10/23/magazine/dont-blink-the-hazards-of-confidence.html?pagewanted=all&_r=0

Kahneman, D. (2011). Thinking, Fast and Slow. Toronto, ON: Doubleday Canada.

Lopez, J. K., and Fuxjager, M. J. (2012). Self-deception’s adaptive value: effects of positive thinking and the winner effect. Consciousness and Cognition. 21. 315-324.

Prigatano, G. and Wolf, T. (2010). Anton’s Syndrome and unawareness of partial or complete blindness. In G. P. Prigatano (ed.), The Study of Anosognosia. 455-467. Oxford: Oxford University Press.

Pronin, E. (2009). The introspection illusion. In M. P. Zanna (ed.), Advances in Experimental Social Psychology, 41. 1-68. Burlington: Academic Press.

Sa, W. C., West, R. F. and Stanovich, K. E. (1999). The domain specificity and generality of belief bias. Journal of Educational Psychology, 91(3). 497-510.

Schooler, J. W., and Schreiber, C. A. (2004). Experience, meta-consciousness, and the paradox of introspection. Journal of Consciousness Studies. 11. 17-39.

Schwitzgebel, E. (2012). Introspection, what? In D. Smithies & D. Stoljar (eds.), Introspection and Consciousness. Oxford: Oxford University Press.

Schwitzgebel, E. (2011a). Perplexities of Consciousness. Cambridge, MA: MIT Press.

Schwitzgebel, E. (2011b). Self-Ignorance. In J. Liu and J. Perry (eds.), Consciousness and the Self. Cambridge, MA: Cambridge University Press.

Schwitzgebel, E. (2008). The unreliability of naive introspection. Philosophical Review, 117(2). 245-273.

Sklar, A. Y., Levy, N., Goldstein, A., Mandel, R., Maril, A., and Hassin, R. R. (2012). Reading and doing arithmetic nonconsciously. Proceedings of the National Academy of Sciences. 1-6. doi: 10.1073/pnas.1211645109.

Stanovich, K. E. (1999). Who is Rational? Studies of Individual Differences in Reasoning. Mahwah, NJ: Lawrence Erlbaum Associates.

Stanovich, K. E. and Toplak, M. E. (2012). Defining features versus incidental correlates of Type 1 and Type 2 processing. Mind and Society. 11(1). 3-13.

Taylor, S. and Brown, J. (1988). Illusion and well-being: a social psychological perspective on mental health. Psychological Bulletin, 103. 193-210.

There are known knowns. (2012, November 7). In Wikipedia. Retrieved from http://en.wikipedia.org/wiki/There_are_known_knowns

Todd, P., Gigerenzer, G., and the ABC Research Group. (2012). What is ecological rationality? Ecological Rationality: Intelligence in the World. 3-30. Oxford: Oxford University Press.

von Hippel, W., & Trivers, R. (2011). The evolution and psychology of self-deception. Behavioral and Brain Sciences, 34, 1–56.

Weinstein, E. A. and Kahn, R. L. (1955). Denial of Illness: Symbolic and Physiological Aspects. Springfield, IL: Charles C. Thomas.

Weinstein, N. (1980). Unrealistic optimism about future life events. Journal of Personality and Social Psychology, 39. 806-820.

Wigner, E. (1960). The unreasonable effectiveness of mathematics in the natural sciences. Richard Courant lecture in mathematical sciences delivered at New York University, May 11, 1959. Communications on Pure and Applied Mathematics. 13. 1-14. doi: 10.1002