Three Pound Brain

No bells, just whistling in the dark…


Do Zombies Dream of Undead Sheep?

by rsbakker

My wife gave me my first Kindle this Christmas, so I purchased a couple of those ‘If only I had a Kindle’ titles I have encountered over the years. I began with Routledge’s reboot of Brie Gertler’s collection, Privileged Access. The first essay happens to be Dretske’s “How Do You Know You are Not a Zombie?”, an article I have been hoping to post on for a while now as a means of underscoring the inscrutability of metacognitive awareness. To explain how you know you’re not a zombie, you need to explain how you know you possess conscious experience.

What Dretske is describing, in fact, is nothing other than medial neglect; our abject blindness to the structure and dynamics of our own cognitive capacities. What I hope to show is the way the theoretical resources of Heuristic Neglect Theory allow us to explain a good number of the perplexities uncovered by Dretske in this awesome little piece. If Gertler’s anthology demonstrates anything, it’s the abject inability of our traditional tools to decisively answer any of the questions posed. As William Lycan admits at the conclusion of his contribution, “[t]he moral is that introspection will not be well understood anytime soon.”

Dretske himself thinks his own question is ridiculous. He doesn’t believe he’s a zombie—he knows, in other words, that he possesses awareness. The question is how he or anyone else knows this. What in conscious experience evidences the conclusion that we are conscious or aware of that experience? “There is nothing you are aware of, external or internal,” Dretske will conclude, “that tells you that, unlike a zombie, you are aware of it.”

The primary problem, he suggests, is the apparent ‘transparency’ of conscious experience, the fact that attending to experience amounts to attending to whatever is being experienced.

“Watching your son do somersaults in the living room is not like watching the Olympics on television. Perception of your son may involve mental representations, but, if it does, the perception is not secured, as it is with objects seen on television, by awareness of these intermediate representations. It is the occurrence of (appropriately situated) representations in us, not our awareness of them that makes us aware of the external object being represented.”

Experience in the former sense, watching somersaults, is characterized by a lack of awareness of any intermediaries. Experience is characterized, in other words, by metacognitive insensitivity to the enabling dimension of cognition. This, as it turns out, is the definition of medial neglect.

So then, given medial neglect, what faculty renders us aware of our awareness? The traditional answer, of course, is introspection. But then the question becomes one of what introspection consists in.

“In one sense, a perfectly trivial sense, introspection is the answer to our question. It has to be. We know by introspection that we are not zombies, that we are aware of things around (and in) us. I say this is trivial because ‘introspection’ is just a convenient word to describe our way of knowing what is going on in our own mind, and anyone convinced that we know – at least sometimes – what is going on in our own mind and, therefore, that we have a mind and, therefore, that we are not zombies, must believe that introspection is the answer we are looking for.”

Introspection, he’s saying, is just the posit used to paper over the fact of medial neglect, the name for a capacity that escapes awareness altogether. And this, he points out, dooms inner sense models either to perpetual underdetermination, or the charge of triviality.

“Unless an inner sense model of introspection specifies an object of awareness whose properties (like the properties of beer bottles) indicate the facts we come to know about, an inner sense model of introspection does not tell us how we know we have conscious experiences. It merely tells us that, somehow, we know it. This is not in dispute.”

The problem is pretty clear. We have conscious experiences, but we have no conscious experience of the mechanisms mediating conscious experience. But there’s a further problem as well. As Stanislas Dehaene puts it, “[w]e constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (Consciousness and the Brain, 79). Our insensitivity to the structure and dynamics of cognition out-and-out entails insensitivity to the limits of cognition as well.

“There is a perspective we have on the world, a ‘boundary’, if you will, between things we see and things we don’t see. And of the things we see, there are parts (surfaces) we see and parts (surfaces) we don’t see. This partition determines a point of view that changes as we move around.”

What Dretske calls ‘partition’ here, Continental phenomenologists call ‘horizon,’ an experiential boundary that does not appear within experience—what I like to call a ‘limit-with-one-side’ (LWOS). The most immediately available—and quite dramatic, I think—example is the boundary of your visual field, the way vision trails into oblivion instead of darkness. To see the boundary of seeing as such we would have to see what lies beyond sight. To the extent that darkness is something seen, it simply cannot demarcate the limit of your visual field.

“Points of view, perspectives, boundaries and horizons certainly exist in vision, but they are not things you see. You don’t see them for the same reason you don’t feel the boundaries between objects you touch and those you don’t. Tactile boundaries are not tactile and visual boundaries are not visible. There is a difference between the surfaces you see and the surfaces you don’t see, and this difference determines a ‘point of view’ on the world, but you don’t see your point of view.”

Our perspective, in other words, is hemmed at every turn by limits-with-one-side. Conscious experience possesses what might be called a multi-modal neglect structure: limits on availability and capacity that circumscribe what can be perceived or cognized.

When it comes to environmental cognition, the horizons are both circumstantially contingent, varying according to things like position and prior experience, and congenital, fixed according to our various sensory and cognitive capacities. We can chase a squirrel around a tree (to use James’ famous example from What Pragmatism Means), engage in what Karl Friston calls ‘active inference,’ but barring scientific instrumentation, we cannot chase a squirrel around the electromagnetic spectrum. We can see the backside of countless environmental features, but we have no way of contemporaneously seeing the biological backside of sight. (As Wittgenstein famously puts it in the Tractatus, “nothing in the visual field allows you to infer it is seen by an eye” (5.633).) For some reason, all of our cognitive and perceptual modalities suffer their own version of medial neglect.

For Dretske, the important point is the Heideggerean one (though I’m sure the closest he ever came to Heidegger was a night of drinking with Dreyfus!): that LWOS prevent any perspective on our perspective as such. For a perspective to contemporaneously appear in experience, it would cease to possess LWOS and so cease to be a perspective.

We perceive and cognize but a slice of ourselves and our environments, as must be the case on any plausible biological account of cognition. In a sense, what Dretske is calling attention to is so obvious as to escape interrogation altogether: Why medial neglect? We have a vast number of cognitive degrees of freedom relative to our environments, and yet we have so few relative to ourselves. Why? Biologically speaking, why should a human find itself so difficult to cognize?

Believe it or not, no one in Gertler’s collection tackles this question. In fact, since they begin presuming the veracity of various traditional ontologizations of experience and cognition, consciousness and intentionality, they actually have no way of posing this question. Rather than seeing the question of self-knowledge as the question of how a brain could possibly communicate/cognize its own activity, they see it as the question of how a mind can know its own mental states. They insist on beginning, as Dretske shows, where the evidence is not.

Biologically speaking, humanity was all but doomed to be confounded by itself. One big reason is simply indisposition: the machinery of seeing is indisposed, too busy seeing. This is what renders modality-specific medial neglect—our inability ‘to see seeing’ and the like—inescapable. Another involves the astronomical complexity of cognitive processes. Nothing prevents us from seeing where touch ends, or where hearing is mistaken. What one modality neglects can be cognized by another, then subsequently integrated. The problem is that the complexity of these cognitive processes far, far outruns the capacity of any modality to cognize them. As the bumper-sticker declares, if our brains were so simple we could understand them, we would be too simple to understand our brains!

The facts underwriting medial neglect mean that, from an evolutionary perspective, we should expect cognitive sensitivity to enabling systems to be opportunistic (special purpose) as opposed to accurate (general purpose). Suddenly Dretske’s question of how we know we’re aware becomes the far less demanding question of how a species such as ours could come to report awareness. As Dretske says, we perceive/cognize but a slice of our environments, those strategic bits unearthed by evolution. Given that introspection is a biological capacity (and what else would it be?), we can surmise that it perceives/cognizes but a slice as well. And given the facts of indisposition and complexity, we can suppose that slice will be both fractionate and heuristic. In other words, we should expect introspection (to the extent it makes sense to speak of any such unified capacity) to consist of metacognitive hacks geared to the solution of ancestral problems.

What Gertler and her academic confreres call ‘privileged access’ is actually a matter of specialized access and capacity, the ability to derive as many practical solutions as possible out of as little information as possible.

So what are we to make of the philosophical retasking of these metacognitive hacks? Given our blindness to the structure and dynamics of our metacognitive capacities, we had no way of intuiting how few degrees of metacognitive freedom we possessed—short, that is, of the consequences of our inquiries. How much more evidence of this lack of evidence do we need? Brie Gertler’s anthology, I think, wonderfully illustrates the way repurposing metacognitive hacks to answer philosophical questions inevitably crashes them. If we persist, it’s because our fractionate slice is utterly insensitive to its own heuristic parochialism—because these capacities also suffer medial neglect! Availability initially geared to catching our tongue and the like becomes endless speculative fodder.

Consider an apparently obvious but endlessly controversial property of conscious experience, ‘transparency’ (or ‘intentional inexistence’): the way the only thing ‘in experience’ (its ‘content’) is precisely what lies outside experience. Why not suppose transparency—something which remains spectacularly inexplicable—is actually a medial artifact? The availability for conscious experience of only things admitting (originally ancestral) conscious solution is surely no accident. Conscious experience, as a biological artifact, is ‘need to know’ the same as everything else. Does the interval between sign and signified, subject and object, belief and proposition, experience and environment shout transparency, a miraculous vehicular vanishing act, or does it bellow medial neglect, our opportunistic obliviousness to the superordinate machinery enabling consciousness and cognition?

The latter strikes me as the far more plausible possibility, especially since it’s the very kind of problem one should expect, given the empirical inescapability of medial neglect.

Where transparency renders conscious experience naturalistically inscrutable, something hanging inexplicably in the neural millhouse, medial neglect renders it a component of a shallow information ecology, something broadcast to facilitate any number of possible behavioural advantages in practical contexts. Consciousness cuts the natural world at the joints—of this I have no doubt—but conscious experience, what we report day-in and day-out, cuts only certain classes of problems ‘at the joints.’ And what Dretske shows us, quite clearly, I think, is that the nature of conscious experience does not itself belong to that class of problems—at least not in any way that doesn’t leave us gasping for decisive evidence.

How do we know we’re not zombies? On Heuristic Neglect, the answer is straightforward (at a certain level of biological generality, at least): via one among multiple metacognitive hacks adapted to circumventing medial neglect, and even then, only so far as our ancestors required.

In other words, barely, if at all. The fact is, self-knowledge was never so important to reproduction as to warrant the requisite hardware.


Introspection Explained

by rsbakker

[Image: Las Meninas]

So I couldn’t get past the first paper in Thomas Metzinger’s excellent Open MIND offering without having to work up a long-winded blog post! Tim Bayne’s “Introspective Insecurity” offers a critique of Eric Schwitzgebel’s Perplexities of Consciousness, which is my runaway favourite book on introspection (and consciousness, for that matter). This alone might have sparked me to write a rebuttal, but what I find most extraordinary about the case Bayne lays out against introspective skepticism is the way it directly implicates Blind Brain Theory. His defence of introspective optimism, I want to show, actually vindicates an even more radical form of pessimism than the one he hopes to domesticate.

In the article, Bayne divides the philosophical field into two general camps, the introspective optimists, who think introspection provides reliable access to conscious experience, and introspective pessimists, who do not. Recent years have witnessed a sea change in philosophy of mind circles (one due in no small part to Schwitzgebel’s amiable assassination of assumptions). The case against introspective reliability has grown so prodigious that what Bayne now terms ‘optimism’—introspection as a possible source of metaphysically reliable information regarding the mental/phenomenal—would have been considered rank introspective pessimism not so long ago. The Cartesian presumption of ‘self-transparency’ (as Carruthers calls it in his excellent The Opacity of Mind) has died a sudden death at the hands of cognitive science.

Bayne identifies himself as one of these new optimists. What introspection needs, he claims, is a balanced account, one sensitive to the vulnerabilities of both positions. Where proponents of optimism have difficulty accounting for introspective error, proponents of pessimism have difficulty accounting for introspective success. Whatever it amounts to, introspection is characterized by perplexing failures and thoughtless successes. As he writes in his response piece, “The epistemology of introspection is that it is not flat but contains peaks of epistemic security alongside troughs of epistemic insecurity” (“Introspection and Intuition,” 1). Since any final theory of introspection will have to account for this mixed ‘epistemic profile,’ Bayne suggests that it provides a useful speculative constraint, a way to sort the metacognitive wheat from the chaff.

According to Bayne, introspective optimists motivate their faith in the deliverances of introspection on the basis of two different arguments: the Phenomenological Argument and the Conceptual Argument. He restricts his presentation of the phenomenological argument to a single quote from Brie Gertler’s “Renewed Acquaintance,” which he takes as representative of his own introspective sympathies. As Gertler writes of the experience of pinching oneself:

When I try this, I find it nearly impossible to doubt that my experience has a certain phenomenal quality—the phenomenal quality it epistemically seems to me to have, when I focus my attention on the experience. Since this is so difficult to doubt, my grasp of the phenomenal property seems not to derive from background assumptions that I could suspend: e.g., that the experience is caused by an act of pinching. It seems to derive entirely from the experience itself. If that is correct, my judgment registering the relevant aspect of how things epistemically seem to me (this phenomenal property is instantiated) is directly tied to the phenomenal reality that is its truthmaker. “Renewed Acquaintance,” Introspection and Consciousness, 111.

When attending to a given experience, it seems indubitable that the experience itself has distinctive qualities that allow us to categorize it in ways unique to first-person introspective, as opposed to third-person sensory, access. But if we agree that the phenomenal experience—as opposed to the object of experience—drives our understanding of that experience, then we agree that the phenomenal experience is what makes our introspective understanding true. “Introspection,” Bayne writes, “seems not merely to provide one with information about one’s experiences, it seems also to ‘say’ something about the quality of that information” (4). Introspection doesn’t just deliver information, it somehow represents these deliverances as true.

Of course, this doesn’t make them true: we need to trust introspection before we can trust our (introspective) feeling of introspective truth. Or do we? Bayne replies:

it seems to me not implausible to suppose that introspection could bear witness to its own epistemic credentials. After all, perceptual experience often contains clues about its epistemic status. Vision doesn’t just provide information about the objects and properties present in our immediate environment, it also contains information about the robustness of that information. Sometimes vision presents its take on the world as having only low-grade quality, as when objects are seen as blurry and indistinct or as surrounded by haze and fog. At other times visual experience represents itself as a highly trustworthy source of information about the world, such as when one takes oneself to have a clear and unobstructed view of the objects before one. In short, it seems not implausible to suppose that vision—and perceptual experience more generally—often contains clues about its own evidential value. As far as I can see there is no reason to dismiss the possibility that what holds of visual experience might also hold true of introspection: acts of introspection might contain within themselves information about the degree to which their content ought to be trusted. 5

Vision is replete with what might be called ‘information information,’ features that indicate the reliability of the information available. Darkness, for instance, is a great example, insofar as it provides visual information to the effect that visual information is missing. Our every glance is marbled with what might be called ‘more than meets the eye’ indicators. As we shall see, this analogy to vision will come back to haunt Bayne’s thesis. The thing to keep in mind is the fact that the cognition of missing information requires more information. For the nonce, however, his claim is modest enough to acknowledge: as it stands, we cannot rule out the possibility that introspection, like exospection, reliably indicates its own reliability. As such, the door to introspective optimism remains open.

Here we see the ‘foot-in-the-door strategy’ that Bayne adopts throughout the article, where his intent isn’t so much to decisively warrant introspective optimism as it is to point out and elucidate the ways that introspective pessimism cannot decisively close the door on introspection.

The conceptual motivation for introspective optimism turns on the necessity of epistemic access implied in the very concept of ‘what is it likeness.’ The only way for something to be ‘like something’ is for it to be like something for somebody. “[I]f a phenomenal state is a state that there is something it is like to be in,” Bayne writes, “then the subject of that state must have epistemic access to its phenomenal character” (5). Introspection has to be doing some kind of cognitive work, otherwise “[a] state to which the subject had no epistemic access could not make a constitutive contribution to what it was like for that subject to be the subject that it was, and thus it could not qualify as a phenomenal state” (5-6).

The problem with this argument, of course, is that it says little about the epistemic access involved. Apart from some unspecified ability to access information, it really implies very little. Bayne convincingly argues that the capacity to cognize differences, make discriminations, follows from introspective access, even if the capacity to correctly categorize those discriminations does not. And in this respect, it places another foot in the introspective door.

Bayne then moves on to the case motivating pessimism, particularly as Eric presents it in his Perplexities of Consciousness. He mentions the privacy problems that plague scientific attempts to utilize introspective information (Irvine provides a thorough treatment of this in her Consciousness as a Scientific Concept), but since his goal is to secure introspective reliability for philosophical purposes, he bypasses these to consider three kinds of challenges posed by Schwitzgebel in Perplexities, the Dumbfounding, Dissociation, and Introspective Variation Arguments. Once again, he’s careful to state the balanced nature of his aim, the obvious fact that

any comprehensive account of the epistemic landscape of introspection must take both the hard and easy cases into consideration. Arguably, generalizing beyond the obviously easy and hard cases requires an account of what makes the hard cases hard and the easy cases easy. Only once we’ve made some progress with that question will we be in a position to make warranted claims about introspective access to consciousness in general. 8

His charge against Schwitzgebel, then, is that even conceding his examples of local introspective unreliability, we have no reason to generalize from these to the global unreliability of introspection as a philosophical tool. Since this inference from local unreliability to global unreliability is his primary discursive target, Bayne doesn’t so much need to problematize Schwitzgebel’s challenges as to reinterpret—‘quarantine’—their implications.

So in the case of ‘dumbfounding’ (or ‘uncertainty’) arguments, Schwitzgebel reveals the epistemic limitations of introspection via a barrage of what seem to be innocuous questions. Our apparent inability to answer these questions leaves us ‘dumbfounded,’ stranded on a cognitive limit we never knew existed. Bayne’s strategy, accordingly, is to blame the questions, to suggest that dumbfounding, rather than demonstrating any pervasive introspective unreliability, simply reveals that the questions being asked possess no determinate answers. He writes:

Without an account of why certain introspective questions leave us dumbfounded it is difficult to see why pessimism about a particular range of introspective questions should undermine the epistemic credentials of introspection more generally. So even if the threat posed by dumbfounding arguments were able to establish a form of local pessimism, that threat would appear to be easily quarantined. 11

Once again, local problems in introspection do not warrant global conclusions regarding introspective reliability.

Bayne takes a similar tack with Schwitzgebel’s dissociation arguments, examples where our naïve assumptions regarding introspective competence diverge from actual performance. He points out the ambiguity between the reliability of experience and the reliability of introspection: perhaps we’re accurately introspecting mistaken experiences. If there’s no way to distinguish between these, Bayne suggests, we’ve made room for introspective optimism. He writes: “If dissociations between a person’s introspective capacities and their first-order capacities can disconfirm their introspective judgments (as the dissociation argument assumes), then associations between a person’s introspective judgments and their first-order capacities ought to confirm them” (12). What makes Schwitzgebel’s examples so striking, he goes on to argue, is precisely the fact that introspective judgments are typically effective.

And when it comes to the introspective variation argument, the claim that the chronic underdetermination characterizing introspective theoretical disputes attests to introspective incapacity, Bayne once again offers an epistemologically fractionate picture of introspection as a way of blocking any generalization from given instances of introspective failure. He thinks the examples of introspective variation can be explained away, “[b]ut even if the argument from variation succeeds in establishing a local form of pessimism, it seems to me there is little reason to think that this pessimism generalizes” (14).

Ultimately, the entirety of his case hangs on the epistemologically fractionate nature of introspection. It’s worth noting at this point that, from a cognitive-scientific point of view, the fractionate nature of introspection is all but guaranteed. Just think of the mad difference between Plato’s simple aviary, the famous metaphor he offers for memory in the Theaetetus, and the imposing complexity of memory as we understand it today. I raise this ‘mad difference’ for two reasons. First, it implies that any scientific understanding of introspection is bound to radically complicate our present understanding. Second, and even more importantly, it evidences the degree to which introspection is blind, not only to the fractionate complexity of memory, but to its own fractionate complexity as well.

For Bayne to suggest that introspection is fractionate, in other words, is for him to claim that introspection is almost entirely blind to its own nature (much as it is to the nature of memory). To the extent that Bayne has to argue the fractionate nature of introspection, we can conclude that introspection is not only blind to its own fractionate nature, it is also blind to the fact of this blindness. It is in this sense that we can assert that introspection neglects its own fractionate nature. The blindness of introspection to introspection is the implication that hangs over his entire case.

In the meantime, having posed an epistemologically plural account of introspection, he’s now on the hook to explain the details. “Why,” he now asks, “might certain types of phenomenal states be elusive in a way that other types of phenomenal states are not?” (15). Bayne does not pretend to possess any definitive answers, but he does hazard one possible wrinkle in the otherwise featureless face of introspection, the distinction he and Maja Spener made in their 2010 “Introspective Humility” between ‘scaffolded’ and ‘freestanding’ introspective judgments. He notes that the introspective judgments that seem to be the most reliable are those that seem to be ‘scaffolded’ by first-order experiences. These include the most anodyne metacognitive statements we make, where we reference our experiences of things to perspectivally situate them in the world, as in, ‘I see a tree over there.’ The introspective judgments that seem the least reliable, on the other hand, have no such first-order scaffolding. Rather than piggy-back on first-order perceptual judgments, ‘freestanding’ judgments (the kind philosophers are fond of making) reference our experience of experiencing, as in, ‘My experience has a certain phenomenal quality.’

As that last example (cribbed from the Gertler quote above) makes plain, there’s a sense in which this distinction doesn’t do the philosophical introspective optimist any favours. (Max Engel exploits this consequence to great effect in his Open MIND reply to Bayne’s article, using it to extend pessimism into the intuition debate). But Bayne demurs, admitting that he lacks any substantive account. As it stands, he need only make the case that introspection is fractionate to convincingly block the ‘globalization’ of Schwitzgebel’s pessimism. As he writes:

perhaps the central lesson of this paper is that the epistemic landscape of introspection is far from flat but contains peaks of security alongside troughs of insecurity. Rather than asking whether or not introspective access to the phenomenal character of consciousness is trustworthy, we should perhaps focus on the task of identifying how secure our introspective access to various kinds of phenomenal states is, and why our access to some kinds of phenomenal states appears to be more secure than our access to other kinds of phenomenal states. 16

The general question of whether introspective cognition of conscious experience is possible is premature, he argues, so long as we have no clear idea of where and why introspection works and does not work.

This is where I most agree with Bayne—and where I’m most puzzled. Many things puzzle me about the analytic philosophy of mind, but nothing quite so much as the disinclination to ask what seem to me to be relatively obvious empirical questions.

In nature, accuracy and reliability are expensive achievements, not gifts from above. Short of magic, metacognition requires physical access and physical capacity. (Those who believe introspection is magic—and many do—need only be named magicians). So when it comes to deliberative introspection, what kind of neurobiological access and capacity are we presuming? If everyone agrees that introspection, whatever it amounts to, requires that the brain do honest-to-goodness work, then we can begin advancing a number of empirical theses regarding access and capacity, and how we might find these expressed in experience.

So given what we presently know, what kind of metacognitive access and capacity should we expect our brains to possess? Should we, for instance, expect them to rival the resolution and behavioural integration of our environmental capacities? Clearly not. For one, environmental cognition coevolved with behaviour and so has the far greater evolutionary pedigree—by hundreds of millions of years, in fact! As it turns out, reproductive success requires that organisms solve their surroundings, not themselves. So long as environmental challenges are overcome, they can take themselves for granted, neglect their own structure and dynamics. Metacognition, in other words, is an evolutionary luxury. There’s no way of saying how long Homo sapiens has enjoyed the particular luxury of deliberative introspection (as an exaptation, the luxury of ‘philosophical reflection’ is no older than recorded history), but even if we grant our base capacity a million-year pedigree, we’re still talking about a very young, and very likely crude, system.

Another compelling reason to think metacognition cannot match the dimensionality of environmental cognition lies in the astronomical complexity of its target. As a matter of brute empirical fact, brains simply cannot track themselves in the high-dimensional way they track their environments. Thus, once again, ‘Dehaene’s Law,’ the way “[w]e constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (Consciousness and the Brain, 79). The vast resources society is presently expending to cognize the brain attest to the degree to which the brain exceeds its own capacity to cognize itself in high-dimensional terms. However the brain cognizes its own operations, then, it can only do so in a radically low-dimensional way. We should expect, in other words, our brains to be relatively insensitive to their own operation—to be blind to themselves.

A third empirical reason to assume that metacognition falls short of environmental dimensionality is found in the way it belongs to the very system it tracks, and so lacks the functional independence, as well as the passive and active information-seeking opportunities, belonging to environmental cognition. The analogy I always like to use here is that of a primatologist sewn into a sack with a troop of chimpanzees versus one tracking them discreetly in the field. Metacognition, unlike environmental cognition, is structurally bound to its targets. It cannot move toward some puzzling item—an apple, say—peer at it, smell it, touch it, turn it over, crack it open, taste it, scrutinize its components. As embedded, metacognition is restricted to fixed channels of information that it could not possibly identify or source. The brain, you could say, is simply too close to itself to cognize itself as it is.

Viewed empirically, then, we should expect metacognitive access and capacity to be more specialized, more adventitious, and less flexible than that of environmental cognition. Given the youth of the system, the complexity of its target, and the proximity of its target, we should expect human metacognition to consist of various kluges, crude heuristics that leverage specific information to solve some specific range of problems. As Gerd Gigerenzer and the Adaptive Behaviour and Cognition Research Group have established, simple heuristics are often far more effective than optimization methods at solving problems. “As the amount of data available to make predictions in an environment shrinks, the advantage of simple heuristics over complex algorithms grows” (Hertwig and Hoffrage, “The Research Agenda,” Simple Heuristics in a Social World, 23). With complicated problems yielding little data, adding parameters to a solution can compound the chances of making mistakes. Low dimensionality, in other words, need not be a bad thing, so long as the information consumed is information enabling the solution of some problem set. This is why evolution so regularly makes use of it.
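
Hertwig and Hoffrage’s point is easy to see in miniature. The following sketch is my own toy illustration, not anything drawn from the simple-heuristics literature: with only a handful of noisy observations, a many-parameter model fits the noise, while a crude heuristic that keeps nothing but the cue’s direction generalizes better.

```python
# A toy illustration (mine, not Hertwig and Hoffrage's): in a data-poor
# environment, a flexible model fits the noise while a one-cue,
# fixed-weight heuristic generalizes. Assumes only numpy.
import numpy as np

rng = np.random.default_rng(0)

def environment(n):
    """A noisy one-cue world: y covaries with x, plus noise."""
    x = rng.uniform(-1.0, 1.0, n)
    y = 2.0 * x + rng.normal(0.0, 0.5, n)
    return x, y

x_train, y_train = environment(8)        # scarce data
x_test, y_test = environment(10_000)     # the world at large

# Flexible model: a degree-7 polynomial, enough parameters to pass
# through every training point, noise and all.
coef = np.polyfit(x_train, y_train, deg=7)
mse_complex = np.mean((np.polyval(coef, x_test) - y_test) ** 2)

# Heuristic: keep only the cue's direction, with a fixed unit weight.
direction = np.sign(np.corrcoef(x_train, y_train)[0, 1])
mse_heuristic = np.mean((direction * x_test - y_test) ** 2)

print(f"flexible model MSE: {mse_complex:.2f}")
print(f"unit-weight heuristic MSE: {mse_heuristic:.2f}")
```

Give the flexible model abundant data and the ranking flips; the point is not that simple always beats complex, but that simplicity pays precisely where data is scarce, which is exactly the regime metacognition occupies.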

Given this broad-stroke picture, human metacognition can be likened to a toolbox containing multiple, special-purpose tools, each possessing a specific ‘problem-ecology,’ a narrow but solvable domain that triggers its application frequently and decisively enough to have once assured the tool’s generational selection. The problem with heuristics, of course, lies in the narrowness of their respective domains. If we grant the brain any flexibility in the application of its metacognitive tools, then heuristic misapplication is always a possibility. And if we deny the brain any decisive capacity to cognize these misapplications outside their consequences (if the brain suffers ‘tool agnosia’), then we can assume these misapplications will be indistinguishable from successful applications short of those consequences.
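
To make ‘tool agnosia’ vivid, here is a deliberately crude sketch of my own (the heuristics and cues are invented for illustration, not a model anyone has proposed): tools fire off shallow cues, and a tool applied outside its ecology returns its answer with the same confidence as one applied within it.

```python
# A toy 'adaptive toolbox' (hypothetical heuristics, my construction):
# tools are dispatched on surface cues, and nothing inside the system
# flags a misapplication -- only downstream consequences could.
from typing import Callable

def catch_my_tongue(utterance: str) -> str:
    """Ancestral social hack: flag rude wording before it's sent."""
    return "revise" if "stupid" in utterance else "send"

def keys_check(pockets: list) -> str:
    """Ancestral practical hack: did I leave my keys in the car?"""
    return "go" if "keys" in pockets else "search"

TOOLBOX: dict = {"social": catch_my_tongue, "practical": keys_check}

def introspect(cue: str, data):
    # No 'tool channel': unmatched cues silently fall back on some tool,
    # and the output carries no marker of being out of its ecology.
    tool = TOOLBOX.get(cue, catch_my_tongue)
    return tool(data)

print(introspect("social", "that was stupid of me"))       # 'revise' (apt)
print(introspect("metaphysical", "what is experience?"))   # 'send' (confident, empty)
```

The second output is formally indistinguishable from the first; only its downstream consequences could ever betray the misapplication.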

In other words, this picture of human metacognition (which is entirely consistent with contemporary research) provides an elegant (if sobering) recapitulation and explanation of what Bayne calls the ‘epistemic landscape of introspection.’ Metacognition is fractionate because of the heuristic specialization required to decant behaviourally relevant information from the brain. The ‘peaks of security’ correspond to the application of metacognitive heuristics to matching problem-ecologies, while the ‘troughs of insecurity’ correspond to the application of metacognitive heuristics to problem-ecologies they could never hope to solve.

Since those matching problem-ecologies are practical (as we might expect, given the cultural basis of regimented theoretical thinking), it makes sense that practical introspection is quite effective, whereas theoretical introspection, which attempts to intuit the general nature of experience, is anything but. The reason the latter strikes us as so convincing—to the point of seeming impossible to doubt, no less—is simply that doubt is expensive: there’s no reason to presume we should happily discover the required error-signalling machinery awaiting any exaptation of our deliberative introspective capacity, let alone one so unsuccessful as philosophy. As I mentioned above, the experience of epistemic insufficiency always requires more information. Sufficiency is the default simply because the system has no way of anticipating novel applications, no decisive way of suddenly flagging information that was entirely sufficient for ancestral problem-ecologies and so required no flagging.

Remember how Bayne offered what I termed ‘information information’ provided by vision as a possible analogue of introspection? Visual experience cues us to the unreliability or absence of information in a number of ways, such as darkness, blurring, faintness, and so on. Why shouldn’t we presume that deliberative introspection likewise flags what can and cannot be trusted? Because deliberative introspection exapts information sufficient for one kind of practical problem-solving (Did I leave my keys in the car? Am I being obnoxious? Did I read the test instructions carefully enough?) for the solution of utterly unprecedented ontological problems. Why should repurposing introspective deliverances in this way renovate the thoughtless assumption of ‘default sufficiency’ belonging to their original purposes?

This is the sense in which Blind Brain Theory, in the course of explaining the epistemic profile of introspection, also explodes Bayne’s case for introspective optimism. By tying the contemplative question of deliberative introspection to the empirical question of the brain’s metacognitive access and capacity, BBT makes plain the exorbitant biological cost of the optimistic case. Exhaustive, reliable intuition of anything involves a long evolutionary history, tractable targets, and flexible information access—that is, all the things that deliberative introspection does not possess.

Does this mean that deliberative introspection is a lost cause, something possessing no theoretical utility whatsoever? Not necessarily. Accidents happen. There’s always a chance that some instance of introspective deliberation could prove valuable in some way. But we should expect such solutions to be both adventitious and local, something that stubbornly resists systematic incorporation into any more global understanding.

But there’s another way, I think, in which deliberative introspection can play a genuine role in theoretical cognition—a way that involves looking at Schwitzgebel’s skeptical project as a constructive, rather than critical, theoretical exercise.

To show what I mean, it’s worth recapitulating one of the quotes Bayne selects from Perplexities of Consciousness for sustained attention:

How much of the scene are you able vividly to visualize at once? Can you keep the image of your chimney vividly in mind at the same time you vividly imagine (or “image”) your front door? Or does the image of your chimney fade as your attention shifts to the door? If there is a focal part of your image, how much detail does it have? How stable is it? Suppose that you are not able to image the entire front of your house with equal clarity at once, does your image gradually fade away towards the periphery, or does it do so abruptly? Is there any imagery at all outside the immediate region of focus? If the image fades gradually away toward the periphery, does one lose colours before shapes? Do the peripheral elements of the image have color at all before you think to assign color to them? Do any parts of the image? If some parts of the image have indeterminate colour before a colour is assigned, how is that indeterminacy experienced—as grey?—or is it not experienced at all? If images fade from the centre and it is not a matter of the color fading, what exactly are the half-faded images like? Perplexities, 36

Questions in general are powerful insofar as they allow us to cognize the yet-to-be-cognized. The slogan feels ancient to me now, but no less important: questions are how we make ignorance visible, how we become conscious of cognitive incapacity. In effect, then, each and every question in this quote brings to light a specific inability to answer. Granting that this inability indicates a lack of information access and/or metacognitive incapacity, we can presume these questions enumerate various cognitive dimensions missing from visual imagery. Each question functions as an interrogative ‘ping,’ you could say, showing us another direction that (for many people at least) introspective inquiry cannot go—another missing dimension.

So even though Bayne and Schwitzgebel draw negative conclusions from the ‘dumbfounding’ that generally accompanies these questions, each instance actually tells us something potentially important about the limits of our introspective capacities. If Schwitzgebel had been asking these questions of a painting—Las Meninas, say—then dumbfounding wouldn’t be a problem at all. The information available, given the cognitive capacity possessed, would make answering them relatively straightforward. But even though ‘visual imagery’ is apparently ‘visual’ the same as a painting, the selfsame questions stop us in our tracks. Each question, you could say, closes down a different ‘degree of cognitive freedom,’ reveals how few degrees of cognitive freedom human deliberative introspection possesses for the purposes of solving visual imagery. Not much at all, as it turns out.

Note this is precisely what we should expect on a ‘blind brain’ account. Once again, simply given the developmental and structural obstacles confronting metacognition, it almost certainly consists of an ‘adaptive toolbox’ (to use Gerd Gigerenzer’s phrase), a suite of heuristic devices adapted to solve a restricted set of problems given only low-dimensional information. The brain possesses a fixed set of metacognitive channels available for broadcast, but no real ‘channel channel,’ so that it systematically neglects metacognition’s own fractionate, heuristic structure.

And this clearly seems to be what Schwitzgebel’s interrogative barrage reveals: the low dimensionality of visual imagery (relative to vision), the specialized problem-solving nature of visual imagery, and our profound inability to simply intuit as much. For some mysterious reason we can ask visual questions that for some mysterious reason do not apply to visual imagery. The ability of language to retask cognitive resources for introspective purposes seems to catch the system as a whole by surprise, confronts us with what had been hitherto relegated to neglect. We find ourselves ‘dumbfounded.’

So long as we assume that cognition requires work, we must assume that metacognition trades in low dimensional information to solve specific kinds of problems. To the degree that introspection counts as metacognition, we should expect it to trade in low-dimensional information geared to solve particular kinds of practical problems. We should also expect it to be blind to introspection, to possess neither the access nor the capacity required to intuit its own structure. Short of interrogative exercises such as Schwitzgebel’s, deliberative introspection has no inkling of how many degrees of cognitive freedom it possesses in any given context. We have to figure out what information is for what inferentially.

And this provides the basis for a provocative diagnosis of a good many debates in contemporary psychology and philosophy of mind. So for instance, a blind brain account implies that our relation to something like ‘qualia’ is almost certainly one possessing relatively few degrees of cognitive freedom—a simple heuristic. Deliberative introspection neglects this, and at the same time, via questioning, allows other cognitive capacities to consume the low-dimensional information available. ‘Dumbfounding’ often follows—what the ancient Greeks liked to call thaumazein. The practically minded, sniffing a practical dead end, turn away, but the philosopher famously persists, mulling the questions, becoming accustomed to them, chasing this or that inkling, borrowing many others, all of which, given the absence of any real information information, cannot but suffer from some kind of ‘only game in town effect’ upon reflection. The dumbfounding boundary is trammelled to the point of imperceptibility, and neglect is confused with degrees of cognitive freedom that simply do not exist. We assume that a quale is something like an apple—we confuse a low-dimensional cognitive relationship with a high-dimensional one. What is obviously specialized, low-dimensional information becomes, for a good number of philosophers at least, a special ‘immediately self-evident’ order of reality.

Is this Adamic story really that implausible? After all, something has to explain our perpetual inability to even formulate the problem of our nature, let alone solve it. Blind Brain Theory, I would argue, offers a parsimonious and comprehensive way to extricate ourselves from the traditional mire. Not only does it explain Bayne’s ‘epistemic profile of introspection,’ it explains why this profile took so long to uncover. By reinterpreting the significance of Schwitzgebel’s ‘dumbfounding’ methods, it raises the possibility of ‘Interrogative Introspection’ as a scientific tool. And lastly, it suggests the problems that neglect foists on introspection can be generalized, that much of our inability to cognize ourselves turns on the cognitive short cuts evolution had to use to assure we could cognize ourselves at all.

Zombie Interpretation: Eliminating Kriegel’s Asymmetry Argument

by rsbakker

Could zombie versions of philosophical problems, versions that eliminate all intentionality from the phenomena at issue, shed any light on those problems?

The only way to find out is to try.

Since I’ve been railing so much about the failure of normativism to account for its evidential basis, I thought it worthwhile to consider the work of a very interesting intentionalist philosopher, Uriah Kriegel, who sees the need quite clearly. The question could not be more simple: What justifies philosophical claims regarding the existence and nature of intentional phenomena? For Kriegel the most ‘natural’ and explanatorily powerful answer is observational contact with experiential intentional states. How else, he asks, can we come to know our intentional states short of experiencing them? In what follows I propose to consider two of Kriegel’s central arguments against the backdrop of ‘zombie interpretations’ of the very activities he considers, and in doing so, I hope to undermine not only his argument, but the general abductive strategy one finds intentionalists taking throughout philosophy more generally, the presumption that only theoretical accounts somehow involving intentionality can account for intentional phenomena.

In his 2011 book, The Sources of Intentionality, Kriegel attempts to remedy semantic externalism’s failure to naturalize intentionality via a carefully specified return to phenomenology, an account of how intentional concepts arise from our introspective ‘observational contact’ with mental states possessing intentional content. Experience, he claims, is intrinsically intentional. Introspective contact with this intrinsic intentionality is what grounds our understanding of intentionality, providing ‘anchoring instances’ for our various intentional concepts.

As Kriegel is quick to point out, such a thesis implies a crucial distinction between experiential intentionality, the kind of intentionality we experience, and nonexperiential intentionality, the kind of intentionality we ascribe without experiencing. This leads him to Davidson’s account of radical interpretation, and to what he calls the “remarkable asymmetry” between various ascriptions of intentionality. On radical interpretation as Davidson theorizes it, our attempts to interpret one another are so evidentially impoverished that interpretative success fundamentally requires assuming the rationality of our interlocutor—what he terms ‘charity.’ The ascription of some intentional state to another turns on the prior assumption that he or she believes, desires, fears and so on as they should, otherwise we would have no way of deciding among the myriad interpretations consistent with the meagre behavioural data available. Kriegel argues “that while the Davidsonian insight is cogent, it applies only to the ascription of non-experiential intentionality, as well as the ascription of experiential intentionality to others, but not to the ascription of experiential intentionality to oneself” (29). We require charity when it comes to ascribing varieties of intentionality to signs, others, and even our nonconscious selves, but not when it comes to ascribing intentionality to our own experiences. So why this basic asymmetry? Why do we have to attribute true beliefs and rational desires—take the ‘intentional stance’—with regards to others and our nonconscious selves, and not our consciously experienced selves? Why do we seem to be the one self-interpreting entity?

Kriegel thinks observational contact with our actual intentionality provides the most plausible answer, that “[i]nsofar as it is appropriate to speak of data for ascription here, the only relevant datum seems to be a certain deliverance of introspection” (33). He continues:

There is thus a contrast between the mechanics of first-person [experiential]-intentional ascription and third-person … intentional ascription. The former is based on endorsement of introspective seemings, the latter on causal inference from behavior. This is hardly deniable: as noted, when you ascribe to yourself a perceptual experience as of a table, you do not observe putative causal effects of your experience and infer on their basis the existence of a hidden experiential cause. Rather, you seem to make the ascription on the basis of observing, in some (not unproblematic) sense, the experience itself—observing, that is, the very state which you ascribe. The Sources of Intentionality, 33

The mechanics of first-person and third-person intentional cognition differ in that the latter requires explanatory posits like ‘hidden mental causes.’ Since self-ascription involves nothing hidden, no interpretation is required. And it is this elegant and intuitive explanation of first-person interpretative asymmetry that provides abductive warrant for the foundational argument of the text:

1. All the anchoring instances of intentionality are such that we have observational contact with them;

2. The only instances of intentionality with which we have observational contact are experiential-intentional states; therefore,

3. All anchoring instances of intentionality are experiential-intentional states. 38

Given the abductive structure of Kriegel’s argument, those who dissent from either (1) or (2) need a better explanation of asymmetry. Those who deny the anchoring-instance model of concept acquisition will target (1), arguing, say, that concept acquisition is an empirical process requiring empirical research. Kriegel simply punts on this issue, claiming we have no reason to think that concept acquisition, no matter how empirically detailed the story turns out to be, is insoluble at this (armchair) level of generality. Either way, his position still enjoys the abductive warrant of explaining asymmetry.

For Kriegel, (2) is the most philosophically controversial premise, with critics either denying we have any ‘observational contact’ with experiential-intentional states, or denying that we have observational contact with only such experiential-intentional states. The problem faced by both angles, Kriegel points out, is that asymmetry still holds whether one denies (2) or not: we can ascribe intentional experiences to ourselves without requiring charity. If observational contact—the ‘natural explanation,’ Kriegel calls it—doesn’t lie at the root of this capacity, then what does?

For an eliminativist such as myself, however, the problem is more a matter of definition. I actually agree that suffering a certain kind of observational contact—namely, one that systematically neglects tremendous amounts of information—can anchor our philosophical concept of intentionality. Kriegel is fairly dismissive of eliminativism in The Sources of Intentionality, and even then the eliminativism he dismisses acknowledges the existence of intentional experiences! As he writes, “if eliminativism cannot be acceptable unless a relatively radical interpretation of cognitive science is adopted, then eliminativism is not in good shape” (199). The problem is that this assumes cognitive science is itself in fine shape, when Kriegel himself emphatically asserts “that it is not doing fine” (“A Hesitant Defence of Introspection,” 3). Cognitive science is fraught with theoretical dispute, certainly more than enough (and for long enough!) to seriously entertain the possibility that something radical has been overlooked.

So the radicality of eliminativism is neither here nor there regarding its ‘shape.’ The real problem faced by eliminativism, which Kriegel glosses, is abductive. Eliminativism simply cannot account for what seem to be obvious intentional phenomena.

Which brings me to zombies and what these kinds of issues might look like in their soulless, shuffling world…

In the zombie world I’m imagining, what Sellars called the ‘scientific image of man’ is the only true image. There quite simply is no experience or meaning or normativity as we intentionally characterize these things in our world. So zombies, in their world, possess only systematic causal relations to their environments. No transcendental rules or spooky functions haunt their brains. No virtual norms slumber in their community’s tacit gut. ‘Zombie knowledge’ is simply a matter of biomechanistic systematicity, having the right stochastic machinery to solve various problem ecologies. So although they use sounds to coordinate their behaviours, the efficacies involved are purely causal, a matter of brains conditioning brains. ‘Zombie language,’ then, can be understood as a means of resolving discrepancies via strings of mechanical code. Given only a narrow band of acoustic sensitivity, zombies constantly update their covariational schema relative to one another and their environments. They are ‘communicatively attuned.’

So imagine a version of radical zombie interpretation, where a zombie possessing one code—Blue—is confronted by another zombie possessing another code—Red. And now let’s ask the zombie version of Davidson’s question: What would it take for these zombies to become communicatively attuned?

Since the question is one of overcoming difference, it serves to recall what our zombies share: a common cognitive biology and environment. An enormous amount of evolutionary stage-setting underwrites the encounter. They come upon one another, in other words, differing only in code. And this is just to say that radical zombie interpretation occurs within a common attunement to each other and the world. They share both a natural environment and the sensorimotor systems required to exploit it. They also share powerful ‘brain-reading’ systems, a heuristic toolbox that allows them to systematically coordinate their behaviour with that of their zombie fellows without any common code. Even more, they share a common code apparatus, which is to say, the same system adapted to coordinate behaviours via acoustic utterances.

Given this ‘pre-established harmony’—common environment, common brain-reading and code-using biology—how might a code Blue zombie come to interpret (be systematically coordinated with) the utterances of a code Red zombie?

Since both zombies were once infant zombies, each has already undergone ‘code conditioning’; they have already tested innumerable utterances against innumerable environments, isolating and preserving robust covariances (and structural operators) on the way to acquiring their respective codes. At the same time, their brain-reading systems allow them to systematically coordinate their behaviours to some extent, to find a kind of basic attunement. All that remains is a matter of covariant sound substitution, of swapping the sounds belonging to code Blue for the sounds belonging to code Red, a process requiring little more than testing code-specific covariations against real-time environments. Perhaps radical zombie interpretation is not so radical after all!
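
For concreteness, here is a minimal sketch, entirely my own construction (the codes, world-states, and tallying scheme are invented for illustration), of covariant sound substitution: Blue pairs each Red sound with its own sound for whichever shared world-state that Red sound most reliably covaries with.

```python
# A toy sketch (my construction, not Davidson's or Kriegel's) of 'radical
# zombie interpretation' as covariant sound substitution: Blue maps Red's
# utterances onto its own code purely by tallying co-occurrences with a
# shared environment -- coordination without any attribution of rationality.
from collections import Counter, defaultdict
import random

random.seed(0)
STATES = ["rain", "food", "threat"]

BLUE = {"rain": "glok", "food": "mip", "threat": "zud"}   # Blue's code
RED  = {"rain": "ba",   "food": "ke",  "threat": "ro"}    # Red's code

# Shared encounters: both zombies register the same state; Red utters.
tallies = defaultdict(Counter)
for _ in range(200):
    state = random.choice(STATES)
    red_sound = RED[state]
    tallies[red_sound][state] += 1     # Blue tracks sound/world covariance

# Substitution: pair each Red sound with the Blue sound for the state it
# most reliably covaries with. No 'charity', just conditioning.
translation = {
    red_sound: BLUE[counts.most_common(1)[0][0]]
    for red_sound, counts in tallies.items()
}
print(translation)   # e.g. {'ba': 'glok', 'ke': 'mip', 'ro': 'zud'}
```

Nothing in the loop ascribes beliefs or presumes rationality; attunement falls out of shared machinery, a shared environment, and brute tallying.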

The first thing to note is how the reliable coordination of behaviours is all that matters in this process: idiosyncrasies in their respective implementations of Red or Blue matter only insofar as they impact this coordination. The ‘synonymy’ involved is entirely coincident because it is entirely physical.

The second thing to note is how pre-established harmony is simply a structural feature of the encounter. These are just the problems that nature has already solved for our two intrepid zombies, what has to be the case for the problem of radical zombie interpretation to even arise. At no point do our zombies ‘attribute’ or ‘ascribe’ anything to their counterpart. Sensing another zombie simply triggers their zombie-brain-reading machinery, which modifies their behaviour and so on. There’s no ‘charity’ involved, no ‘attribution of rationality,’ just the environmental cuing of heuristic systems adapted to solve certain zombie-social environments.

Of course each zombie resorts to its brain-reading systems to behaviourally coordinate with its counterpart, but this is an automatic feature of the encounter, what happens whenever zombies detect zombies. Each engages in communicative troubleshooting behaviour in the course of executing some superordinate disposition to communicatively coordinate. Brains are astronomically complicated mechanisms—far too complicated for brains to intuit them as such. Thus the radically heuristic nature of zombie brain-reading. Thus the perpetual problem of covariational discrepancies. Thus the perpetual expenditure of zombie neural resources on the issue of other zombies.

Leading us to a third thing of note: how the point of radical zombie interpretation is to increase behavioural possibilities by rendering behavioural interactions more systematic. What makes this last point so interesting lies in the explanation it provides regarding why zombies need not first decode themselves to decode others. As a robust biomechanical system, ‘self-systematicity’ is simply a given. The whole problem of zombie interpretation resides in one zombie gaining some systematic purchase on other zombies in an effort to create some superordinate system—a zombie community. Asymmetry, in other words, is a structural given.

In radical zombie interpretation, then, not only do we have no need for ‘charity,’ we somehow manage to circumvent all the controversies pertaining to radical human interpretation.

Now of course the great zombie/human irony is that humans are everything that zombies are and more. So the question immediately becomes one of why radical human interpretation should prove so problematic when the zombie version of the same problem is not. While the zombie story certainly entails a vast number of technical details, it does not involve anything conceptually occult or naturalistically inexplicable. If mere zombies could avoid these problems using nothing more than zombie resources, why should humans find themselves perennially confounded?

This really is an extraordinary question. The intentionalist will cry foul, of course, reference all the obvious intentional phenomena pertaining to the communicative coordination of humans, things like rules and reasons and references and so on, and ask how this zombie fairy tale could possibly explain any of them. So even though this story of zombie interpretation provides, in outline at least, the very kind of explanation that Kriegel demands, it quite obviously throws out the baby with the bathwater in the course of doing so. Asymmetry becomes perspicuous, but now the whole of human intentional activity becomes impossible to explain (assuming that anything at this level has ever been genuinely explained). Zombie interpretation, in other words, wins the battle by losing the war.

It’s worth noting here the curious structure of the intentionalist’s abductive case. The idea is that we need a theoretical intentional account to explain human intentional activity. What warrants theoretical supernaturalism (or philosophy traditionally construed) is the matter-of-fact existence of everyday intentional phenomena (an existence that Kriegel thinks so obvious that on a couple of occasions he adduces arguments he claims he doesn’t need simply to bolster his case against skeptics such as myself). The curiosity, however, is that the ‘matter-of-fact existence of everyday intentional phenomena’ that at once “underscores the depth of eliminativism’s (quasi-) empirical inadequacy” (199) and motivates theoretical intentional accounts is itself a matter of theoretical controversy—just not for intentionalists! The problem with abductive appeals like Kriegel’s, in other words, is the way they rely on a prior theory of intentionality to anchor the need for theories of intentionality more generally.

This is what makes radical zombie interpretation out and out eerie. Because it does seem to be the case that zombies could achieve at least the same degree of communicative coordination absent any intentional phenomena at all. When you strip away the intentional glamour, when you simply look at the biology and the behaviour, it becomes hard to understand just what it is that humans do that requires anything over and above zombie biology and behaviour. Since some kind of gain in systematicity is the point of communicative coordination, it makes sense that zombies need not troubleshoot themselves in the course of troubleshooting other zombies. So it remains the case that radical zombie interpretation, analyzed at the same level of generality, seems to have a much easier time explaining the same degree of human communicative coordination, sans the baby, than does radical human interpretation, which, quite frankly, strands us with a host of further, intractable mysteries regarding things like 'ascription' and 'emergence' and 'anomalous causation.'

What could be going on? When it comes to Kriegel's 'remarkable asymmetry,' should we simply put our 'zombie glasses' on, or should we tough it out in the morass of intractable second-order accounts of intentionality on the basis of some ineliminable intentional remainder?

As Three Pound Brain regulars know, the eliminativism I'm espousing here is unusual in that it arises, not out of concerns regarding the naturalistic inscrutability of intentional phenomena, but out of a prior, empirically grounded account of intentionality, what I've been calling Blind Brain Theory. On Blind Brain Theory the impasse described above is precisely the kind of situation we should expect given the kind of metacognitive capacities we possess. By its lights, zombies just are humans, and so-called intentional phenomena are simply artifacts of metacognitive neglect, what high-dimensional zombie brain functions 'look like' when low-dimensionally sampled for deliberative metacognition. Brains are simply too complicated to be effectively solved by causal cognition, so we evolved specialized fixes, ways to manage our brain and others in the absence of causal cognition. Since the high-dimensional actuality of those specialized fixes outruns our metacognitive capacity, philosophical reflection confuses what little it can access with everything required, and so is duped into the entirely natural (but nonetheless extraordinary) belief that it possesses 'observational contact' with a special, irreducible order of reality. Given this, we should expect that attempts to theoretically solve radical interpretation via our 'mind' reading systems would generate more mystery than they would dispel.

Blind Brain Theory, in other words, short circuits the abductive strategy of intentionalism. It doesn't simply offer a parsimonious explanation of asymmetry; it proposes to explain all so-called intentional phenomena. It tells us what they are, why we're prone to conceive them in the naturalistically incompatible ways we do, and why these conceptions generate the perplexities they do.

To understand how it does so, it’s worth considering what Kriegel himself thinks is the ‘weak link’ in his attempt to source intentionality: the problem of introspective access. In The Sources of Intentionality, Kriegel is at pains to point out that “one need not be indulging in any mystery-mongering about first-person access” to provide the kind of experiential observational contact that he needs. No version of introspective incorrigibility follows “from the assertion that we have introspective observational contact with our intentional experiences” (34). Even still, the question of just what kind of observational contact is required is one that he leaves hanging.

In his 2013 paper, 'A Hesitant Defence of Introspection,' Kriegel attempts to tie down this crucial loose thread by arguing for what he calls 'introspective minimalism,' an account of human introspective capacity that can weather what he terms 'Schwitzgebel's Challenge': essentially, the question (arising out of Eric Schwitzgebel's watershed Perplexities of Consciousness) of whether our introspective capacity, whatever it consists in, possesses any cognitive scientific value. He begins by documenting the pervasive, informal role that introspection plays in the 'context of discovery' of the cognitive sciences. The question, however, is how introspection fits into the 'context of justification'—the degree to which it counts as evidence as opposed to mere 'inspiration.' Given the obvious falsehood of what he terms 'introspective maximalism,' he sets out to save some minimalist version of introspection that can serve some kind of evidential role. He turns to olfaction to provide an analogy to the kind of minimal justification that introspection is capable of providing:

Suppose, for instance, that introspection turns out to be as trustworthy as our sense of smell, that is, as reliable and as potent as a normal adult human's olfactory system. Then Introspective minimalism would be vindicated. Normally, when we have an olfactory experience as of raspberries, it is more likely that there are raspberries in our immediate environment (than if we do not have such an experience). Conversely, when there are raspberries in our immediate environment, it is more likely that we would have an olfactory experience as of raspberries (than if there are none). So the 'equireliability' of olfaction and introspection would support introspective minimalism. Such equireliability is highly plausible. (8)

Kriegel’s argument is simply that introspecting some phenomenology reliably indicates the presence of that phenomenology the same way smelling raspberries reliably indicates the presence of raspberries. This is all that’s required, he thinks, to assert “that introspection affords us observational contact with our mental life” (13), and is thus “epistemically indispensable for any mature understanding of the mind” (13). It’s worth noting that Schwitzgebel is actually inclined to concede the analogy, suggesting that his own “dark pessimism about some of the absolutely most basic and pervasive features of consciousness, and about the future of any general theory of consciousness, seems to be entirely consistent with Uriah’s hesitant defense of introspection” (“Reply to Kriegel, Smithies, and Spener,” 4). He agrees then, that introspection reliably tells us that we possess a phenomenology, he just doubts it reliably tells us what it consists in. Kriegel, on the hand, thinks his introspective minimalism gives him the kind of ‘observational contact’ he needs to get his abductive asymmetry argument off the ground.

But does it?

Once again, it pays to flip to the zombie perspective. Given that the zombie olfactory system is a specialized system adapted to the detection of chemical residues in the immediate environment, one might expect it to reliably detect the chemical residue left by raspberries. Given that the zombie introspective system is a specialized system adapted to the detection of brain events, one might expect it to reliably detect those brain events. The first system reliably allows zombies to detect raspberries, and the second reliably allows zombies to detect activity in various parts of their zombie brains.

On this way of posing the problem, however, the disanalogy between the two systems all but leaps out at us. In fact, it’s hard to imagine two more disparate cognitive tasks than detecting something as simple as the chemical signature of raspberries versus something as complex as the machinations of the zombie brain. In point of fact, the brain is so astronomically complicated, it seems all but assured that zombie introspective capacity would be both fractionate and heuristic in the extreme, that it would consist of numerous fixes geared to a variety of problem-ecologies.

One way to possibly repair the analogy would be to scale up the complexity of the problem faced by olfaction. So it’s obvious, to give an example, that the information available for olfaction is far too low-dimensional, far too problem specific, to anchor theoretical accounts of the biosphere. Then, on this repaired analogy, we can say that just as zombie olfaction isn’t geared to the theoretical solution of the zombie biosphere, but rather to the detection of certain environmental obstacles and opportunities, it is almost certainly the case that zombie introspection isn’t geared to the theoretical solution of the zombie brain, but rather to more specific, environmentally germane tasks. Given this, we have no reason whatsoever to presume that what zombies metacognize and report possesses any ‘reliability and potency’ beyond very specific problem-ecologies—the same as with olfaction. On zombie introspection, then, we have no more reason to think that zombies could possibly accurately metacognize the structure of their brain than they could accurately smell the structure of the world.

And this returns us to the whole question of Kriegel's notion of 'observational contact.' Kriegel realizes that 'introspection' isn't simply an all or nothing affair, that it isn't magically 'self-intimating' and therefore admits of degrees of reliability–this is why he sets out to defend his minimalist brand. But he never pauses to seriously consider the empirical requirements of even such minimal introspective capacity.

In essence, what he’s claiming is that the kind of ‘observational contact’ available to philosophical introspection warrants complicating our ontology with a wide variety of (supernatural) intentional phenomena. Introspective minimalism, as he terms it, argues that we can metacognize some restricted set of intentional entities/relations with the same reliability that we cognize natural phenomena. We can sniff these things out, so it stands to reason that such things exist to be sniffed, that introspecting a phenomenology increases the chances that such phenomenology exists (as introspected). With zombie introspection, however, the analogy between olfaction and metacognition strained credulity given the vast disproportion in complexity between olfactory and metacognitive phenomena. It’s difficult to imagine how any natural system could possibly even begin to accurately metacognize the brain.

The difference Kriegel would likely press, however, is that we aren't mindless zombies. Human metacognition, in other words, isn't so much concerned with the empirical particulars of the brain as with the functional particulars of the conscious mind. Even though the notion of accurate zombie introspection is obviously preposterous, the notion of accurate human metacognition would seem to be a different question altogether, the question of what a human introspective capacity requires to accurately metacognize human 'phenomenology' or 'mind.'

The difficulty here, famously, is that there seems to be no noncircular way to answer this question. Because we can’t find intentional phenomena anywhere in the natural world, theoretical metacognition monopolizes our every attempt to specify their nature. This effectively renders assessing the reliability of such metacognitive exercises impossible apart from their ability to solve various kinds of problems. And the difficulty here is that the long history of introspectively motivated philosophical theorization (as opposed to other varieties of metacognition) regarding the nature of the intentional has only generated more problems. For some reason, the kind of metacognition involved in ‘philosophical reflection’ only seems to make matters worse when it comes to questions of intentional phenomena.

The zombie account of this second impasse is at once parsimonious and straightforward: phenomenology (or mind or what have you) is the smell, not the raspberry–that would be some systematic activity in the brain. It is absurd to think any evolved brain, zombie or human, could accurately cognize its own biomechanical operations the way it cognizes causal events in its environment. Kriegel himself concedes as much:

In fact cognitive science can partly illuminate why our introspective grasp of our inner world can be expected to be considerably weaker than our perceptual grasp of the external world. It is well-established that much of our perceptual grasp of the external world relies on calibration of information from different perceptual modalities. Our observation of our internal world, however, is restricted to a single source of information, and not the most powerful to begin with. (13)

And this is but one reason why the dimensionality of the mental is so low compared to the environmental. Given the evolutionary youth of human metacognition, the astronomical complexity of the human nervous system, to say nothing of the problems posed by structural complicity, we should suppose that our metacognitive capacity evolved opportunistically, that it amounts to a metacognitive version of what Todd and Gigerenzer (2012) would call a 'heuristic toolbox,' a collection of systems geared to solve specific problem-ecologies. Since we neglect this heuristic toolbox, we remain oblivious to the fact that we're using a given cognitive tool at all, let alone the limits of its effectiveness. Given that systematic theoretical reflection of the kind philosophers practice is an exaptation from cognitive capacities that predate recorded history, the adequacy of Kriegel's 'deliverances' assumes that our evolved introspective capacity can solve unprecedented questions. This is a very real empirical question. For if it turns out that the problems posed by theoretical reflection are not the problems that intentional cognition can solve, neglect means we would have no way of knowing short of actual problem solving, the solution of problems that plainly can be solved. The inability to plainly solve a problem–like the mind-body problem, say–might then be used as a way to identify where we have been systematically misapplying certain tools, asking information adapted to the solution of some specific problem to contribute to the solution of a very different kind of problem.

Kriegel agrees that self-ascriptions involve seemings, that we are blind to the causes of the mental, and that introspection is likely as low-dimensional as a smell, yet he nevertheless maintains on abductive grounds that observational contact with experiential intentionality sources our concepts of intentionality. But it is becoming difficult to understand what it is that's being explained, or how simply adding inexplicable entities to explanations that bear all the hallmarks of heuristic misapplication is supposed to provide any real abductive warrant at all. Certainly it's intuitive, powerfully so given we neglect certain information, but then so is geocentrism. The naturalist project, after all, is to understand how we are our brain and environment, not how we are more than our brain and environment. That is a project belonging to a more blinkered age.

And as it turns out, certain zombies in the zombie world hold parallel positions. Because zombie metacognition has no access to the impoverished and circumstantially specialized nature of the information it accesses, many zombies process the information they receive the way they would other information, and verbally report the existence of queerly structured entities somehow coinciding with the function of their brain. Since the solving systems involved possess no access to the high-dimensional, empirical structure of the neural systems they actually track, these entities are typically characterized by missing dimensions, be it causality, temporality, or materiality. The fact that these dimensions are neglected disposes these particular zombies to function as if nothing were missing at all—as if certain ghosts, at least, were real.

Yes. You guessed it. The zombies have philosophy too.

The Crux

by rsbakker

Aphorism of the Day: Give me an eye blind enough, and I will transform guttering candles into exploding stars.

.

The Blind Brain Theory turns on the following four basic claims:

1) Cognition is heuristic all the way down.

2) Metacognition is continuous with cognition.

3) Metacognitive intuitions are the artifact of severe informatic and heuristic constraints. Metacognitive accuracy is impossible.

4) Metacognitive intuitions only loosely constrain neural fact. There are far more ways for neural facts to contradict our metacognitive intuitions than otherwise.

A good friend of mine, Dan Mellamphy, has agreed to go through a number of the posts from the past eighteen months with an eye to pulling them together into a book of some kind. I'm actually thinking of calling it Through the Brain Darkly: because of Neuropath, because the blog is called Three Pound Brain, and because of my apparent inability to abandon the tedious metaphorics of neural blindness. Either way, I thought boiling BBT down to its central commitments would be a worthwhile exercise. Like a picture taken on a rare, good hair day…

.

1) Cognition is heuristic all the way down.

I take this claim to be trivial. Heuristics are problem-solving mechanisms that minimize computational costs via the neglect of extraneous or inaccessible information. The human brain is itself a compound heuristic device, one possessing a plurality of cognitive tools (innate and learned component heuristics) adapted to a broad but finite range of environmental problems. The human brain, therefore, possesses a ‘compound problem ecology’ consisting of the range of those problems primarily responsible for driving its evolution, whatever they may be. Component heuristics likewise possess problem ecologies, or ‘scopes of application.’
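To see what 'neglect of extraneous or inaccessible information' amounts to mechanically, consider a sketch of a Gigerenzer-style 'take-the-best' heuristic. The cue table and its validity ordering are invented for illustration; the point is only that the procedure decides on the first discriminating cue and never consults the rest.

```python
# Cues ordered by (stipulated) validity; later cues are simply neglected
# once an earlier cue discriminates.
CUES = ["has_airport", "has_university", "has_team"]

CITIES = {
    "A": {"has_airport": 1, "has_university": 1, "has_team": 0},
    "B": {"has_airport": 1, "has_university": 0, "has_team": 1},
}

def take_the_best(x: str, y: str) -> str:
    """Guess which city is larger using the first discriminating cue."""
    for cue in CUES:
        if CITIES[x][cue] != CITIES[y][cue]:
            return x if CITIES[x][cue] else y   # everything after is ignored
    return x  # no cue discriminates; guess

print(take_the_best("A", "B"))  # decides on 'has_university' alone -> 'A'
```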

.

2) Metacognition is continuous with cognition.

I also take this claim to be trivial. The most pervasive problem (or reproductive obstacle) faced by the human brain is the inverse problem. Inverse problems involve deriving effective information (e.g., mass and trajectory) regarding some unknown, distal phenomenon (e.g., a falling tree) via proximal information (e.g., retinal stimuli) possessing systematic causal relations (e.g., reflected light) to that phenomenon. Hearing, for instance, requires deriving distal causal structures, an approaching car, say, on the basis of proximal effects, the cochlear signals triggered by the sound emitted from the car. Numerous detection technologies (sonar, radar, fMRI, and so on) operate on this very principle, determining the properties of unknown objects from the properties of some signal connected to them.
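A minimal worked example, assuming an inverse-square forward model and, crucially, a known source power: strip away that background assumption and the inversion becomes underdetermined, which is the whole difficulty of inverse problems.

```python
import math

SOURCE_POWER = 100.0   # assumed known; without it, distance is unrecoverable

def intensity_at(distance: float) -> float:
    """Forward model: proximal intensity produced by a distal source."""
    return SOURCE_POWER / (4 * math.pi * distance ** 2)

def infer_distance(measured: float) -> float:
    """Inverse model: invert the forward model to recover the distal fact."""
    return math.sqrt(SOURCE_POWER / (4 * math.pi * measured))

proximal = intensity_at(3.0)      # what the 'cochlea' receives
print(infer_distance(proximal))   # -> 3.0, the distal fact recovered
```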

The brain can mechanically engage its environment because it is mechanically embedded in its environment–because it is, quite literally, just more environment. The brain is that part of the environment that models/exploits the rest of the environment. Thus the crucial distinction between those medial environmental components involved in modelling/enacting (sensory media, neural mechanisms) and those lateral environmental components modelled. And thus, medial neglect, the general blindness of the human brain to its own structure and function, and its corollary, lateral sensitivity, the general responsiveness of the brain to the structure and function of its external environments–or in other words, the primary problem ecology of the heuristic brain.

Medial neglect and lateral sensitivity speak to a profound connection between ignorance and knowledge, how sensitivity to distal, lateral complexities necessitates insensitivity to proximal, medial complexities. Modelling environments necessarily exacts what might be called an ‘autoepistemic toll’ on the systems responsible. The greater the lateral fidelity, the more sophisticated the mechanisms, the greater the surplus of ‘blind,’ or medial, complexity. The brain, you could say, is an organ that transforms ‘risky complexity’ into ‘safe complexity,’ that solves distal unknowns that kill by accumulating proximal unknowns (neural mechanisms) that must be fed.

The parsing of the environment into medial and lateral components represents more a twist than a scission: the environment remains one environment. Information pertaining to brain function is environmental information, which is to say, information pertinent to the solution of potential environmental problems. Thus metacognition: heuristics that access information pertaining to the brain's own operations.

Since metacognition is continuous with cognition, another part of the environment engaged in problem solving the environment, it amounts to the adaptation of neural mechanisms sensitive in effective ways to other neural mechanisms in the brain. The brain, in other words, poses an inverse problem for itself.

.

3) Metacognitive intuitions are the artifact of severe informatic and heuristic constraints. Metacognitive accuracy is impossible.

This claim, which is far more controversial than those above, directly follows from the continuity of metacognition and cognition–from the fact that the brain itself constitutes an inverse problem. This is because, as an inverse problem, the brain is quite clearly insoluble. Two considerations in particular make this clear:

1) Target complexity: The human brain is the most complicated mechanism known. Even as an external environmental problem, it has taken science centuries to accumulate the techniques, information, and technology required to merely begin the process of providing any comprehensive mechanistic explanation.

2) Target complicity: The continuity of metacognition and cognition allows us to see that the structural entanglement of metacognitive neural mechanisms with the neural mechanisms tracked, far from providing any cognitive advantage, thoroughly complicates the ability of the former to derive high-dimensional information from the latter. One might analogize the dilemma in terms of two biologists studying bonobos, the one by observing them in their natural habitat, the other by being sewn into a burlap sack with one. Relational distance and variability provide the biologist-in-the-habitat quantities and kinds (dimensions) of information simply not available to the biologist-in-the-sack. Perhaps more importantly, they allow the former to cognize the bonobos without the complication of observer effects. Neural mechanisms sensitive to other neural mechanisms access information via dedicated, as opposed to variable, channels, and as such are entirely 'captive': they cannot pursue the kinds of active environmental engagement that permit the kind of high-dimensional tracking/modelling characteristic of cognition proper.

Target complexity and complicity mean that metacognition is almost certainly restricted to partial, low-dimensional information. There is quite literally no way for the brain to cognize itself as a brain–which is to say, accurately. Thus the mind-body problem. And thus a good number of the perennial problems that have plagued philosophy of mind and philosophy more generally (which can be parsimoniously explained away as different consequences of informatic privation). Heuristic problem-solving does not require the high-dimensional fidelity that characterizes our sensory experience of the world, as simpler life forms show. The metacognitive capacities of the human brain turn on effective information, scraps gleaned via adventitious mutations that historically provided some indeterminate reproductive advantage in some indeterminate context. The brain mistakes these scraps for wholes–suffers the cognitive illusion of sufficiency–simply because it has no way of cognizing its informatic straits as such. Because of this, it perpetually mistakes what could be peripheral fragments in neurofunctional terms for the entirety and the crux.

.

4) Metacognitive intuitions only loosely constrain neural fact. There are far more ways for neural facts to contradict our metacognitive intuitions than otherwise.

Given the above, the degree to which the mind is dissimilar to the brain is the degree to which deliberative metacognition is simply mistaken. The futility of philosophy is no accident on this account. When we 'reflect upon' conscious cognition or experience, we are accessing low-dimensional information matched to metacognitive heuristics adapted to the narrow problem ecologies faced by our preliterate–prephilosophical–ancestors. Thanks to medial neglect, we are utterly blind to the actual neurofunctional context of the information expressed in experience. Likewise, we have no intuitive inkling of the metacognitive apparatuses at work, no idea whether they are many as opposed to one, let alone whether they are at all applicable to the problem they have been tasked to solve. Unless, that is, the task requires accuracy–getting some theoretical metacognitive account of mind or meaning or morality or phenomenology right–in which case we have good grounds (all our manifest intuitions to the contrary) to assume that such theoretical problem ecologies are hopelessly out of reach.

Experience, the very sum of significance, is a kind of cartoon that we are. Metacognition assumes the mythical accuracy (as opposed to the situation-specific efficacy) of the cartoon simply because that cartoon is all there is, all there ever has been. It assumes sufficiency because, in other words, cognizing its myriad limits and insufficiencies requires access to information that simply does not exist for metacognition.

The metacognitive illusion of sufficiency means that the dissociation between our metacognitive intuition of function and actual neural function can be near complete, that memory need not be veridical, the feeling of willing need not be efficacious, self-identity need not be a ‘condition of possibility,’ and so on, and so on. It means, in other words, that what we call ‘experience’ can be subreptive through and through, and still seem the very foundation of the possibility of knowledge.

It means that, all things being equal, the thoroughgoing neuroscientific overthrow of our manifest self-understanding is far, far more likely than even its marginal confirmation.

The Introspective Peepshow: Consciousness and the ‘Dreaded Unknown Unknowns’

by rsbakker

On February 12th, 2002, Secretary of Defense Donald Rumsfeld was famously asked in a DoD press conference about the American government's failure to provide evidence regarding Iraq's alleged provision of weapons of mass destruction to terrorist groups. His reply, which was lampooned in the media at the time, has since become something of a linguistic icon:

[T]here are known knowns; there are things we know that we know. There are known unknowns; that is to say there are things that we know we don’t know. But there are also unknown unknowns; there are things we don’t know we don’t know.

In 2003, this comment earned Rumsfeld the ‘Foot in Mouth Award’ from the British-based Plain English Campaign. Despite the scorn and hilarity it occasioned in mainstream culture at the time, the concept of unknown unknowns, or ‘unk-unk’ as it is sometimes called, has enjoyed long-standing currency in military and engineering circles. Only recently has it found its way to business and economics (in large part due to the work of Daniel Kahneman), where it is often referred to as the ‘dreaded unknown unknown.’ For enterprises involving risk, the reason for this dread is quite clear. Even in daily life, we speak of being ‘blind-sided,’ of things happening ‘out of the blue’ or coming ‘out of left field.’ Our institutions, like our brains, have evolved to manage and exploit environmental regularities. Since knowing everything is impossible, we have at our disposal any number of rehearsed responses, precooked ways to deal with ‘known unknowns,’ or irregularities that are regular enough to be anticipated. Unknown unknowns refer to those events that find us entirely unprepared–often with catastrophic consequences.

Given that few human activities are quite so sedate or 'risk free' as consciousness research and the philosophy of mind, unk-unk might seem out of place in those contexts. But as I hope to show, such is not the case. The unknown unknown, I want to argue, has a profound role to play in developing our understanding of consciousness. Unfortunately, since the unknown unknown itself constitutes an unknown unknown within cognitive science, let alone consciousness research, the route required to make my case is necessarily circuitous. As John Dewey (1958) observed, “We cannot lay hold of the new, we cannot even keep it before our minds, much less understand it, save by the use of ideas and knowledge we already possess” (viii-ix).

Blind-siding readers rarely pays. With this in mind, I begin with a critical consideration of Peter Carruthers' (forthcoming, 2011, 2009a, 2009b, 2008) 'innate self-transparency thesis,' the account of introspection entailed by his more encompassing 'mindreading first thesis' (or as he calls it in The Opacity of the Mind (2011), Interpretative Sensory-Access Theory (ISA)). I hope to accomplish two things with this reading: 1) illustrate the way explanations in the cognitive sciences so often turn on issues of informatic tracking; and 2) elaborate an alternative to Carruthers' innate self-transparency thesis that makes, in a preliminary fashion at least, the positive role played by the unknown unknown clear.

Since what I propose subsequent to this first leg of the article can only sound preposterous short of this preliminary, I will commit the essayistic sin (and rhetorical virtue) of leaving my final conclusions unstated–as a known unknown, worth mere curiosity, perhaps, but certainly not dread.

.

Follow the Information

Explanations in cognitive science generally adhere to the explanatory paradigm found in the life sciences: various operations are 'identified' and a variety of mechanisms, understood as systems of components or 'working parts,' are posited to discharge them (Bechtel and Abrahamsen 2005, Bechtel 2008). In cognitive science in particular, the operations tend to be various cognitive capacities or conscious phenomena, and the components tend to be representations embedded in computational procedures that produce more representations. Theorists continually tear down and rebuild what are in effect virtual 'explanatory machines,' using research drawn from as many related fields as possible to warrant their formulations. Whether the operational outputs are behavioural, epistemic, or phenomenal, these virtual machines inevitably involve asking what information is available for what component system or process.

Let’s call this process of information tracking the ‘Follow the Information Game’ (FIG).

In a superficial sense, playing FIG is not all that different from playing detective. In the case of criminal investigations, evidence is assembled and assessed, possible motives are considered, various parties to the crime are identified, and an overarching narrative account of who did what to whom is devised and, ideally, tested. In the case of cognitive investigations, evidence is likewise assembled and assessed, possible evolutionary 'motives' are considered, a number of contributing component mechanisms are posited, and an overarching mechanistic account of what does what for what is devised for possible experimental testing. The 'doing' invariably involves discharging some computational function, processing and disseminating information for subsequent computation. The theorist quite literally 'follows the information' from mechanism to mechanism, using a complex stew of evolutionary rationales, experimental results, and neuropathological case studies to warrant the various specifics of the resulting theoretical account.

We see this quite clearly in the mindreading versus metacognition debate, where the driving question is one of how we attribute propositional attitudes (PAs) to ourselves as opposed to others. Do we have direct 'metacognitive' access to our beliefs and desires? Is mindreading a function of metacognition? Is metacognition a function of mindreading? Or are they simply different channels of a singular mechanism? Any answer to these questions requires mapping the flow of information, which is to say, playing FIG. This is why, for example, Peter Carruthers' "How we know our own minds" and the following Open Peer Commentary read like transcripts of the diplomatic feuding behind the Treaty of Versailles. It's an issue of mapping, but instead of arguing over coal mines in Silesia and ports on the Baltic, the question is one of how the brain's informatic spoils are divided.

Carruthers puts forth a 'mindreading first' account, arguing that our self-attributions of PAs rely on the same interpretative mechanisms we use to 'mind read' the PAs of others:

There is just a single metarepresentational faculty, which probably evolved in the first instance for purposes of mindreading… In order to do its work, it needs to have access to perceptions of the environment. For if it is to interpret the actions of others, it plainly requires access to perceptual representations of those actions. Indeed, I suggest that, like most other conceptual systems, the mindreading system can receive as input any sensory or quasi-sensory (e.g., imagistic or somatosensory) state that gets “globally broadcast” to all judgment-forming, memory-forming, desire-forming, and decision-making systems. (2009b, 3-4)

In this article, he provides a preliminary draft of the informatic map he subsequently fleshes out in The Opacity of the Mind. He takes Baars' (1988) Global Workspace Theory of Consciousness as a primary assumption, which requires him to distinguish between information that is and is not 'globally broadcast.' Consistent with the massive modularity endorsed in The Architecture of the Mind (2006), he posits a variety of informatically 'encapsulated' mechanisms operating 'subpersonally' or outside conscious access. The 'mindreading system,' not surprisingly, is accorded the most attention. Other mechanisms, when not directly recruited from preexisting cognitive scientific sources, are posited to explain various folk-psychological categories, such as belief. The tenability of these mechanisms turns on what might be called the 'Accomplishment Assumption,' the notion that all aspects of mental life that can be (or as in the case of folk psychology, already are) individuated are the accomplishments of various discrete neural mechanisms.

Given these mechanisms, Carruthers makes a number of ‘access inferences,’ each turning on the kinds of information required for each mechanism to discharge its function. To interpret the actions of others, the mindreading system needs access to information regarding those actions, which means it needs access to those systems dedicated to gathering that information. Given the apparently radical difference between self and other interpretation, Carruthers needs to delineate the kind of access characteristic of each:

Although the mindreading system has access to perceptual states, the proposal is that it lacks any access to the outputs of the belief-forming and decision-making mechanisms that feed off those states. Hence, self-attributions of propositional attitude events like judging and deciding are always the result of a swift (and unconscious) process of self-interpretation. However, it isn’t just the subject’s overt behavior and physical circumstances that provide the basis for the interpretation. Data about perceptions, visual and auditory imagery (including sentences rehearsed in “inner speech”), patterns of attention, and emotional feelings can all be grist for the self-interpretative view. (2009b, 4)

So the brain does possess belief mechanisms and the like, but they are informatically segregated from the suite of mechanisms responsible for generating the self-attribution of PAs. The former, it seems, do not ‘globally broadcast,’ and so their machinations must be gleaned the same way our brains glean the machinations of other brains, via their interpretative mindreading systems. Since, however, the mindreading system has no access to any information globally broadcast by other brains, he has to concede that the mindreading system is privy to additional information in instances of self-attribution, just not any involving direct access to the mechanisms responsible for PAs. So he lists what he presumes is available.

The problem, of course, is that it just doesn't feel that way. Assumptions of unmediated access or self-transparency, Carruthers writes, “seem to be almost universal across times and cultures” (2011, 15), not to mention “widespread in philosophy.” If we are forced to rely on our environmentally-oriented mindreading systems to interpret, as opposed to intuit, the function of our own brains, then why should we have any notion of introspective access to our PAs, let alone the presumption of unmediated access? Why presume an incorrigible introspective access that we simply do not have?

Carruthers offers what might be called a 'less is more account.' The mindreading system, he proposes, represents its self-application as direct rather than interpretative. Our sense of self-transparency is the product of a mechanism. Once we have a mechanism, however, we require some kind of evolutionary story warranting its development. Carruthers argues that the presumption of incorrigible introspective access spares the brain a complicated series of reliability computations that would yield no real gain in reliability. “The transparency of our minds to ourselves,” he explains in an interview, “is a simplifying but false heuristic…” Citing Gigerenzer and Todd (1999), he points out that heuristics, even deceptive ones, regularly out-perform more fine-grained computational processes simply because of the relation between complexity and error. So long as self-interpretation via the mindreading system is generally reliable, this 'Cartesian assumption' or 'self-transparency thesis' (Carruthers 2008) possesses the advantage of simplicity to the extent that it relieves the need for computational estimations of interpretative reliability. The functional adequacy of a direct access model, in other words, more than compensates for its epistemic inadequacy, once one considers the metabolic cost and 'robustness,' as they say in ecological rationality circles, of the former versus the latter.

This explanation provides us with a clear-cut example of what I called the Accomplishment Assumption above. Given that ‘direct introspective access’ seems to be a discrete feature of mental life, it seems plausible to suppose that some discrete neural mechanism must be responsible for producing it. But there is a simpler explanation, one that draws out some of the problematic consequences of the ‘Follow the Information Game’ as it is presently played in cognitive science. A clue to this explanation can be found when Eric Schwitzgebel (2011) considers the selfsame problem:

Why, then, do people tend to be so confident in their introspective judgments, especially when queried in a casual and trusting way? Here is my guess: Because no one ever scolds us for getting it wrong about our experience and we never see decisive evidence of our error, we become cavalier. This lack of corrective feedback encourages a hypertrophy of confidence. [emphasis added] (130)

Given his skepticism of ‘boxological’ mechanistic explanation (2011, 2012), Schwitzgebel can circumvent Carruthers’ dilemma (the mindreading system represents agent access either as direct or as interpretative) and simply pose the question in a far less structured way. Why do we possess unwarranted confidence in our introspective judgements? Well, no one tells us otherwise. But this simply begs the question of why. Why should we require ‘social scolding’ to ‘see decisive evidence of our error’? Why can’t we just see it on our own?

The easy answer is that, short of different perspectives, the requisite information is simply not available to us. The problem, in Schwitzgebel's characterization, is that we have only a single perspective on our conscious experience, one lacking access to information regarding the limitations of introspection. In other words, the near universal presumption of self-transparency is an artifact of the near universal lack of any information otherwise. On this account, you could say the traditional, prescientific assumption of self-transparency is not so different from the traditional, prescientific assumption of geocentrism. We experience 'vection,' a sense of bodily displacement, whenever a large portion of our visual field moves. Short of that perceived motion (or other vestibular effects), a sense of motionlessness is the cognitive default. This was why the accumulation of so much (otherwise inaccessible) scientific knowledge was required to overturn geocentrism: not because we possessed an 'innate representation' of a motionless earth, but because of the interplay between our sensory limitations and our evolved capacity to detect motion.

The self-transparency assumption, on this account, is simply a kind of 'noocentrism,' the result of a certain limiting relationship between the information available and the cognitive systems utilized. The problem with geocentrism was that we were all earthbound, literally limited to what meagre extraterrestrial information our native senses could provide. That information, given our cognitive capacities, made geocentrism intuitively obvious. Thus the revolutionary significance of Galileo and his Dutch Spyglass. The problem with noocentrism, on the other hand, is that we are all brainbound, literally limited to what neural information our introspective 'sense' can provide. As it turns out, that information, given our cognitive capacities, makes noocentrism intuitively obvious. Why? Because short of any Neural Spyglass, we lack any information regarding the insufficiency of the information at our disposal. We assume self-transparency because there is literally no other assumption to make.

One need only follow the information. From a dual process perspective (Stanovich, 1999; Stanovich and Toplak, 2011), the globally broadcast information accessed for System 2 deliberation contains no information regarding its interpretative (and thus limited) status. Given that global broadcasting or integration operates within fixed bounds, System 2 has no way of testing, let alone sourcing, the information it receives. Thus, one cannot know whether the information available for introspection is insufficient in this or that respect. And since the information accessed is never flagged for insufficiencies (and why should it be, when it is generally reliable?), sufficiency will always be the assumptive default.
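The structure of the claim can be caricatured mechanically. In the sketch below (all field names invented), a stand-in for System 2 receives only what is 'broadcast,' and nothing in what it receives marks what was withheld, so nothing ever prompts anything other than the default assumption of sufficiency.

```python
FULL_STATE = {          # the actual, high-dimensional process
    "judgment": "the wine is corked",
    "source": "interpretation via the mindreading system",
    "reliability": 0.7,
}

BROADCAST_FIELDS = ["judgment"]   # only this reaches global broadcast

def broadcast(state: dict) -> dict:
    """What System 2 gets: selected fields, and no record of the selection."""
    return {key: state[key] for key in BROADCAST_FIELDS}

received = broadcast(FULL_STATE)
print(received)              # {'judgment': 'the wine is corked'}
print("source" in received)  # False; and nothing marks the absence, so the
                             # judgment presents itself as direct and complete.
```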

Given that Carruthers’ innate self-transparency account is one that he has developed with great care and ingenuity over the course of several years, a full rebuttal of the position would require an article in its own right. It’s worth noting, however, that many of the advantages that he attributes to his self-transparency mechanism also fall out of the default self-transparency account proposed here, with the added advantage of exacting no metabolic or computational cost whatsoever. You could say it’s a ‘more for even less’ account.

But despite its parsimony, there’s something decidedly strange about the notion of default self-transparency. Carruthers himself briefly entertains the possibility in The Opacity of the Mind, stating that “[a] universal or near-universal commitment to transparency may then result from nothing more than the basic principle or ‘law’ that when something appears to be the case one is disposed to form the belief that it is the case, in the absence of countervailing considerations or contrary evidence” (15). How might this ‘basic principle or law’ be characterized? Carruthers, I think, shies from pursuing this line of questioning simply because it presses FIG into hitherto unexplored territory.

Parsimony alone motivates a sustained consideration of what lies behind default self-transparency. Emily Pronin (2009), for instance, in her consideration of the 'introspection illusion,' draws an important connection between the assumption of self-transparency and the so-called 'bias blind spot,' the fact that biases we find obvious in others are almost entirely invisible to ourselves. She details a number of studies where subjects were even more prone to exhibit this 'blindness' when provided opportunities to introspect. Now why are these biases invisible to us? Should we assume, as Carruthers does in the case of self-transparency, that some mechanism or mechanisms are required to represent our intuitions as unbiased in each case? Or should we exercise thrift and suppose that something structural is implicit in each?

In what follows, I propose to pursue the latter possibility, to argue that what I called ‘default sufficiency’ above is an inevitable consequence of mechanistic explanation, or FIG, once we appreciate the systematic role informatic neglect plays in human cognition.

.

The Invisibility of Ignorance

Which brings us to Daniel Kahneman. In a New York Times (2011, October 19) piece entitled “Don’t Blink! The Hazards of Confidence,” he writes of his time in the Psychology Branch of the Israeli Army, where he was tasked with evaluating candidates for officer training by observing them in a variety of tests designed to isolate soldiers’ leadership skills. His evaluations, as it turned out, were almost entirely useless. But what surprised him was the way knowing this seemed to have little or no impact on the confidence with which he and his fellows submitted their subsequent evaluations, time and again. He was so struck by the phenomenon that he would go on to study it as the ‘illusion of validity,’ a specific instance of the general role the availability of information seems to play in human cognition–or as he later terms it, What-You-See-Is-All-There-Is, or WYSIATI.

The idea, quite simply, is that because you don’t know what you don’t know, you tend, in many contexts, to think you know all that you need to know. As he puts it in Thinking, Fast and Slow:

An essential design feature of the associative machine is that it represents only activated ideas. Information that is not retrieved (even unconsciously) from memory might as well not exist. [Our automatic cognitive system] excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have. (2011, 85)

As Kahneman shows, this leads to myriad errors in reasoning, including our peculiar tendency in certain contexts to be more certain about our interpretations the less information we have available. The idea is so simple as to be platitudinal: only the information available for cognition can be cognized. Other information, as Kahneman says, “might as well not exist” for the systems involved. Human cognition, it seems, abhors a vacuum.

The problem with platitudes, however, is that they are all too often overlooked, even when, as I shall argue in this case, their consequences are spectacularly profound. In the case of informatic availability, one need only look to clinical cases of anosognosia to see the impact of what might be called domain-specific informatic neglect, the neuropathological loss of specific forms of information. Given a certain, complex pattern of neural damage, many patients suffering deficits as profound as lateralized paralysis, deafness, even complete blindness, appear to be entirely unaware of the deficit. Perhaps because of the informatic bandwidth of vision, visual anosognosia, or ‘Anton’s Syndrome,’ is generally regarded as the most dramatic instance of the malady. Prigatano (2010) enumerates the essential features of the syndrome as follows:

First, the patient is completely blind secondary to cortical damage in the occipital regions of the brain. Second, these lesions are bilateral. Third, the patient is not only unaware of her blindness; she rejects any objective evidence of her blindness. Fourth, the patient offers plausible, but at times confabulatory responses to explain away any possible evidence of her failure to see (e.g., “The room is dark,” or “I don’t have my glasses, therefore how can I see?”). Fifth, the patient has an apparent lack of concern (or anosodiaphoria) over her neurological condition. (456)

These symptoms are almost tailor-made for FIG. Obviously, the blindness stems from the occlusion of raw visual information. The second-order ‘blindness,’ the patient’s inability to ‘see’ that they cannot see, turns, one might suppose, on the unavailability of information regarding the unavailability of visual information. At some crucial juncture, the information required to process the lack of visual information has gone missing. As Kahneman might say, since System 1 is dedicated to the construction of ‘the best possible story’ given only the information it has, the patient confabulates, utterly convinced they can see even though they are quite blind.

Anton’s Syndrome, in other words, can be seen as a neuropathological instance of WYSIATI. And WYSIATI, conversely, can be seen as a non-neuropathological version of anosognosia. And both, I want to argue, are analogous to the default self-transparency thesis I offered in lieu of Carruthers’ innate self-transparency thesis above. Consider the following ‘translation’ of Prigatano’s symptoms, only applied to what might be called ‘Carruthers’ Syndrome’:

First, the philosopher is introspectively blind to his PAs secondary to various developmental and structural constraints. Second, the philosopher is not aware of his introspective blindness, and is prone to reject objective evidence of it. Third, the philosopher offers plausible, but at times confabulatory responses to explain away evidence of his inability to introspectively access his PAs. And fourth, the philosopher often exhibits an apparent lack of concern for his less than ideal neurological constitution.

Here we see how the default self-transparency thesis I offered above is capable of filling the explanatory shoes of Carruthers’ innate self-transparency thesis: it simply falls out of the structure of cognition. In FIG terms, what philosophers call ‘introspection’ possibly provides some combination of impoverished information, skewed information, or (what amounts to the same) information matched to cognitive systems other than those employed in deliberative cognition, without–and here’s the crucial twist–providing information to this effect. Our sense of self-transparency, in other words, is a kind of ‘unk-unk effect,’ what happens when we can’t see that we can’t see. In the absence of information to the contrary, what is globally broadcast (or integrated) for System 2 deliberative uptake, no matter how attenuated, seems to become everything there is to apprehend.

But what does it mean to say that default self-transparency ‘falls out of the structure of cognition’? Isn’t this, for instance, a version of ‘belief perseverance’? Prima facie, at least, something like Keith Stanovich’s (1999) ‘knowledge projection argument’ might seem to offer an explanation, the notion that “in a natural ecology where most of our prior beliefs are true, projecting our beliefs onto new data will lead to faster accumulation of knowledge” (Sá, 1999, 506). But as the analogy to Kahneman’s WYSIATI and Anton’s Syndrome should make clear, something considerably more profound than the ‘projection of prior beliefs’ seems to be at work here. The question is what.

Consider the following: On Carruthers’ innate self-transparency account, the assumption seems to be that short of the mindreading system telling us otherwise, we would know that something hinky is afoot. But how? To paraphrase Plato, how could we, having never seen otherwise, know that we were simply guessing at a parade of shadows? What kind of cognitive resources could we draw on? We couldn’t source the information back to the mindreading system. Neither could we compare it with some baseline–some introspective yardstick of informatic sufficiency. In fact, it’s actually difficult to imagine how we might come to doubt introspectively accessed information at all, short of regimented, deliberative inquiry.

So then why does Carruthers seem to make the opposite assumption? Why does he assume that we would know short of some representational device telling us otherwise?

To answer this question we first need to appreciate the ubiquity of 'unk-unk effects' in the natural world. The exploitation of cognitive scotomata, or blind spots, has shaped the evolution of entire species, including our own. Consider the apparently instinctive nature of human censoriousness, the implicit understanding that managing the behaviour of others requires managing the information they have available. Consider mimicry or camouflage. Or consider 'obligate brood parasites' such as the cuckoo, which lays its eggs in the nests of other birds to be raised to maturity by them. Looked at in purely biomechanical terms, these are all examples of certain organic systems exploiting (by operating outside) the detection/response thresholds of other organic systems. Certainly the details of these interactions remain a work in progress, but the principle is not at all mysterious. One might say the same of Anton's Syndrome, or anosognosia more generally: disabling certain devices systematically impacts the capacities of the system in some dramatic ways, including deficit detection. The lack of information constrains computation, constrains cognition, period. It seems pretty straightforward, mechanically speaking.

So why, then, does Anton's jar against our epistemic intuitions the way it does? Why do we want to assume that somehow, even if we suffered the precise pattern of neural damage, we would be the magical exception, that we would say, "Aha! I only think I see!"?

Because when we are blind to our blindnesses, we think we see, either actually or potentially, all that there is to be seen. Or as Kahneman would put it, because of WYSIATI. We think we would be the one Anton’s patient who would actually cognize their loss of sight, in other words, for the very same reason the Anton’s patient is convinced he can still see! The lack of information not only constrains cognition, it constrains cognition in ways that escape cognition. We possess, not a representational presumption of introspective omniscience, but a structural inability to cognize the limits of metacognition.

You might say introspection is a kind of anosognosiac.

So why does Carruthers assume the mindreading system needs an incorrigibility device? The Accomplishment Assumption forces his hand, certainly. He thinks he has an apparently discrete intuition–self-transparency–that has to be generated somehow. But in explaining away the intuition he is also paradoxically serving it, because even if we agree with Carruthers, we nonetheless assume we would know something is up if incorrigibility wasn’t somehow signalled. There’s a sense, in other words, in which Carruthers’ argument against self-transparency appeals to it!

Now this broaches the question of how informatic neglect bears on our epistemic intuitions more generally. My goal here, however, is simply to illustrate, through an account of the role it plays in introspection, that informatic neglect has a pivotal role to play in our understanding of cognition. Suffice to say the 'basic principle or law' that Carruthers considers in passing is actually more basic than the 'disposition to believe in the absence of countervailing considerations.' Our cognitive systems simply cannot allow, to use Kahneman's terms, for information they do not have. This is a brute fact of natural information processing systems.

Sufficiency is the default because information, understood as systematic differences making systematic differences, is effective. This is why, for instance, unknowns must be known–flagged as known unknowns–to effect changes in behaviour. And this is what makes research on cognitive biases and the neuropathologies of neglect so unsettling: they clearly show the way we are mere mechanisms, cognitive systems causally bound to the information available. If the informatic and cognitive limits of introspection are not available for introspection (and how could they be?), then introspection will seem, curiously, limitless, no matter how severe the actual limits may be.
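
The point can be dramatized mechanically with a toy sketch–my own illustration, on stipulations entirely my own, not a model drawn from any of the literature cited here. An inference routine that simply computes over whatever evidence it happens to receive has no channel through which missing evidence could register, so its output arrives with the same air of sufficiency whether the evidence is complete or gutted:

```python
# Toy sketch (my construction): a verdict function that computes over
# whatever evidence it receives. Absent evidence leaves no trace the
# function can register, so nothing distinguishes a well-informed verdict
# from a starved one -- except, invisibly, its accuracy.

def verdict(evidence):
    """Judge and report confidence using only what is present."""
    score = sum(evidence.values())
    confidence = abs(score) / max(len(evidence), 1)
    return ("guilty" if score > 0 else "innocent"), confidence

full = {"witness": 1, "alibi": -2, "forensics": -1}  # exculpatory overall
gutted = {"witness": 1}                              # exculpatory items never arrive

print(verdict(full))    # ('innocent', ~0.67): right verdict, modest confidence
print(verdict(gutted))  # ('guilty', 1.0): wrong verdict, maximal confidence
```

The function is not mistaken about anything it computes; it is structurally mute about everything it never receives.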

The potential severity of those limits remains to be seen.

.

Introspection and the Bayesian Brain

Since unknown unknowns offer FIG nothing to follow, it should perhaps come as no surprise that the potential relevance of unk-unks has itself remained an unknown unknown in cognitive science. The idea proposed here is that 'naive introspection' be viewed as a kind of natural anosognosia, as a case where we think we see, even though we are largely blind. It stands, therefore, squarely in the 'introspective unreliability' camp most forcefully defended by Eric Schwitzgebel (2007, 2008, 2011a, 2011b, 2012). Jacob Hohwy (2011, 2012), however, has offered a novel defence of introspective reliability via a sustained consideration of Karl Friston's (2006; see 2012 for an overview) free energy elaboration of the Bayesian brain hypothesis, an approach which has recently been making inroads due to the apparent comprehensiveness of its explanatory power.

Hohwy (2011) argues that the introspective unreliability suggested by Schwitzgebel is in fact better explained by phenomenological variability. Introspection only appears as unreliable as it does on Schwitzgebel's account because that account assumes a relatively stable phenomenology. "The evidence," Hohwy writes, "can be summarized like this: everyday or 'naive' introspection tells us that our phenomenology is stable and certain but, surprisingly, calm and attentive introspection tells us our phenomenology is not stable and certain, rather it is variable and uncertain" (265). In other words, either 'attentive introspection' is unreliable and phenomenology is stable, or 'naive introspection' is unreliable and phenomenology is in fact variable.

Hohwy identifies at least three sources of potential phenomenological variability on Friston’s free energy account: 1) attenuation of the ‘prediction error landscape’ through ‘inferences’ that cancel out predictive success and allow unpredicted input to ascend; 2) change through ‘agency’ and movement; and 3) increase in precision and gain via attention. Thus, he argues “[i]f the brain is this kind of inference-machine, then it is a fundamental expectation that there is variability in the phenomenology engendered by perceptual inferences, and to which introspection in turn has access” (270).

The problem with saving introspective reliability by arguing phenomenal variability, however, is that it becomes difficult to understand what in operational terms is exactly being saved. Is the target too quick? Or is the tracking too slow? Hohwy can adduce evidence and arguments for the variability of conscious experience, and Schwitzgebel can adduce evidence and arguments for the unreliability of introspection, but there is a curious sense in which their conclusions are the same: in a number of respects conscious experience eludes introspective cognition.

Setting aside this argument, the real value in Hohwy's account lies in his consideration of what might be called introspective applicability and introspective interference. Regarding the first, applicability, Hohwy is concerned with distinguishing those instances where the researcher's request, 'Please, introspect,' is warranted from those where it is 'suboptimal.' He discusses the so-called 'default mode network,' the systems of the brain engaged when the subject's thoughts and imagery are detached from the world, as opposed to the systems engaged when the subject is directly involved with his or her environment. He then argues that the variance in introspective reliability one finds between experiments can be explained by whether the mental tasks involved engage the default mode or the environmental mode. Tasks involving the default mode evince greater reliability, he suggests, simply because the request to introspect is profoundly artificial when the subject is environmentally engaged.

His argument, in other words, is that introspection, as an adaptive, evolutionary artifact, is not a universally applicable form of cognition, and that the apparent unreliability of introspection is potentially a product of researchers asking subjects to apply introspection 'out of bounds,' in ways that it simply was not designed to be used. In ecological rationality terms (Todd and Gigerenzer, 2012), one might say introspection is a specialized cognitive tool (or collection of tools), a heuristic like any other, and as such will only function properly to the degree that it is matched to its 'ecology.' This possibility raises a host of questions. If introspection, far from being the monolithic, information-maximizing faculty assumed by the tradition, is actually a kind of cognitive tool box, a collection of heuristics adapted to discharge specific functions, then we seem to be faced with the onerous task of identifying the tools and matching them to the appropriate tasks.
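
The 'out of bounds' worry can be made concrete with a sketch–my own toy, on parameters I have simply stipulated, not Hohwy's or Gigerenzer's actual tasks. A single fast-and-frugal cue heuristic performs admirably in the ecology it is matched to and collapses to chance when transplanted, and nothing in the heuristic itself registers the difference:

```python
import random

random.seed(0)

def one_cue_heuristic(a, b):
    # Pick whichever item scores higher on a single cheap cue -- the
    # heuristic never consults the hidden criterion directly.
    return a if a["cue"] >= b["cue"] else b

def make_ecology(cue_validity, n=2000):
    # Items whose cue tracks the hidden criterion with probability
    # cue_validity; otherwise the cue is pure noise.
    items = []
    for _ in range(n):
        criterion = random.random()
        cue = criterion if random.random() < cue_validity else random.random()
        items.append({"criterion": criterion, "cue": cue})
    return items

def accuracy(ecology):
    hits, total = 0, 0
    for a, b in zip(ecology[::2], ecology[1::2]):
        pick = one_cue_heuristic(a, b)
        truth = a if a["criterion"] >= b["criterion"] else b
        hits += pick is truth
        total += 1
    return hits / total

print(accuracy(make_ecology(0.9)))  # matched ecology: ~0.9
print(accuracy(make_ecology(0.0)))  # mismatched ecology: ~0.5, chance
```

Note that the heuristic returns an answer with equal alacrity in both ecologies; the mismatch is visible only from outside, which is precisely the worry when the heuristic in question is 'introspection.'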

Regarding introspective interference, the question, to paraphrase Hohwy, is whether introspection changes or leaves phenomenal states as they are (262). In the course of discussing the likelihood that introspection involves a plurality of processes pertaining to different domains, he provides the following footnote:

Another tier can potentially be added to this account, directed specifically at the cognitive mechanisms underpinning introspection itself. If introspection is itself a type of internal predictive inference taking phenomenal states as input, then introspective inference would be subject to the similar types of prediction error dynamics as perceptual inference itself. In this way introspective inference about phenomenality would add variability to the already variable phenomenality. This sketch of an approach to introspection is attractive because it treats introspection as also a type of unconscious inference; however, it remains to be seen if it can be worked out in satisfactory detail and I do not here want to defend introspection by subscribing to a particular theory about it. 270

By subscribing to Friston's free energy account, Hohwy is committed to an account that conceives the brain as a mechanism that extracts information regarding the causal structure of its environment via the sensory effects of that environment. As Hohwy (2012) puts it, a 'problem of representation' follows from this, since the brain is stranded with sensory effects and so has no direct access to causes; as a result, it needs to establish causal relations de novo. Sensory input contains patterns as well as noise, the repetition of which allows the formation of predictions, which can be 'tested' against further repetitions. Prediction error minimization (PEM) allows the system to automatically adapt to real causal patterns in the environment, which can then be said to 'supervise' the system. The idea is that the brain contains a hierarchy of ascending PEM levels, beginning with basic sensory and causal regularities, and with the 'harder to predict' signals being passed upward, ultimately producing representations of the world possessing 'causal depth.' All these levels exhibit 'lateral connectivity,' allowing the refinement of prediction via 'contextual information.'
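
To fix intuitions, here is a deliberately minimal caricature of the PEM story–my own toy, not Friston's actual free energy formalism, with the learning rate and signal invented for the occasion. A level predicts its input, revises its prediction against the error, and passes only the residual error upward:

```python
# Minimal caricature of prediction error minimization (a toy, not
# Friston's formalism). The level predicts its input; only the residual
# error -- the 'hard to predict' part -- ascends to the next level.

def pem_level(signal, learning_rate=0.1):
    prediction = 0.0
    errors = []
    for s in signal:
        error = s - prediction               # what the level failed to predict
        prediction += learning_rate * error  # revise toward the input
        errors.append(error)
    return errors                            # all the next level ever sees

# A stable environmental regularity with a one-off surprise in the middle:
signal = [1.0] * 50 + [5.0] + [1.0] * 50
ascending = pem_level(signal)

print(round(ascending[49], 3))  # ~0.006: the regularity is explained away
print(round(ascending[50], 3))  # ~4.005: the surprise ascends
```

Note what matters for the present argument: nothing in the error stream carries any information about `pem_level` itself. The machinery is invisible in its own outputs, which is the structural point the following paragraphs turn on.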

Although the free energy account is not an account of consciousness, it does seem to explain what Floridi (2011) calls the ‘one dimensionality of experience,’ the way, as he writes, “experience is experience, only experience, and nothing but experience” (296). If the brain is a certain kind of Bayesian causal inference engine, then one might expect the generative models it produces to be utterly lacking any explicit neurofunctional information, given the dedication of neural structure and function to minimizing environmental surprise. One might expect, in other words, that the causal structure of the brain will be utterly invisible to the brain, that it will remain, out of structural necessity, a dreaded unknown unknown–or unk-unk.

The brain, on this kind of prediction error minimization account, simply has to be 'blind' to itself. And this is where, far from being 'attractive,' as Hohwy suggests, the mere notion of 'introspection' modelled on prediction error minimization becomes exceedingly difficult to understand. Does introspection (or the plurality of processes we label as such) proceed via hierarchical prediction error minimization from sensory effects to build generative models of the causal structure of the human brain? Almost certainly not. Why? Because as a free energy minimizing mechanism (or suite of mechanisms), introspection would seem to be thoroughly hobbled for at least four different reasons:

  • 1) Functional dependence: On the free energy account, the human brain distills the causal structure of its environments from the sensory effects of that causal structure. One might, on this model, isolate two distinct vectors of causality, one, which might be called the 'lateral,' pertaining to the causal structure of the environment, and another, which might be called the 'medial,' pertaining to the causal structure of sensory inputs and the brain. As mentioned above, the brain can only model the lateral vector of environmental causal structure by neglecting the medial vector of its own causal structure. This neglect requires that the brain enjoy a certain degree of functional independence from the causal structure of its environment, simply because 'medial interference' will necessarily generate 'lateral noise,' thus rendering the causal structure of the environment more difficult, if not impossible, to model. The sheer interconnectivity of the brain, however, would likely render substantial medial interference difficult for any introspective device (or suite of devices) to avoid (see the sketch following this list).
  • 2) Structural immobility: Proximity complicates cognition. To get an idea of the kind of modelling constraints any neurally embedded introspective device would suffer, think of the difference between two anthropologists trying to understand a preliterate tribesman from the Amazon, the one ranging freely with her subject in the field, gathering information from a plurality of sources, the other locked with him in a coffin. Since it is functionally implicated–or brainbound–relative to its target, the ability of any introspective device (or suite of devices) to engage in 'active inference' would be severely restricted. On Friston's free energy account the passive reception of sensory input is complemented by behavioural outputs geared to maximizing information from a variety of positions within the organism's environment, thus minimizing the likelihood of 'perspectival' or angular illusions, false inferences due to the inability to test predictions from alternate angles and positions. Geocentrism is perhaps the most notorious example of such an illusion. Given structural immobility, one might suppose, any introspective device (or suite of devices) would suffer 'phenomenal' analogues to this and other illusions pertaining to limits placed on exploratory information-gathering.
  • 3) Cognitive resources: If we assume that human introspective capacity is a relatively recent evolutionary adaptation, we might expect any introspective device (or suite of devices) to exploit preexisting cognitive resources, which is to say, cognitive systems primarily adapted to environmental prediction error minimization. For instance, one might argue that both (1) and (2) fairly necessitate the truth of something like Carruthers' mindreading account, particularly if (as seems to be the case) mindreading antedates introspection. Functional dependence and structural immobility suggest that we are actually in a better position mechanically to accurately predict the behaviour of others than ourselves, as indeed a growing body of evidence indicates (Carruthers (2009) provides an excellent overview). Otherwise, given our apparent ability to attend to the whole of experience, does it make sense, short of severe evolutionary pressure, to presume the evolution of entirely novel cognitive systems adapted to the accurate modelling of second-order, medial information? It seems far more likely that access to this information was incremental across generations, and that it was initially selected for the degree to which it proved advantageous given our preexisting suite of environmentally oriented cognitive abilities.
  • 4) Target complexity: Any introspective device (or suite of devices) modelled on the PEM (or, for that matter, any other mechanistic) account must also cope with the sheer functional complexity of the human brain. It is difficult to imagine, particularly given (1), (2), and (3) above, how the tracking that results could avoid suffering out-and-out astronomical ‘resolution deficits’ and distortions of various kinds.
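
A crude way to dramatize (1), on assumptions entirely my own: let a 'brainbound' estimator share hardware with the process it measures, so that the act of measuring perturbs its target. Its estimates then carry a systematic bias for which it has no internal trace:

```python
import random

random.seed(1)

def target_process(interference=0.0):
    # Some neural quantity an introspective device is trying to track.
    return 2.0 + random.gauss(0, 0.1) + interference

def lateral_estimate(n=10_000):
    # An external observer: measurement does not perturb the target.
    return sum(target_process() for _ in range(n)) / n

def medial_estimate(n=10_000, coupling=0.5):
    # A brainbound observer: its own activity feeds back into the target,
    # so every reading is taken from an already-perturbed system.
    return sum(target_process(interference=coupling) for _ in range(n)) / n

print(round(lateral_estimate(), 2))  # ~2.0: unbiased
print(round(medial_estimate(), 2))   # ~2.5: biased, with nothing flagging the bias
```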

The picture these complicating factors paint is sobering. Any introspective device (or suite of devices) modelled on free energy Bayesian principles would be almost fantastically crippled: neurofunctionally embedded (which is to say, functionally entangled and structurally imprisoned) in the most complicated machinery known, accessing information for environmentally biased cognitive systems. Far from what Hohwy supposes, the problems of applicability and interference, when pursued through a free energy lens, at least, would seem to preclude introspection as a possibility.

But there is another option, one that would be unthinkable were it not for the pervasiveness and profundity of the unk-unk effect: that this is simply what introspection is, a kind of near blindness that we confuse for brilliant vision, simply because it’s the only vision we know.

The problem facing any mechanistic account of introspection can be generalized as the question of information rendered and cognitive system applied: to what extent is the information rendered insufficient, and to what extent is the cognitive system activated misapplied? This, I would argue, is the great fork in the FIG road. On the 'information rendered' side of the issue, informatic neglect means the assumption of sufficiency. We have no idea, as a rule, whether we have the information we need for effective deliberation or not. One need only consider the staggering complexity of the brain–complex enough to stymie a science that has puzzled through the origins of the universe in the meantime–to realize the astronomical amounts of information occluded by metacognition. On the 'cognitive system applied' side, informatic neglect means the assumption of universality. We have no idea, as a rule, whether we're misapplying 'introspection' or not. One need only consider the heuristic nature of human cognition, the fact that heuristics are adaptive and so matched to specific sets of problems, to realize that introspective misapplications, such as those Hohwy describes, are all but inevitable.

This is the turn where unknown unknowns earn their reputation for dread. Given the informatic straits of introspection, what are the chances that we, blind as we are, have anything approaching the kind of information we require to make accurate introspective judgments regarding the ‘nature’ of mind and consciousness? Given the heuristic limitations of introspection, what are the chances that we, blind as we are, somehow manage to avoid colouring far outside the cognitive lines? Is it fair to assume that the answer is, ‘Not good’?

Before continuing to consider this question in more detail, it’s worth noting how this issue of informatic availability and cognitive applicability becomes out-and-out unavoidable once you acknowledge the problem of the ‘dreaded unknown unknowns.’ If the primary symptom of patients suffering neuropathological neglect is the inability to cognize their cognitive deficits, then how do we know that we don’t suffer from any number of ‘natural’ forms of metacognitive neglect? The obvious answer is, We don’t. Could what we call ‘philosophical introspection’ simply be a kind of mitigated version of Anton’s Syndrome? Could this be the reason why we find consciousness so stupendously difficult to understand? Given millennia of assuming the best of introspection and finding only perplexity, perhaps, finally, the time has come to assume the worst, and to reconceptualize the problematic of consciousness in terms of privation, distortion, and neglect.

.

Conclusion: Introspection, Tangled and Blind

Cognitive science and philosophy of mind suffer from a profound scotoma, a blindness to the structural role blindness plays in our intuitive assumptions. As we saw in passing, FIG actually plays into this blindness, encouraging theorists and researchers to conceive the relationship between information and experience exclusively in what I called Accomplishment terms. If self-transparency is the ubiquitous assumption, then it follows that some mechanism possessing some 'self-transparency representation' must be responsible. Informatic neglect, however, allows us to see it in more parsimonious, structural terms, as a positive, discrete feature of human cognition possessing no discrete neurofunctional correlate. And this, I would argue, counts as a game-changer as far as FIG is concerned. The possibility that various discrete features of cognition and consciousness could be structural expressions of various kinds of informatic neglect not only rewrites the rules of FIG, it drastically changes the field of play.

That FIG needs to be sensitive to informatic neglect I take as uncontroversial. Informatic neglect seems to be one of those peculiar issues that everyone acknowledges but never quite sees, one that goes without saying because it goes unseen. Schwitzgebel (2012), for instance, provides a number of examples of the complications and ambiguities attending 'acts of introspection' to call attention to the artificial division of introspective and non-introspective processes, and in particular, to what might be called the 'transparency problem,' the way judgments about experience effortlessly slip into judgments about the objects/contents of experience. Given this welter of obscurities and complicating factors, not to mention the "massive interconnection of the brain," he advocates what might be called a 'tangled' account of introspective cognitive processes:

What we have, or seem to have, is a cognitive confluence of crazy spaghetti, with aspects of self-detection, self-shaping, self-fulfilment, spontaneous expression, priming and association, categorical assumptions, outward perception, memory, inference, hypothesis testing, bodily activity, and who only knows what else, all feeding into our judgments about current states of mind. To attempt to isolate a piece of this confluence as the introspective process – the one true introspective process, though influenced by, interfered with, supported by, launched or halted by, all the others – is, I suggest, like trying to find the one way in which a person makes her parenting decisions… 19

If you accept his conclusion as a mere possibility (or, as I would argue, a distinct probability), you implicitly accept much of what I'm saying here regarding informatic neglect. You accept that introspection could be massively plural while appearing to be unitary. You accept that introspection could be skewed and distorted while appearing to be the very rule. How could this be, short of informatic neglect? Recall Pronin's (2009) 'bias blind spots,' or Hohwy's (2011) mismatched 'plurality of processes.' How could it be that we swap between cognitive systems oblivious, with nothing, no intuition, no feel, to demarcate any transitions, let alone their applicability? As I hope should be clear, this question is simply a version of Carruthers' question from above: How could it be we once unanimously thought that introspection was incorrigible? Both questions ask the same thing of introspection, namely, To what extent are the various limits of introspection available to introspection?

The answer, quite simply, is that they are not. Introspection is out-and-out blind to its internal structure, its cognitive applicability, and its informatic insufficiencies–let alone to its neurofunctionality. To the extent that we fail to recognize these blindnesses, we are effectively introspective anosognosiacs, simply hoping that things are ‘just so.’ And this is just to say that informatic neglect, once acknowledged, constitutes a genuine theoretical crisis, for philosophy of mind as well as for cognitive science, insofar as their operational assumptions turn on interpretations of information gleaned, by hook or by crook, from ‘introspection.’

Of course, the 'problem of introspection' is nothing new (in certain circles, at least). The literature abounds with attempts to 'sanitize' introspective data for scientific consumption. Given this, one might wonder what distinguishes informatic neglect from the growing army of experimental confounds already identified. Perhaps the appropriate methodological precautions will allow us to quarantine the problem. Schooler and Schreiber (2004), for instance, offer one such attempt to 'massage' FIG in such a way as to preserve the empirical utility of introspection. After considering a variety of 'introspective failures,' they pin the bulk of the blame on what they call 'translation dissociations' between consciousness and meta-consciousness, the idea being that the researcher's demand, 'Please, introspect,' forces the subject to translate information available for introspection into action. They categorize three kinds of translation dissociations: 1) detection, where the 'signal' to be introspected is too weak or ambiguous; 2) transformation, where tasks "require intervening operations for which the system is ill-equipped" (32); and 3) substitution, where the information rendered has no connection to the information experimentally targeted. Once these 'myopias' are identified, the assumption is, methodologies can be designed to act as corrective lenses.

The problem that informatic neglect poses for FIG, however, is far and away more profound. To see this, one need only consider the dichotomy of 'consciousness versus metaconsciousness,' and the assumption that there is some fact of the matter pertaining to the first that is in principle accessible to the latter. The point isn't that no principled distinction can be made between the two, but rather that even if it can, the putative target, consciousness, is every bit as susceptible to informatic neglect as any metaconscious attempt to cognize it. The assumption is simply this: Information that finds itself globally broadcast or integrated will not, as a rule, include information regarding its 'limits.' Insofar as we can assume this, we can assume that informatic neglect isn't so much a 'problem of introspection' as it is a problem afflicting consciousness as a whole.

Our sketch of Friston’s Bayesian brain above demonstrated why this must be the case. Simply ask: What would the brain require to accurately model itself from within itself? On the PEM account, the brain is a dedicated causal inference engine, as it must be, given the difficulties of isolating the causal structure of its environment from sensory effects. This means that the brain has no means of modelling its own causal structure, short of either 1) analogizing from brains found in its environment, or 2) developing some kind of onboard ‘secondary inference’ system, one which, as was argued above, we should expect would face a number of dramatic informatic and cognitive obstacles. Functionally entangled with, structurally immured in, and heuristically mismatched to the most complicated machinery known, such a secondary inference system, one might expect, would suffer any number of deficits, all the while assuming itself incorrigible simply because it lacks any direct means of detecting otherwise.

Consciousness could very well be a cuckoo, an imposter with ends or functions all its own, and we would never be able to intuit otherwise. As we have seen, from the mechanistic standpoint this has to be a possibility. And given this possibility, informatic neglect plainly threatens all our assumptions. Once again: What would the brain require to model itself from within itself? What evolutionary demands were answered how? Bracket, as best you can, your introspective assumptions, and ask yourself how many ways these questions can be cogently answered. Far more than is friendly to our intuitive assumptions–these little blind men who wander out of the darkness telling fantastic and incomprehensible tales.

Even apparent boilerplate intuitions like efficacy become moot. The argument that the brain is generally efficacious is trivial. Given that the targets of introspective tracking are systematically related to the function of the brain, informatic neglect (and the illusion of sufficiency in particular) suggests that what we introspect or intuit will evince practical efficacy no matter how drastically its actual neural functions differ from or even contradict our manifest assumptions. Neurofunctional dissociations, as unknown unknowns, simply do not exist for metacognition. "[T]he absence of representation," as Dennett (1991) famously writes, "is not the same as the representation of absence" (359). Since the 'unk-unk effect' has no effect, cognition is stranded with assumptive sufficiency on the one hand, and the efficacy of our practices on the other. Informatic neglect, in other words, means that our manifest intuitions (not to mention our traditional assumptions) of efficacy are all but worthless. The question of the efficacy of what philosophers think they intuit or introspect is what it has always been: a question that only a mature neuroscience can resolve. And given that nothing biases intuition or introspection toward 'friendly' outcomes over unfriendly outcomes, we need to grapple with the fact that any future neuroscience is far more likely to be antagonistic to our intuitive, introspective assumptions than otherwise. There are far more ways for neurofunctionality to contradict our manifest and traditional assumptions than to rescue them. And perhaps this is precisely what we should expect, given the dismal history of traditional discourses once science colonizes their domain.

It is worth noting that a priori arguments simply beg the question, since it is entirely possible (likely probable given the free energy account) that evolution stranded us with suboptimal metacognitive capacities. One might simply ask, for instance, from where do our intuitions regarding the a priori come?

Evolutionary arguments, on the other hand, cut both ways. Everyone agrees that our general metacognitive capacities are adaptations of some kind, but adaptations for what? The accurate second-order appraisals of cognitive structure or ‘mind’ more generally? Seems unlikely. As far as we know, our introspective capacities could be the result of very specific evolutionary demands that required only gross distortions to be discharged. What need did our ancestors have for ‘theoretical descriptions of the mental’? Given informatic neglect (and the spectre of ‘Carruthers’ Syndrome’), evolutionary appeals would actually seem to count against the introspectionist, insofar as any story told would count as ‘just so,’ and thus serve to underscore the improbability of that story.

Again, the two questions to be asked are: What would the brain require to model itself from within itself? What evolutionary demands were answered how? Informatic neglect, the dreaded unknown unknown, allows us to see how many ways these questions can be answered. By doing so, it makes plain the dramatic extent of our anosognosia: the assumption that we had won the magical introspection lottery.

Short of default self-transparency, why would anyone trust in any intuitions incompatible with those that underwrite the life sciences? If it is the case that evolution stranded us with just enough second-order information and cognitive resources to discharge a relatively limited repertoire of processes, then perhaps the last two millennia of second-order philosophical perplexity should not surprise us. Maybe we should expect that science, when it finally provides a detailed picture of informatic availability and cognitive applicability, will be able to diagnose most traditional philosophical problematics as the result of various, unavoidable cognitive illusions pertaining to informatic depletion, distortion and neglect. Then, perhaps, we will at last be able to see the terrain of perennial philosophical problems as a kind of ‘free energy landscape’ sustained by the misapplication of various, parochial cognitive systems to insufficient information. Perhaps noocentrism, like biocentrism and geocentrism before it, will become the purview of historians, a third and final ‘narcissistic wound.’

.

References

Armor, D., Taylor, S. (1998). Situated optimism: specific outcome expectancies and self-regulation. In M. P. Zanna (ed.), Advances in Experimental Social Psychology. 30. 309-379. New York, NY: Academic Press.

Baars, B. (1988). A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press.

Bakker, S. (2012). The last magic show: a blind brain theory of the appearance of consciousness. Retrieved from http://www.academia.edu/1502945/The_Last_Magic_Show_A_Blind_Brain_Theory_of_the_Appearance_of_Consciousness

Bechtel, W. and Abrahamsen, A. (2005). Explanation: a mechanist alternative. Studies in History and Philosophy of Biological and Biomedical Sciences. 36. 421-441.

Bechtel, W. (2008). Mental Mechanisms: Philosophical Perspectives on Cognitive Neuroscience. New York, NY: Psychology Press.

Carruthers, P. (forthcoming). On knowing your own beliefs: a representationalist account. Retrieved from http://www.philosophy.umd.edu/Faculty/pcarruthers/On%20knowing%20your%20own%20beliefs.pdf * [In Nottelman (ed.). New Essays on Belief: Structure, Constitution and Content. Palgrave MacMillan]

Carruthers, P. (2011). The Opacity of Mind: An Integrative Theory of Self-Knowledge. Oxford: Oxford University Press.

Carruthers, P. (2009a). Introspection: divided and partly eliminated. Philosophy and Phenomenological Research. 80(1). 76-111.

Carruthers, P. (2009b). How we know our own minds: the relationship between mindreading and metacognition. Behavioral and Brain Sciences. 1-65. doi:10.1017/S0140525X09000545

Carruthers, P. (2008). Cartesian epistemology: is the theory of the self-transparent mind innate? Journal of Consciousness Studies. 15(4). 28-53.

Carruthers, P. (2006). The Architecture of the Mind: Massive Modularity and the Flexibility of Thought. Oxford: Clarendon Press.

Dennett, D. C. (2002). How could I be wrong? How wrong could I be? Journal of Consciousness Studies. 9. 1-4.

Dennett, D. C. (1991). Consciousness Explained. Boston, MA: Little Brown.

Dewey, J. (1958). Experience and Nature. New York, NY: Dover Publications.

Ehrlinger, J., Gilovich, T., and Ross, L. (2005). Peering into the bias blind spot: people’s assessments of bias in themselves and others. Personality and Social Psychology Bulletin, 31. 680-692.

Floridi, L. (2011). The Philosophy of Information. Oxford: Oxford University Press.

Friston, K. (2012). A free energy principle for biological systems. Entropy, 14. doi: 10.3390/e14112100.

Friston, K., Kilner, J., and Harrison, L. (2006). A free energy principle for the brain. Journal of Physiology – Paris, 100(1-3). 70-87.

Gigerenzer, G., Todd, P. and the ABC Research Group. (1999). Simple Heuristics that Make Us Smart. Oxford: Oxford University Press.

Heilman, K. and Harciarek, M. (2010). Anosognosia and anosodiaphoria of weakness. In G. P. Prigatano (ed.), The Study of Anosognosia. 89-112. Oxford: Oxford University Press.

Helweg-Larsen, M. and Shepperd, J. (2001). Do moderators of the optimistic bias affect personal or target risk estimates? A review of the literature. Personality and Social Psychology Review, 5. 74-95.

Hohwy, J. (2012). Attention and conscious perception in the hypothesis testing brain. Frontiers in Psychology, 3(96). 1-14. doi: 10.3389/fpsyg.2012.00096

Hohwy, J. (2011). Phenomenal variability and introspective reliability. Mind & Language, 26(3). 261-286.

Huang, G. T. (2008). Is this a unified theory of the brain? The New Scientist. (2658). 30-33.

Hurlburt, R. T. and Schwitzgebel, E. (2007). Describing Inner Experience? Proponent Meets Skeptic. Cambridge, MA: MIT Press.

Irvine, E. (2012). Consciousness as a Scientific Concept: A Philosophy of Science Perspective. New York, NY: Springer.

Kahneman, D. (2011, October 19). Don’t blink! The hazards of confidence. The New York Times. Retrieved from http://www.nytimes.com/2011/10/23/magazine/dont-blink-the-hazards-of-confidence.html?pagewanted=all&_r=0

Kahneman, D. (2011). Thinking, Fast and Slow. Toronto, ON: Doubleday Canada.

Lopez, J. K., and Fuxjager, M. J. (2012). Self-deception’s adaptive value: effects of positive thinking and the winner effect. Consciousness and Cognition. 21. 315-324.

Prigatano, G. and Wolf, T. (2010). Anton’s Syndrome and unawareness of partial or complete blindness. In G. P. Prigatano (ed.), The Study of Anosognosia. 455-467. Oxford: Oxford University Press.

Pronin, E. (2009). The introspection illusion. In M. P. Zanna (ed.), Advances in Experimental Social Psychology, 41. 1-68. Burlington: Academic Press.

Sa, W. C., West, R. F. and Stanovich, K. E. (1999). The domain specificity and generality of belief bias. Journal of Educational Psychology, 91(3). 497-510.

Schooler, J. W., and Schreiber, C. A. (2004). Experience, meta-consciousness, and the paradox of introspection. Journal of Consciousness Studies. 11. 17-39.

Schwitzgebel, E. (2012). Introspection, what? In D. Smithies & D. Stoljar (eds.), Introspection and Consciousness. Oxford: Oxford University Press.

Schwitzgebel, E. (2011a). Perplexities of Consciousness. Cambridge, MA: MIT Press.

Schwitzgebel, E. (2011b). Self-ignorance. In J. Liu and J. Perry (eds.), Consciousness and the Self. Cambridge: Cambridge University Press.

Schwitzgebel, E. (2008). The unreliability of naive introspection. Philosophical Review, 117(2). 245-273.

Sklar, A. Y., Levy, N., Goldstein, A., Mandel, R., Maril, A., and Hassin, R. R. (2012). Reading and doing arithmetic nonconsciously. Proceedings of the National Academy of Sciences. 1-6. doi: 10.1073/pnas.1211645109.

Stanovich, K. E. (1999). Who is Rational? Studies of Individual Differences in Reasoning. Mahwah, NJ: Lawrence Erlbaum Associates.

Stanovich, K. E. and Toplak, M. E. (2012). Defining features versus incidental correlates of Type 1 and Type 2 processing. Mind and Society. 11(1). 3-13.

Taylor, S. and Brown, J. (1988). Illusion and well-being: a social psychological perspective on mental health. Psychological Bulletin, 103. 193-210.

There are known knowns. (2012, November 7). In Wikipedia. Retrieved from http://en.wikipedia.org/wiki/There_are_known_knowns

Todd, P., Gigerenzer, G., and the ABC Research Group. (2012). What is ecological rationality? In Ecological Rationality: Intelligence in the World. 3-30. Oxford: Oxford University Press.

von Hippel, W., & Trivers, R. (2011). The evolution and psychology of self-deception. Behavioral and Brain Sciences, 34, 1–56.

Weinstein, E. A. and Kahn, R. L. (1955). Denial of Illness: Symbolic and Physiological Aspects. Springfield, IL: Charles C. Thomas.

Weinstein, N. (1980). Unrealistic optimism about future life events. Journal of Personality and Social Psychology, 39. 806-820.

Wigner, E. (1960). The unreasonable effectiveness of mathematics in the natural sciences. Richard Courant lecture in mathematical sciences delivered at New York University, May 11, 1959. Communications on Pure and Applied Mathematics. 13. 1-14. doi: 10.1002

Reengineering Dennett: Intentionality and the ‘Curse of Dimensionality’

by rsbakker

Aphorism of the Day: A headache is one of those rare and precious things that is both in your head and in your head.

.

In a few weeks time, Three Pound Brain will be featuring an interview with Alex Rosenberg, who has become one of the world’s foremost advocates of Eliminativism. If you’re so inclined, now would be a good time to pick up his Atheist’s Guide to Reality, which will be the focus of much of the interview.

The primary reason I’m mentioning this has to do with a comment of Alex’s regarding Dennett’s project in our back and forth, how he “has long sought an account of intentionality that constructs it out of nonintentional resources in the brain.” This made me think of a paper of Dennett’s entitled “A Route to Intelligence: Oversimplify and Self-Monitor” that is only available on his website, and which he has cryptically labelled, ‘NEVER-TO-APPEAR PAPERS BY DANIEL DENNETT.’ Now maybe it’s simply a conceit on my part, given that pretty much everything I’ve written falls under the category of ‘never-to-appear,’ but this quixotic piece has been my favourite Dennett article ever since I first stumbled upon it. In the note that Dennett appends to the beginning, he explains the provenance of the paper, how it was written for a volume that never coalesced, but he leaves its ‘never-to-be-published’ fate to the reader’s imagination. (If I had to guess, I would say it has to do with the way the piece converges on what is now a dated consideration of the frame problem).

Now in this paper, Dennett does what he often does (most recently, in this talk), which is to tell a ‘design process’ story that begins with the natural/subpersonal and ends with the intentional/personal. The thing I find so fascinating about this particular design process narrative is the way it outlines, albeit in a murky form, what I think actually is an account of how intentionality arises ‘out of the nonintentional resources of the brain,’ or the Blind Brain Theory. What I want to do is simply provide a close reading of the piece (the first of its kind, given that no one I know of has referenced this piece apart from Dennett himself), suggesting, once again, that Dennett was very nearly on the right track, but that he simply failed to grasp, in the proper way, the explanatory opportunities his account affords. “A Route to Intelligence” fairly bowled me over when I first read it a few months ago, given the striking way it touches on so many of the themes I’ve been developing here. So what follows, then, begins with a consideration of the way BBT itself follows from certain, staple observations and arguments belonging to Dennett’s incredible oeuvre. More indirectly, it will provide a glimpse of how the mere act of conceptualizing a given dynamic can enable theoretical innovation.

Dennett begins with the theme of avoidance. He asks us to imagine that scientists discover an asteroid on a collision course with earth. We’re helpless to stop it, so the most we can do is prepare for our doom. Then, out of nowhere, a second asteroid appears, striking the first in the most felicitous way possible, saving the entire world. It seems like a miracle, but of course the second asteroid was always out there, always hurtling on its auspicious course. What Dennett wants us to consider is the way ‘averting’ or ‘preventing’ is actually a kind of perspectival artifact. We only assumed the initial asteroid was going to destroy earth because of our ignorance of the subsequent: “It seems appropriate to speak of an averted or prevented catastrophe because we compare an anticipated history with the way things turned out and we locate an event which was the “pivotal” event relative to the divergence between that anticipation and the actual course of events, and we call this the “act” of preventing or avoiding” (“A Route to Intelligence,” 3).

In BBT terms, the upshot of this fable is quite clear: Ignorance–or better, the absence of information–has a profound, positive role to play in the way we conceive events. Now coming out of the ‘Continental’ tradition this is no great shakes: one only need think of Derrida’s ‘trace structure’ or Adorno’s ‘constellations.’ But as Dennett has found, this mindset is thoroughly foreign to most ‘Analytic’ thinkers. In a sense, Dennett is providing a peculiar kind of explanation by subtraction, bidding us to understand avoidance as the product of informatic inaccessibility. Here it’s worth calling attention to what I’ve been calling the ‘only game in town effect,’ or sufficiency. Avoidance may be the artifact of information scarcity, but we never perceive it as such. Avoidance, rather, is simply avoidance. It’s not as if we catch ourselves after the fact and say, ‘Well, it only seemed like a close call.’

Academics spend so much time attempting to overcome the freshman catechism, ‘It-is-what-it-is!’ that they almost universally fail to consider how out-and-out peculiar it is, even as it remains the ‘most natural thing in the world.’ How could ignorance, of all things, generate such a profound and ubiquitous illusion of epistemic sufficiency? Why does the appreciation of contextual relativity, the myriad ways our interpretations are informatically constrained, count as a kind of intellectual achievement?

Sufficiency can be seen as a generalization of what Daniel Kahneman refers to as WYSIATI (‘What You See Is All There Is’), the way we’re prone to confuse the information we have for all the information required. Lacking information regarding the insufficiency of the information we have, such as the existence of a second ‘saviour’ asteroid, we assume sufficiency, that we are doomed.  Sufficiency is the assumptive default, which is why undergrads, who have yet to be exposed to information regarding the insufficiency of the information they have, assume things like ‘It-is-what-it-is.’

The concept of sufficiency (and its flip-side, asymptosis) is of paramount importance. It explains why, for instance, experience is something that can be explained via subtraction. Dennett’s asteroid fable is a perfect case in point: catastrophe was ‘averted’ because we had no information regarding the second asteroid. If you think about it, we regularly explain one another’s experiences, actions, and beliefs by reference to missing information, anytime we say something of the form, So-and-so didn’t x (realize, see, etc.) such-and-such, in fact. Implicit in all this talk is the presumption of sufficiency, the ‘It-is-what-it-is! assumption,’ as well as the understanding that missing information can make no difference–precisely what we should expect of a biomechanical brain. I’ll come back to all this in due course, but the important thing to note, at this juncture at least, is that Dennett is arguing (though he would likely dispute this) that avoidance is a kind of perspectival illusion.

Dennett’s point is that the avoidance world-view is the world-view of the rational deliberator, one where prediction, the ability to anticipate environmental changes, is king. Given this, he asks:

Suppose then that one wants to design a robot that will live in the real world and be capable of making decisions so that it can further its interests–whatever interests we artificially endow it with. We want in other words to design a foresightful planner. How must one structure the capacities–the representational and inferential or computational capacities–of such a being? 4

The first design problem that confronts us, he suggests, involves the relationship between response-time, reliability, and environmental complexity.

No matter how much information one has about an issue, there is always more that one could have, and one can often know that there is more that one could have if only one were to take the time to gather it. There is always more deliberation possible, so the trick is to design the creature so that it makes reliable but not foolproof decisions within the deadlines naturally imposed by the events in its world that matter to it. 4

Our design has to perform a computational balancing act: Since the well of information has no bottom, and the time constraints are exacting, our robot has to be able to cherry-pick only the information it needs to make rough and reliable determinations: “one must be designed from the outset to economize, to pass over most of the available information” (5). This is the problem now motivating work in the field of ecological rationality, which looks at human cognition as a ‘toolbox’ filled with a variety of heuristics, devices adapted to solve specific problems in specific circumstances–‘ecologies’–via the strategic neglect of various kinds of information. On the BBT account, the brain itself is such a heuristic device, a mechanism structurally adapted to walk the computational high-wire between behavioural efficiency and environmental complexity.
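
Dennett’s ‘reliable but not foolproof decisions within deadlines’ has a standard algorithmic shape, sketched below on numbers I have simply made up: an evidence accumulator forced to commit when time runs out, trading reliability for speed:

```python
import random

random.seed(2)

def decide(world_bias, deadline):
    # Accumulate noisy evidence until the deadline, then commit to
    # whichever hypothesis the running total favours.
    total = 0.0
    for _ in range(deadline):
        total += world_bias + random.gauss(0, 1.0)  # weak signal, heavy noise
    return total > 0  # True iff the (actually positive) bias is detected

def reliability(deadline, trials=2000):
    return sum(decide(0.2, deadline) for _ in range(trials)) / trials

for deadline in (1, 5, 25, 125):
    print(deadline, round(reliability(deadline), 2))
# Reliability climbs with deliberation time (~0.58, ~0.67, ~0.84, ~0.99)
# but never reaches 1.0; the design problem is choosing the deadline,
# not eliminating it.
```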

And this indeed is what Dennett supposes:

How then does one partition the task of the robot so that it is apt to make reliable real time decisions? One thing one can do is declare that some things in the world of the creature are to be considered fixed; no effort will be expended trying to track them, to gather more information on them. The state of these features is going to be set down in axioms, in effect, but these are built into the system at no representational cost. One simply designs the system in such a way that it works well provided the world is as one supposes it always will be, and makes no provision for the system to work well (“properly”) under other conditions. The system as a whole operates as if the world were always going to be one way, so that whether the world really is that way is not an issue that can come up for determination. 5

So, for instance, the structural fact that the brain is a predictive system simply reflects the fundamental fact that our environments not only change in predictable ways, but allow for systematic interventions given prediction. The most fundamental environmental facts, in other words, will be structurally implicit in our robot, and so will not require modelling. Others, meanwhile, will “be declared as beneath notice even though they might in principle be noticeable were there any payoff to be gained thereby” (5). As he explains:

The “grain” of our own perception could be different; the resolution of detail is a function of our own calculus of wellbeing, given our needs and other capacities. In our design, as in the design of other creatures, there is a trade-off in the expenditure of cognitive effort and the development of effectors of various sorts. Thus the insectivorous bird has a trade-off between flicker fusion rate and the size of its bill. If it has a wider bill it can harvest from a larger volume in a single pass, and hence has a greater tolerance for error in calculating the location of its individual prey. 6

Since I’ve been arguing for quite some time that we need to understand the appearance of consciousness as a kind of ‘flicker fusion writ large,’ I can tell you my eyebrows fairly popped off my forehead reading this particular passage. Dennett is isolating two classes of information that our robot will have no cause to model: environmental information so basic that it’s written into the structural blueprint or ‘fixed’, and environmental information so irrelevant that it is ignored outright or ‘beneath notice.’ What remains is to consider the information our robot will have cause to model:

If then some of the things in the world are considered fixed, and others are considered beneath notice, and hence are just averaged over, this leaves the things that are changing and worth caring about. These things fall roughly into two divisions: the trackable and the chaotic. The chaotic things are those things that we cannot routinely track, and for our deliberative purposes we must treat them as random, not in the quantum mechanical sense, and not even in the mathematical sense (e.g., as informationally incompresssible), but just in the sense of pseudo-random. These are features of the world which, given the expenditure of cognitive effort the creature is prepared to make, are untrackable; their future state is unpredictable. 6-7

Signal and noise. If we were to design our robot along, say, the lines of a predictive processing account of the brain, its primary problem would be one of deriving the causal structure of its environment on the basis of sensory effects. As it turns out, this problem (the ‘inverse problem’) is no easy one to solve. We evolved sets of specialized cognitive tools, heuristics with finite applications, for precisely this reason. The ‘signal to noise ratio’ for any given feature of the world will depend on the utility of the signal versus the computational expense of isolating it.
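
The inverse problem reduces to a single application of Bayes’ theorem–the numbers below are schematic inventions of my own, standing in for nothing in the predictive processing literature. Many distinct causes are compatible with one sensory effect, so the most any system can do is weigh them:

```python
# Schematic inverse problem (invented numbers): three possible causes,
# one sensory effect. Stranded with the effect, the system can only
# compute a posterior over the causes that might have produced it.

priors = {"cat": 0.6, "raccoon": 0.3, "burglar": 0.1}
likelihood_of_rustle = {"cat": 0.5, "raccoon": 0.9, "burglar": 0.7}

evidence = sum(priors[c] * likelihood_of_rustle[c] for c in priors)
posterior = {c: priors[c] * likelihood_of_rustle[c] / evidence for c in priors}

print({c: round(p, 2) for c, p in posterior.items()})
# {'cat': 0.47, 'raccoon': 0.42, 'burglar': 0.11} -- the effect
# underdetermines its cause; no amount of staring at the rustle settles it.
```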

So far so good. Dennett has provided four explicitly informatic categories–fixed, beneath notice, trackable, and chaotic–‘design decisions’ that will enable our robot to successfully cope with the complexities confronting it. This is where Dennett advances a far more controversial claim: that the ‘manifest image’ belonging to any species is itself an artifact of these decisions.

Now in a certain sense this claim is unworkable (and Dennett realizes as much) given the conceptual interdependence of the manifest image and the mental. The task, recall, was to build a robot that could tackle environmental complexity, not become self-aware. But his insight here stands tantalizingly close to BBT, which explains our blinkered metacognitive sense of ‘consciousness’ and ‘intentionality’ in the self-same terms of informatic access.

And things get even more interesting, first with his consideration of how the scientific image might be related to the manifest image thus construed:

The principles of design that create a manifest image in the first place also create the loose ends that can lead to its unraveling. Some of the engineering shortcuts that are dictated if we are to avoid combinatorial explosion take the form of ignoring – treating as if non-existent – small changes in the world. They are analogous to “round off error” in computer number-crunching. And like round-off error, their locally harmless oversimplifications can accumulate under certain conditions to create large errors. Then if the system can notice the large error, and diagnose it (at least roughly), it can begin to construct the scientific image. 8

And then with his consideration of the constraints facing our robot’s ability to track and predict itself:

One of the pre-eminent varieties of epistemically possible events is the category of the agent’s own actions. These are systematically unpredictable by it. It can attempt to track and thereby render predictions about the decisions and actions of other agents, but (for fairly obvious and well-known logical reasons, familiar in the Halting Problem in computer science, for instance) it cannot make fine-grained predictions of its own actions, since it is threatened by infinite regress of self-monitoring and analysis. Notice that this does not mean that our creature cannot make some boundary-condition predictions of its own decisions and actions. 9

Because our robot possesses finite computational resources in an informatically bottomless environment, it must neglect information, and so must be heuristic through and through. Given that heuristics possess limited applicability in addition to limited computational power, it will perforce continually bump into problems it cannot solve. This will be especially the case when it comes to the problem of itself–for the very reasons that Dennett adduces in the above quote. Some of these insoluble problems, we might imagine, it will be unable to see as problems, at least initially. Once it becomes aware of its informatic and cognitive limitations, however, it could begin seeking supplementary information and techniques, ways around its limits, allowing the creation of a more ‘scientific’ image.
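
The regress Dennett gestures at can be made vivid with a toy of my own construction (nothing like his argument’s formal machinery): a predictor that must simulate whatever agent it predicts, applied to an agent that consults the predictor before acting:

```python
def predict(agent, depth):
    # To predict an agent, simulate it. If the agent consults predict()
    # about itself before acting, the simulation must simulate the
    # simulation, and so on without end.
    return agent(depth + 1)

def me(depth):
    my_next_act = predict(me, depth)  # I act on a prediction of my act...
    return my_next_act                # ...which never gets computed

try:
    me(0)
except RecursionError:
    print("fine-grained self-prediction bottoms out in regress")

# A real system must truncate: coarse, boundary-condition self-models
# are the only kind it can afford -- which is Dennett's point.
```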

Now Dennett is simply brainstorming here–a fact that likely played some role in his failure to pursue its publication. But “A Route to Intelligence” stuck with him as well, enough for him to reference it on a number of occasions, and to ultimately give it a small internet venue all of its own. I would like to think this is because he senses (or at least once sensed) the potential of this general line of thinking.

What makes this paper so extraordinary, for me, is the way he explicitly begins the work of systematically thinking through the informatic and cognitive constraints facing the human brain, both with respect to its attempts to cognize its environment and itself. For his part, Dennett never pursues this line of speculative inquiry in anything other than a piecemeal and desultory way. He never thinks through the specifics of the informatic privation he discusses, and so, despite many near encounters, never finds his way to BBT. And it is this failure, I want to argue, that makes his pragmatic recovery of intentionality, the ‘intentional stance,’ seem feasible.

As it so happens, the import and feasibility of Dennett’s ‘intentional stance’ have taken a twist of late, thanks to some of his more recent claims. In “The Normal Well-tempered Mind,” for instance, he claims that he was (somewhat) mistaken in thinking that “the way to understand the mind is to take it apart into simpler minds and then take those apart into still simpler minds until you get down to minds that can be replaced by a machine,” the problem being that “each neuron, far from being a simple switch, is a little agent with an agenda, and they are much more autonomous and much more interesting than any switch.” For all his critiques of original intentionality in the heyday of computationalism, Dennett’s intentional apologetics have become increasingly strident and far-reaching. In what follows I will argue that his account of the intentional stance, and the ever-expanding range of interpretative applicability he accords it, actually depends on his failure to think through the informatic straits of the human brain. If he had, I want to suggest, he would have seen that intentionality, like avoidance, is best explained in terms of missing information, which is to say, as a kind of perspectival illusion.

[Diagram: cube]

Now of course all this betrays more than a little theoretical vanity on my part, the assumption that Dennett has to be peering, stumped, at some fragmentary apparition of my particular inferential architecture. But this presumption stands high among my motives for writing this post. Why? Because for the life of me I can’t see any way around those inferences–and I distrust this ‘only game in town’ feeling I have.

But I’ll be damned if I can find a way out. As I hope to show, as soon as you begin asking what cognitive systems are accessing what information, any number of dismal conclusions seem to directly follow. We literally have no bloody clue what we’re talking about when we begin theorizing ‘mind.’

To see this, it serves to diagram the different levels of information privation Dennett considers:

[Diagram: Levels of information privation]

The evolutionary engineering problem, recall, is one of finding some kind of ‘golden informatic mean,’ extracting only the information required to maximize fitness given the material and structural resources available and nothing else. This structurally constrained select-and-neglect strategy is what governs the uptake of information from the sum of all information available for cognition and thence to the information available for metacognition. The Blind Brain Theory is simply an attempt to think this privation through in a principled and exhaustive way, to theorize what information is available to what cognitive systems, and the kinds of losses and distortions that might result.

Information is missing. No one I know of disputes this. Each of these ‘pools’ is the result of drastic reductions in dimensionality (number of variables). Neuroscientists commonly refer to something called the ‘Curse of Dimensionality,’ the way the difficulty of finding statistical patterns in data increases exponentially as the data’s dimensionality increases. Imagine searching for a ring on a 100m length of string, which is to say, in one dimension. No problem. Now imagine searching for that ring in two dimensions, a 100m by 100m square. More difficult, but doable. Now imagine trying to find that ring in three dimensions, in a 100m by 100m by 100m cube. The greater the dimensionality, the greater the volume, and the more difficult it becomes to extract statistical relationships, whether you happen to be a neuroscientist trying to decipher relations between high-dimensional patterns of stimuli and neural activation, or a brain attempting to forge adaptive environmental relations.
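To put rough numbers to the ring example, here is a minimal sketch of my own (an illustration, not anything drawn from the literature cited here). It computes the chance that a single blind probe lands within one metre of the ring: the volume of a ball over the volume of the search space.

```python
import math

def single_probe_hit_rate(dim, reach=1.0, side=100.0):
    """Chance that one blind probe lands within `reach` metres of a ring
    hidden uniformly in a `side`-metre hypercube of dimension `dim`:
    the volume of a dim-ball divided by the volume of the dim-cube."""
    ball_volume = math.pi ** (dim / 2) / math.gamma(dim / 2 + 1) * reach ** dim
    return ball_volume / side ** dim

for dim in (1, 2, 3, 10):
    print(f"{dim}D: roughly 1 hit per {1 / single_probe_hit_rate(dim):,.0f} probes")
# 1D: 1 in 50; 2D: 1 in ~3,183; 3D: 1 in ~238,732; 10D: 1 in ~4 x 10^19.
```

Each added dimension multiplies the search burden by nearly two orders of magnitude: the Curse in miniature.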

For example, ‘semantic pointers,’ Eliasmith’s primary innovation in creating SPAUN (the recent artificial brain simulation that made headlines around the world), are devices that maximize computational efficiency by collapsing or inflating dimensionality according to the needs of the system. As he and his team write:

Compression is functionally important because low-dimensional representations can be more efficiently manipulated for a variety of neural computations. Consequently, learning or defining different compression/decompression operations provides a means of generating neural representations that are well suited to a variety of neural computations. “A Large-Scale Model of the Functioning Brain,” 1202

The human brain is rife with bottlenecks, which is why Eliasmith’s semantic pointers represent the signature contribution they do, a model for how the brain potentially balances its computational resources against the computational demands facing it. You could say that the brain is an evolutionary product of the Curse, since it is in the business of deriving behaviourally effective ‘representations’ from the near bottomless dimensionality of its environment.

Although Dennett doesn’t reference the Curse explicitly, it’s implicit in his combinatoric characterization of our engineering problem, the way our robot has to suss out adaptive patterns in the “combinatorial explosion,” as he puts it, of environmental variables. Each of the information pools he touches on, in other words, can be construed as solutions to the Curse of Dimensionality. So when Dennett famously writes:

I claim that the intentional stance provides a vantage point for discerning similarly useful patterns. These patterns are objective–they are there to be detected–but from our point-of-view they are not out there entirely independent of us, since they are patterns composed partly of our own “subjective” reactions to what is out there; they are the patterns made to order for our narcissistic concerns. The Intentional Stance, “Real Patterns, Deeper Facts, and Empty Questions,” 39

Dennett is discussing a problem solved. He recognizes that the solution is parochial, or ‘narcissistic,’ but it remains, he will want to insist, a solution all the same, a powerful way for us (or our robot) to predict, explain, and manipulate our natural and social environments as well as ourselves. Given this efficacy, and given that the patterns themselves are real, even if geared to our concerns, he sees no reason to give up on intentionality.

On BBT, however, the appeal of this argument is largely an artifact of its granularity. Though Dennett is careful to reference the parochialism of intentionality, he does not do it justice. In “The Last Magic Show,” I turned to the metaphor of shadows time and again, trying to capture something of the information loss involved in consciousness, unaware that researchers, trying to understand how systems preserve functionality despite massive reductions of dimensionality, had devised mathematical tools, ‘random projections,’ that take the metaphor quite seriously:

To understand the central concept of a random projection (RP), it is useful to think of the shadow of a wire-frame object in three-dimensional space projected onto a two dimensional screen by shining a light beam on the object. For poorly chosen angles of light, the shadow may lose important information about the wire-frame object. For example, if the axis of light is aligned with any segment of wire, that entire length of wire will have a single point as its shadow. However, if the axis of light is chosen randomly, it is highly unlikely that the same degenerate situation will occur; instead, every length of wire will have a corresponding nonzero length of shadow. Thus the shadow, obtained by this RP, generically retains much information about the wire-frame object. (Ganguli and Sompolinsky, “Sparsity and Dimensionality,” 487)
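The wire-frame claim is easy to check numerically. The following is a minimal sketch of my own (not Ganguli and Sompolinsky’s code): project a unit cube’s wire-frame through two randomly drawn axes and confirm that every edge keeps a nonzero shadow, whereas an axis-aligned ‘light’ flattens the four edges running along the third axis to points.

```python
import itertools
import math
import random

# Vertices of a unit wire-frame cube; edges join vertices differing
# in exactly one coordinate.
vertices = list(itertools.product((0.0, 1.0), repeat=3))
edges = [(a, b) for a, b in itertools.combinations(vertices, 2)
         if sum(x != y for x, y in zip(a, b)) == 1]

def shadow_lengths(axes):
    """Length of each edge's 2-D shadow under linear projection onto `axes`."""
    def project(p):
        return [sum(a * x for a, x in zip(axis, p)) for axis in axes]
    return [math.dist(project(a), project(b)) for a, b in edges]

# A poorly chosen angle of light: project straight onto the x-y plane,
# so the four edges running along z each collapse to a single point.
aligned = shadow_lengths([(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)])
print(sum(1 for length in aligned if length == 0.0))  # 4 degenerate edges

# A random projection: with probability one, every edge keeps a nonzero
# shadow, generically preserving the wire-frame's structure.
random_axes = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(2)]
print(min(shadow_lengths(random_axes)) > 0.0)  # True (almost surely)
```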

On the BBT account, mind is what the Curse of Dimensionality looks like from the inside. Consciousness and intentionality, as they appear to metacognition, can be understood as concatenations of idiosyncratic low-dimensional ‘projections.’ Why idiosyncratic? Because when it comes to ‘compression,’ evolution isn’t so much interested in veridical conservation as in scavenging effective information. And what counts as ‘effective information’? Whatever facilitates genetic replication–period. In terms of the wire-frame analogy, the angle may be poorly chosen, the projection partial, the light exceedingly dim, etc., and none of this would matter so long as the information projected discharged some function that increased fitness. One might suppose that only veridical compression will serve in some instances, but to assume that it must serve in all instances is simply to misunderstand evolution. Think of ‘lust’ and the biological need to reproduce, or ‘love’ and the biological need to pair-bond. Evolution is opportunistic: all things being equal, the solutions it hits upon will be ‘quick and dirty,’ and utterly indifferent to what we intuitively assume (let alone want) to be the case.

Take memory research as a case in point. In the Theaetetus, Plato famously characterized memory as an aviary, a general store from which different birds, memories, could be correctly or incorrectly retrieved. It wasn’t until the late 19th century, when Hermann Ebbinghaus began tracking his own recall over time in various conditions, that memory became the object of scientific investigation. From there the story is one of greater and greater complication. William James, of course, distinguished between short- and long-term memory. Skill memory was distinguished from long-term memory, which Endel Tulving famously decomposed into episodic and semantic memory. Skill memory, meanwhile, was recognized as one of several forms of nondeclarative or implicit memory, including classical conditioning, non-associative learning, and priming, which would itself be decomposed into perceptual and conceptual forms. As Plato’s grand aviary found itself progressively more subdivided, researchers began to question whether memory was actually a discrete system or rather part and parcel of some larger cognitive network, and thus not the distinct mental activity assumed by the tradition. Other researchers, meanwhile, took aim at the ‘retrieval assumption,’ the notion that memory is primarily veridical, adducing evidence that declarative memory is often constructive, more an attempt to convincingly answer a memory query than to reconstruct ‘what actually happened.’

The moral of this story is as simple as it should be sobering: the ‘memory’ arising out of casual introspection (monolithic and veridical) and the memory arising out of scientific research (fractionate and confabulatory) are at drastic odds, to the point where some researchers suggest the term ‘memory’ is itself deceptive. Memory, like so many other cognitive capacities, seems to be a complex of specialized capacities arising out of non-epistemic and epistemic evolutionary pressures. But if this is the case, one might reasonably wonder how Plato could have gotten things so wrong. Well, obviously the information available to metacognition (in its ancient Greek incarnation) falls far short of the information required to accurately model memory. But why would this be? Well, apparently forming accurate metacognitive models of memory was not something our ancestors needed to survive and reproduce.

We have enough metacognitive access to isolate memory as a vague capacity belonging to our brains and nothing more. The patterns accessed, in other words, are real patterns, but it seems more than a little hinky to take the next step and say they are “made to order for our narcissistic concerns.” For one, whatever those ‘concerns’ happen to be, they certainly don’t seem to involve any concern with self-knowledge, particularly when the ‘concerns’ at issue are almost certainly not the conscious sort–which is to say, concerns that could be said to be ‘ours’ in any straightforward way. The concerns, in fact, are evolutionary: metacognition, for reasons Dennett touched on above and that I have considered at length elsewhere, is a computational nightmare, more than enough to necessitate the drastic informatic compromises that underwrite Plato’s Aviary.

And as memory goes, I want to suggest, so goes intentionality. The fact is, intentional patterns are not “made to order for our narcissistic concerns.” That claim, while appearing modest, characterizes intentionality as an instrument of our agency, and so as ‘narcissistic’ in a personal sense. Intentional patterns, rather, are ad hoc evolutionary solutions to various social or natural environmental problems, some perhaps obvious, others obscure. And this simply refers to the ‘patterns’ accessed by the brain. There is the further question of metacognitive access, and the degree to which the intentionality we all seem to think we have might not be better explained as a kind of metacognitive illusion pertaining to neglect.

Asymptotic. Bottomless. Rules hanging with their interpretations.

All the low-dimensional projections bridging pool to pool are evolutionary artifacts of various functional requirements, ‘fixes,’ multitudes of them, to some obscure network of ancestral, environmental problems. They are parochial, not to our ‘concerns’ as ‘persons,’ but to the circumstances that saw them selected to the exclusion of other possible fixes. To return to Dennett’s categories, the information ‘beneath notice,’ or neglected, may be out-and-out crucial for understanding a given capacity, such as ‘memory’ or ‘agency’ or what have you, even though metacognitive access to this information was irrelevant to our ancestors’ survival. Likewise, what is ‘trackable’ may be idiosyncratic, information suited to some specific, practical cognitive function, and therefore entirely incompatible with and so refractory to theoretical cognition–philosophy as the skeptics have known it.

Why do we find the notion of a fractionate, non-veridical memory surprising? Because we assume otherwise, namely, that memory is whole and veridical. Why do we assume otherwise? Because informatic neglect leads us to mistake the complex for the simple, the special purpose for the general purpose, and the tertiary for the primary. Our metacognitive intuitions are not reliable; what we think we do or undergo and what the sciences of the brain reveal need only be loosely connected. Why does it seem so natural to assume that intentional patterns are “made to order for our narcissistic concerns”? Well, for the same reason it seems so natural to assume that memory is monolithic and veridical: in the absence of information to the contrary, our metacognitive intuitions carry the day. Intentionality becomes a personal tool, as opposed to a low-dimensional projection accessed via metacognitive deliberation (for metacognition), or a heuristic device possessing a definite evolutionary history and a limited range of applications (for cognition more generally).

So to return to our diagram of ‘information pools’:

[Diagram: Levels of information privation]

we can clearly see how the ‘Curse of Dimensionality’ is compounded when it comes to theoretical metacognition. Thus the ‘blind brain’ moniker. BBT argues that the apparent perplexities of consciousness and intentionality that have bedevilled philosophy for millennia are artifacts of cognitive and metacognitive neglect. It agrees with Dennett that the relationship between all these levels is an adaptive one, that low-dimensional projections must earn their keep, but it blocks the assumption that we are the keepers, seeing this intuition as the result of metacognitive neglect (sufficiency, to be precise). It’s no coincidence, it argues, that all intentional concepts and phenomena seem ‘acausal,’ both in the sense of seeming causeless, and in the sense of resisting causal explanation. Metacognition has no access whatsoever to the neurofunctional context of any information broadcast or integrated in consciousness, and so finds itself ‘encapsulated,’ stranded with a profusion of low-dimensional projections that it cannot cognize as such, since doing so would require metacognitive access to the very neurofunctional contexts that are occluded. Our metacognitive sense of intentionality, in other words, depends upon making a number of clear mistakes–much as in the case of memory.

The relations between ‘pools,’ it should be noted, are not ‘vehicles’ in the sense of carrying ‘information about.’ All the functioning components in the system would have to count as ‘vehicles’ if that were the case, insofar as the whole is required for the information that does find itself broadcast or integrated. The ‘information about’ part is simply an artifact of what BBT calls medial neglect, the aggregate blindness of the system to its ongoing operations. Since metacognition can only neglect the neural functions that make a given conscious experience possible–since it is itself invisible to itself–it confuses an astronomically complex systematic effect for a property belonging to that experience.

The very reason theorists like Dretske or Fodor insist on semantic interpretations of information is the same reason those interpretations will perpetually resist naturalistic explanation: they are attempting to explain a kind of ‘perspectival illusion,’ the way the information broadcast or integrated exhausts the information available for deliberative cognition, so generating the ‘only-game-in-town-effect’ (or sufficiency). ‘Thoughts’ (or the low-dimensional projections we confuse for them) must refer to (rather than reliably covary with) something in the world because metacognition neglects all the neurofunctional and environmental machinery of that covariance, leaving only Brentano’s famous posit, intentionality, as the ‘obvious’ explanandum–one rendered all the more ‘obvious’ by thousands of largely fruitless years of intentional conceptual toil.

Aboutness is magic, in the sense that it requires the neglect of information to be ‘seen.’ It is an illusion of introspection, a kind of neural camera obscura effect, ‘obvious’ only because metacognition is a captive of the information it receives. This is why our information pool diagram can be so easily retooled to depict the prevailing paradigm in the cognitive sciences today:

[Diagram: Levels of intentionality]

The vertical arrows represent medial functions (sound, light, neural activity) that are occluded and so are construed acausally. The ‘mind’ (or the network of low-dimensional projections we confuse as such) is thought to be ‘emergent from’ or ‘functionally irreducible to’ the brain, which possesses both conscious and nonconscious ‘representations of’ or ‘intentional relations to’ the world. No one ever pauses to ask what kind of cognitive resources the brain could bring to bear upon itself, what it would take to reliably model the most complicated machinery known from within that machinery using only cognitive systems adapted to modelling external environments. The truth of the brain, they blithely assume, is available to the brain in the form of the mind.

Or thought.

But this is little more than wishful ‘thinking,’ as the opaque, even occult, nature of the intentional concepts used might suggest. Whatever emergence the brain affords, why should metacognition possess the capacity to model it, let alone be it? Whatever function the broadcasting or integration of a given low-dimensional projection provides, why should metacognition, which is out-and-out blind to neurofunctionality, possess the capacity to reliably model it, as opposed to doing what cognition always does when confronted with insufficient information it cannot flag as insufficient: leap to erroneous conclusions?

All of this is to say that the picture is at once clearer and less sunny than Dennett’s ultimately abortive interrogation of information privation would lead us to believe. Certainly in an everyday sense it’s obvious that we take perspectives, views, angles, standpoints, and stances vis-à-vis various things. Likewise, it seems obvious that we have two broad ways in which to explain things, either by reference to what causes an event, or by virtue of what rationalizes an event. As a result, it seems natural to talk of two basic explanatory perspectives or stances, one pertaining to the causes of things, the other pertaining to the reasons for things.

The question is one of how far we can trust our speculations regarding the latter beyond this platitudinous observation. One might ask, for instance, if intentionality is a heuristic, which is to say, a specialized problem solver, then what are its conditions of applicability? The mere fact that this is an open question means that things like the philosophical question of knowledge, to give just one example, should be divided into intentional and mechanical incarnations–at the very least. Otherwise, given the ‘narcissistic idiosyncrasy’ of the former, we need to consider whether the kinds of conundrums that have plagued epistemology across the ages are precisely what we should expect. Chained to the informatic bottleneck of metacognition, epistemology has been trading in low-dimensional projections all along, attempting time and again to wring universality out of what amount to metacognitive glimpses of parochial cognitive heuristics. There’s a very real chance the whole endeavour has been little more than a fool’s errand.

The real question is one of why, as philosophers, we should bother entertaining the intentional stance. If the aim of philosophy really is, as Sellars has it, “to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term,” if explanatory scope is our goal, then understanding intentionality amounts to understanding it in functional terms, which is to say, as something that can only be understood in terms of the information it neglects. What is the adaptive explanatory ecology of any given intentional concept? What was it selected for? And if it is ‘specialized,’ would that not suggest incompatibility with different (i.e., theoretical) cognitive contexts? Given what little information we have, what arbitrates our various metacognitive glimpses, our perpetually underdetermined interpretations, allowing us to discriminate between stages on the continuum from the reliable to the farcical?

Short of answers to these questions, we cannot even claim to be engaging in educated as opposed to mere guesswork. So to return to “The Normal Well-tempered Mind,” what does Dennett mean when he says that neurons are best seen as agents? Does he mean that cellular machinery is complicated machinery, and so ill-served when conceptualized as a ‘mere switch’? Or does he mean they really are like little people, organized in little tribes, battling over little hopes and little crimes? I take it as obvious that he means the former, and that his insistence on the latter is more the ersatz product of a commitment he made long ago, one he has invested far too much effort in to relinquish.

‘Feral neurons’ are a metaphoric conceit, an interesting way to provoke original thought, perhaps, a convenient façon de parler in certain explanatory contexts, but more an attempt to make good on an old and questionable argument than anything else, one that would have made a younger Dennett, the one who wrote “Mechanism and Responsibility,” smile and scowl as he paused to conjure some canny and critical witticism. Intentionality, as the history of philosophy should make clear, is an invitation to second-order controversy and confusion. Perhaps what we have here is a potential empirical basis for the infamous Wittgensteinian injunction against philosophical language games. Attributing intentionality in first-order contexts is not only well and fine, it’s unavoidable. But as soon as we make second-order claims on the basis of metacognitive deliberation, say things like, ‘Knowledge is justified, true belief,’ we might as well be playing Monopoly using the pieces of Risk, ‘deriving’ theoretical syntaxes constrained–at that point–by nothing ‘out there.’

On BBT, ‘knowledge’ simply is what it has to be if we agree that the life science paradigm cuts reality as close to the joints as anything we have ever known: a system of mechanical bets, a swarm of secondary asteroids following algorithmic trajectories, ‘miraculously’ averting disaster time and again.

Breathtakingly complex.

Alien.

The Introspective Peepshow: Consciousness and the ‘Dreaded Unknown Unknowns’

by rsbakker

Aphorism of the Day: That it feels so unnatural to conceive ourselves as natural is itself a decisive expression of our nature.

.

This is a paper I finished a couple of months back, my latest attempt to ease those with a more ‘analytic’ mindset into the upside-down madness of my views. It definitely requires a thorough rewrite, so if you see any problems, or have any questions, or simply see a more elegant way of getting from A to B, please sound off. As for the fixation with ‘show’ in my titles, I haven’t the foggiest!

Oh, yes, the Abstract:

“Evidence from the cognitive sciences increasingly suggests that introspection is unreliable – in some cases spectacularly so – in a number of respects, even though both philosophers and the ‘folk’ almost universally assume the complete opposite. This draft represents an attempt to explain this ‘introspective paradox’ in terms of the ‘unknown unknown,’ the curious way the absence of explicit information pertaining to the reliability of introspectively accessed information leads to the implicit assumption of reliability. The brain is not only blind to its inner workings, it’s blind to this blindness, and therefore assumes that it sees everything there is to see. In a sense, we are all ‘natural anosognosiacs,’ a fact that could very well explain why we find the consciousness we think we have so difficult to explain.”

More generally I want to apologize for neglecting the comments of late. Routine is my lifeblood, and I’m just getting things back online after a particularly ‘noro-chaotic’ holiday. The more boring my life is, the more excited I become.

Brassier’s Divided Soul

by rsbakker

Aphorism of the Day: If science is the Priest and nature is the Holy Spirit, then you, my unfortunate friend, are Linda Blair.

.

And Jesus asked him, “What is your name?” He replied, “My name is Legion, for we are many.”  – Mark 5:9

.

For decades now the Cartesian subject–whole, autonomous and diaphanous–has been the whipping-boy of innumerable critiques turning on the difficulties that beset our intuitive assumptions of metacognitive sufficiency. A great many continental philosophers and theorists more generally consider it the canonical ‘Problematic Ontological Assumption,’ the conceptual ‘wrong turn’ underwriting any number of theoretical confusions and social injustices. Thinkers across the humanities regularly dismiss whole theoretical traditions on the basis of some perceived commitment to Cartesian subjectivity.

My long-time complaint with this approach lies in its opportunism. I entirely agree that the ‘person’ as we intuit it is ‘illusory’ (understood in some post-intentional sense). What I’ve never been able to understand, especially given post-structuralism’s explicit commitment to radical contextualism, is the systematic failure to think through the consequences of this claim. To put the matter bluntly: if Descartes’ metacognitive subject is ‘broken,’ an insufficient fragment confused for a sufficient whole, then how do we know that everything subjective isn’t likewise broken?

The real challenge, as the ‘scientistic’ eliminativism of someone like Alex Rosenberg makes clear, is not so much one of preserving sufficient subjectivity as it is one of preserving sufficient intentionality more generally. The reason the continental tradition first lost faith with the Cartesian and Kantian attempts to hang the possibility of intentional cognition from a subjective hook is easy enough to see from a cognitive scientific standpoint. Nietzsche’s ‘It thinks’ is more than pithy, just as his invocation of the physiological is more than metaphorical. The more we learn about what we actually do, let alone how we are made, the more fractionate the natural picture–or what Sellars famously called the ‘scientific image’–of the human becomes. We, quite simply, are legion. The sufficient subject, in other words, is easily broken because it is the most egregious illusion.

But it is by no means the only one. The entire bestiary of the ‘subjective’ is on the examination table, and there’s no turning back. The diabolical possibility has become fact.


Let’s call this the ‘Intentional Dissociation Problem,’ the problem of jettisoning the traditional metacognitive subject (person, mind, consciousness, being-in-the-world) while retaining some kind of traditional metacognitive intentionality–the sense-making architecture of the ‘life-world’–that goes with it. The stakes of this problem are such, I would argue, that you can literally use it to divide our philosophical present from our past. In a sense, one can forgive the naivete of the 20th century critique of the subject simply because (with the marvellous exception of Nietzsche) it had no inkling of the mad cognitive scientific findings confronting us. What is willful ignorance or bad faith for us was simply innocence for our teachers.

It is Wittgenstein, perhaps not surprisingly, who gives us the most elegant rendition of the problem, when he notes, almost in passing (see Tractatus, 5.542), the way so-called propositional attitudes such as desires and beliefs only make sense when attributed to whole persons as opposed to subpersonal composites. Say that Scott believes p, desires p, enacts p, and is held responsible for believing, desiring, and enacting. One night he murders his neighbour Rupert, shouting that he believes him a threat to his family and desires to keep his family safe. Scott is, one would presume, obviously guilty. But afterward, Scott declares he remembers only dreaming of the murder, and that while awake he has only loved and respected Rupert, and couldn’t imagine committing such a heinous act. Subsequent research reveals that Scott suffers from somnambulism, the kind associated with ‘homicidal sleepwalking’ in particular, such that his brain continually tries to jump from slow-wave sleep to wakefulness, and often finds itself trapped in between, with various subpersonal mechanisms running in ‘wake mode’ while others remain in ‘sleep mode.’ ‘Whole Scott’ suddenly becomes ‘composite Scott,’ an entity that clearly should not be held responsible for the murder of his neighbour Rupert. Thankfully, our legal system is progressive enough to take the science into account and see justice is done.

The problem, however, is that we are fast approaching the day when any scenario where Scott murders Rupert could be parsed in subpersonal terms and diagnosed as a kind of ‘malfunction.’ If you have any recent experience teaching public school you are literally living this process of ‘subpersonalization’ on a daily basis, where more and more the kinds of character judgements that you would thoughtlessly make even a decade or so ago are becoming inappropriate. Try calling a kid with ADHD ‘lazy and irresponsible,’ and you have identified yourself as lazy and irresponsible. High-profile thinkers like Dennett and Pinker have the troubling tendency of falling back on question-begging pragmatic tropes when considering this ‘spectre of creeping exculpation’ (as Dennett famously terms it in Freedom Evolves). In How the Mind Works, for instance, Pinker claims “that science and ethics are two self-contained systems played out among the same entities in the world, just as poker and bridge are different games played with the same fifty-two-card deck” (55)–even though the problem is precisely that these two systems are anything but ‘self-contained.’ Certainly it once seemed this way, but only so long as science remained stymied by the material complexities of the soul. Now we find ourselves confronted by an accelerating galaxy of real-world examples where we think we’re playing personal bridge, only to find ourselves trumped by an ever-expanding repertoire of subpersonal poker hands.

The Intentional Dissociation Problem, in other words, is not some mere ‘philosophical abstraction;’ it is part and parcel of an implacable science-and-capital driven process of fundamental subpersonalization that is engulfing society as we speak. Any philosophy that ignores it, or worse yet, pretends to have found a way around it, is Laputan in the most damning sense. (It testifies, I think, to the way contemporary ‘higher education’ has bureaucratized the tyranny of the past, that at such a time a call to arms has to be made at all… Or maybe I’m just channelling my inner Jeremiah–again!)

In continental circles, the distinction of recognizing both the subtlety and the severity of the Intentional Dissociation Problem belongs to Ray Brassier, one of but a handful of contemporary thinkers I know of who’ve managed to turn their back on the apologetic impulse and commit themselves to following reason no matter where it leads–to thinking through the implications of an institutionalized science truly indifferent to human aspiration, let alone conceit. In his recent “The View from Nowhere,” Brassier takes as his task precisely the question of whether rationality, understood in the Sellarsian sense as the ‘game of giving and asking for reasons,’ can survive the neuroscientific dismantling of the ontological self as theorized in Thomas Metzinger’s magisterial Being No One.

The bulk of the article is devoted to defending Metzinger’s neurobiological theory of selfhood as a kind of subreptive representational device (the Phenomenal Self Model, or PSM) from the critiques of Jürgen Habermas and Dan Zahavi, both of whom are intent on arguing the priority of the transcendental over the merely empirical–asserting, in other words, that playing normative (Habermas) or phenomenological (Zahavi) bridge is the condition of playing neuroscientific poker. But what Brassier is actually intent on showing is how the Sellarsian account of rationality is thoroughly consistent with ‘being no one.’

As he writes:

Does the institution of rationality necessitate the canonization of selfhood? Not if we learn to distinguish the normative realm of subjective rationality from the phenomenological domain of conscious experience. To acknowledge a constitutive link between subjectivity and rationality is not to preclude the possibility of rationally investigating the biological roots of subjectivity. Indeed, maintaining the integrity of rationality arguably obliges us to examine its material basis. Philosophers seeking to uphold the privileges of rationality cannot but acknowledge the cognitive authority of the empirical science that is perhaps its most impressive offspring. Among its most promising manifestations is cognitive neurobiology, which, as its name implies, investigates the neurobiological mechanisms responsible for generating subjective experience. Does this threaten the integrity of conceptual rationality? It does not, so long as we distinguish the phenomenon of selfhood from the function of the subject. We must learn to dissociate subjectivity from selfhood and realize that if, as Sellars put it, inferring is an act – the distillation of the subjectivity of reason – then reason itself enjoins the destitution of selfhood. (“The View From Nowhere,” 6)

The neuroscientific ‘destitution of selfhood’ is only a problem for rationality, in other words, if we make the mistake of putting consciousness before content. The way to rescue normative rationality, then, is to find some way to render it compatible with the subpersonal–the mechanistic. This is essentially Daniel Dennett’s perennial argument, dating all the way back to Content and Consciousness. And this, as followers of TPB know, is precisely what I’ve been arguing against for the past several months, not out of any animus to the general view–I literally have no idea how one might go about securing the epistemic necessity of the intentional otherwise–but because I cannot see how this attempt to secure meaning against neuroscientific discovery amounts to anything more than an ingenious form of wishful thinking, one that has the happy coincidence of sparing the discipline that devised it. If neuroscience has imperilled the ‘person,’ and the person is plainly required to make sense of normative rationality, then an obvious strategy is to divide the person: into an empirical self we can toss to the wolves of cognitive science and into a performative subject that can nevertheless guarantee the intentional.

Let’s call this the ‘Soul-Soul strategy’ in contradistinction to the Soul-First strategies of Habermas and Zahavi (or the Separate-but-Equal strategy suggested by Pinker above). What makes this option so attractive, I think, anyway, is the problem that so cripples the Soul-First and the Separate-but-Equal options: the empirical fact that the brain comes first. Gunshots to the head put you to sleep. If you’ve ever wondered why ‘emergence’ is so often referenced in philosophy of mind debates, you have your answer here. If Zahavi’s ‘transcendental subject,’ for instance, is a mere product of brain function, then the Soul-First strategy becomes little more than a version of Creationism and the phenomenologist a kind of Young-Earther. But if it’s emergent, which is to say, a special product of brain function, then he can claim to occupy an entirely natural, but thoroughly irreducible ‘level of explanation’–the level of us.

This is far and away the majority position in philosophy, I think. But for the life of me, I can’t see how to make it work. Cognitive science has illuminated numerous ways in which our metacognitive intuitions are deceptive, effectively relieving deliberative metacognition of any credibility, let alone its traditional, apodictic pretensions. The problem, in other words, is that even if we are somehow a special product of brain function, we have no reason to suppose that emergence will confirm our traditional, metacognitive sense of ‘how it’s gotta be.’ ‘Happy emergence’ is a possibility, sure, but one that simply serves to underscore the improbability of the Soul-First view. There are far, far more ways for our conceits to be contradicted than confirmed, which is likely why science has proven to be such a party crasher over the centuries.

Splitting the soul, however, allows us to acknowledge the empirically obvious, that brain function comes first, without having to relinquish the practical necessity of the normative. Therein lies its chief theoretical attraction. For his part, Brassier relies on Sellars’ characterization of the relation between the manifest and the scientific images of man: how the two images possess conceptual parity despite the explanatory priority of the scientific image. Brain function comes first, but:

The manifest image remains indispensable because it provides us with the necessary conceptual resources we require in order to make sense of ourselves as persons, that is to say, concept-governed creatures continually engaged in giving and asking for reasons. It is not privileged because of what it describes and explains, but because it renders us susceptible to the force of reasons. It is the medium for the normative commitments that underwrite our ability to change our minds about things, to revise our beliefs in the face of new evidence and correct our understanding when confronted with a superior argument. In this regard, science itself grows out of the manifest image precisely insofar as it constitutes a self-correcting enterprise. (4)

Now this is all well and fine, but the obvious question from a relentlessly naturalistic perspective is simply, “What is this ‘force’ that ‘reasons’ possess?” And here it is that we see the genius of the Soul-Soul strategy, because the answer is, in a strange sense, nothing:

Sellars is a resolutely modern philosopher in his insistence that normativity is not found but made. The rational compunction enshrined in the manifest image is the source of our ability to continually revise our beliefs, and this revisability has proven crucial in facilitating the ongoing expansion of the scientific image. Once this is acknowledged, it seems we are bound to conclude that science cannot lead us to abandon our manifest self-conception as rationally responsible agents, since to do so would be to abandon the source of the imperative to revise. It is our manifest self-understanding as persons that furnishes us, qua community of rational agents, with the ultimate horizon of rational purposiveness with regard to which we are motivated to try to understand the world. Shorn of this horizon, all cognitive activity, and with it science’s investigation of reality, would become pointless. (5)

Being a ‘subject’ simply means being something that can act in a certain way, namely, take other things as intentional. Now I know first hand how convincing and obvious this all sounds from the inside: it was once my own view. When the traditional intentional realist accuses you of reducing meaning to a game of make-believe, you can cheerfully agree, and then point out the way it nevertheless allows you to predict, explain, and manipulate your environment. It gives everyone what they want: You can yield explanatory priority to the sciences and yet still insist that philosophy has a turf. Whither science takes us, we need not move, at least when it comes to those ‘indispensable, ultimate horizons’ that allow us to make sense of what we do. It allows the philosopher to continue speaking in transcendental terms without making transcendental commitments, rendering it (I think anyway) into a kind of ‘performative first philosophy,’ theoretically inoculating the philosopher against traditional forms of philosophical critique (which require ontological commitment to do any real damage).

The Soul-Soul strategy seems to promise a kind of materialism without intentional tears. The problem, however, is that cognitive science is every bit as invested in understanding what we do as in describing what we are. Consider Brassier’s comment from above: “It is our manifest self-understanding as persons that furnishes us, qua community of rational agents, with the ultimate horizon of rational purposiveness with regard to which we are motivated to try to understand the world.” From a cognitive science perspective one can easily ask: Is it? Is it our ‘manifest understanding of ourselves’ that ‘motivates us,’ and so makes the scientific enterprise possible?

Well, there’s a growing body of research that suggests we (whatever we may be) have no direct access to our motives, but rather guess with reference to ourselves using the same cognitive tools we use to guess at the motives of others. Now, the Soul-Soul theorist might reply, ‘Exactly! We only make sense to ourselves against a communal background of rational expectations…’ but they have actually missed the point. The point is, our motivations are occluded, which raises the possibility that our explanatory guesswork has more to do with social signalling than with ‘getting motivations right.’ This effectively blocks ‘motivational necessity’ as an argument securing the ineliminability of the intentional. It also raises the question of what kind of game we are actually playing when we play the so-called ‘game of giving and asking for reasons.’ All you need consider is the ‘spectre’ of neuromarketing in the commercial or political arena, where one interlocutor secures the assent of the other by treating that other subpersonally (explicitly, as opposed to implicitly, which is arguably the way we treat one another all the time).

Any number of counterarguments can be adduced against these problems, but the crucial thing to appreciate is that these concerns need only be raised to expose the Soul-Soul strategy as mere make-believe. Sure, our brains are able to predict, explain, and manipulate certain systems, but the anthropological question requiring scientific resolution is one of where ‘we’ fit in this empirical picture, not just in the sense of ‘destitute selves,’ but in every sense. Nothing guarantees an autonomous ‘level of persons,’ not incompatibility with mechanistic explanation, and least of all speculative appraisals (of the kind, say, Dennett is so prone to make) of its ‘performative utility.’

To sharpen the point: If we can’t even say for sure that we exist the way we think, how can we say that our brains nevertheless do the things we think they do, things like ‘inferring’ or ‘taking-as intentional’?

Brassier writes:

The concept of the subject, understood as a rational agent responsible for its utterances and actions, is a constraint acquired via enculturation. The moral to be drawn here is that subjectivity is not a natural phenomenon in the way in which selfhood is. (32)

But as a doing it remains a ‘natural phenomenon’ nonetheless (what else would it be?). As such, the question arises: Why should we expect that ‘concepts’ will suffer a more metacognitive-intuition friendly fate than ‘selves’? Why should we think the sciences of the brain will fail to revolutionize our traditional normative understanding of concepts, perhaps relegate it to a parochial, but ineliminable shorthand forced upon us by any number of constraints or confounds, or so contradict our presumed role in conceptual thinking as to make ‘rationality’ as experienced a kind of fiction? What we cognize as the ‘game of giving and asking for reasons,’ for all we know, could be little more than the skin of plotting beasts, an illusion foisted on metacognition for the mere want of information.

Brassier writes:

It forces us to revise our concept of what a self is. But this does not warrant the elimination of the category of agent, since an agent is not a self. An agent is a physical entity gripped by concepts: a bridge between two reasons, a function implemented by causal processes but distinct from them. (32)

Is it? How do we know? What ‘grips’ what how? Is the function we attribute to this ‘gripping’ a cognitive mirage? As we saw in the case of homicidal somnambulism above, it’s entirely unclear how subpersonal considerations bear on agency, whether understood legally or normatively more generally. But if agency is something we attribute, doesn’t this mean the sleepwalker is a murderer merely if we take him to be? Could we condemn personal Scott to death by lethal injection in good conscience knowing we need only think him guilty for him to be so? Or are our takings-as constrained by the actual function of his brain? But then how can we scientifically establish ‘degrees of agency’ when the subpersonal, the mechanistic, has the effect of chasing out agency altogether?

These are living issues. If it weren’t for the continual accumulation of subpersonal knowledge, I would say we could rely on collective exhaustion to eventually settle the issue for us. Certainly philosophical fiat will never suffice to resolve the matter. Science has raised two spectres that only it can possibly exorcise (while philosophy remains shackled on the sidelines). The first is the spectre of Theoretical Incompetence, the growing catalogue of cognitive shortcomings that probably explains why only science can reliably resolve theoretical disputes. The second is Metacognitive Incompetence, the growing body of evidence that overthrows our traditional and intuitive assumptions of self-transparency. Before the rise of cognitive science, philosophy could continue more or less numb to the pinch of the former and all but blind to the throttling possibility of the latter. Now however, we live in an age where massive, wholesale self-deception, no matter what logical absurdities it seems to generate, is a very real empirical possibility.

What we intuit regarding reason and agency is almost certainly the product of compound neglect and cognitive illusion to some degree. It could be the case that we are not intentional in such a way that we must (short of the posthuman, anyway) see ourselves and others as intentional. Or even worse, it could be the case that we are not intentional in such a way that we can only see ourselves and others as intentional whenever we deliberate on the scant information provided by metacognition–whenever we ‘make ourselves explicit.’ Whatever the case, whether intentionality is a first or second-order confound (or both), this means that pursuing reason no matter where it leads could amount to pursuing reason to the point where reason becomes unrecognizable to us, to the point where everything we have assumed will have to be revised–corrected. And in a sense, this is the argument that does the most damage to Sellars’s particular variant of the Soul-Soul strategy: the fact that science, having obviously run to the limits of the manifest image’s intelligibility, nevertheless continues to run, continues to ‘self-correct’ (albeit only in a way that we can understand ‘under erasure’), perhaps consigning its wannabe guarantor and faux-motivator to the very dust-bin of error it once presumed to make possible.


In his recent After Nature interview, Brassier writes:

[Nihil Unbound] contends that nature is not the repository of purpose and that consciousness is not the fulcrum of thought. The cogency of these claims presupposes an account of thought and meaning that is neither Aristotelian—everything has meaning because everything exists for a reason—nor phenomenological—consciousness is the basis of thought and the ultimate source of meaning. The absence of any such account is the book’s principal weakness (it has many others, but this is perhaps the most serious). It wasn’t until after its completion that I realized Sellars’ account of thought and meaning offered precisely what I needed. To think is to connect and disconnect concepts according to proprieties of inference. Meanings are rule-governed functions supervening on the pattern-conforming behaviour of language-using animals. This distinction between semantic rules and physical regularities is dialectical, not metaphysical.

Having recently completed Rosenberg’s The Atheist’s Guide to Reality, I entirely concur with Brassier’s diagnosis of Nihil Unbound’s problem: any attempt to lay out a nihilistic alternative to the innumerable ‘philosophies of meaning’ that crowd every corner of intellectual life without providing a viable account of meaning is doomed to the fringes of humanistic discourse. Rosenberg, for his part, simply bites the bullet, relying on the explanatory marvels of science and its obvious incompatibilities with meaning to warrant dispensing with the latter. The problem, however, is that his readers can only encounter his case through the lens of meaning, placing Rosenberg in the absurd position of using argumentation to dispel what, for his interlocutors, lies in plain sight.

Brassier, to his credit, realizes that something must be said about meaning, that some kind of positive account must be given. But in the absence of any positive, nihilistic alternative–any means of explaining meaning away–he opts for something deflationary: he turns to Sellars (as did Dennett), and to the presumption that meaning pertains to a different, dialectical order of human community and interaction. This affords him the appearance of having it both ways (like Dennett): deference to the priority of mechanism, while insisting on the parity of meaning and reason, arguing, in effect, that we have two souls, one a neurobiological illusion, the other a ‘merely functional’ instrument of enormous purport and power…

Or so it seems.

What I’ve tried to show is that cognitive science cares not a whit whether we characterize our commitments as metaphysical or dialectical, that it is just as apt to give the lie to metacognitively informed accounts of what we do as to metacognitively informed accounts of what we are. ‘Inferring’ is no more immune to radical scientific revision than is ‘willing’ or ‘believing’ or ‘taking as’ or what have you. So for example, if the structures underwriting consciousness in the brain were definitively identified, and the information isolated as ‘inferring’ could be shown to be, say, distorted low-dimensional projections, jury-rigged ‘fixes’ to far different evolutionary pressures, would we not begin, in serious discussions of cognition or what have you, to continually reference these limitations to the degree they distort our understanding of the actual activity involved? If it becomes a scientific fact that we are a far different creature in a far different environment than what we take ourselves to be, will that not radically transform any discourse that aspires to be cognitive?

Of course it will.

Perhaps the post-intentional philosophy of the future will see the ‘game of giving and asking for reasons’ as a fragmentary shadow, a comic strip version of our actual activity, more distortion than distillation because neither the information nor the heuristics available for deliberative metacognition are adapted to the needs of deliberative metacognition.

This is one reason why I think ‘natural anosognosia’ is such an apt way to describe our straits. We cannot get past the ‘only game in town’ sense of agency, primarily because there’s nothing else to be got. This is the thing about positing ‘functions’: the assumption is that what we experience does what we think it does the way we think it should. There is no reason to assume this must be the case once we appreciate the ubiquity and the consequences of informatic neglect (and our resulting metacognitive incompetence). We have more than enough in the way of counterintuitive findings to worry that we are about to plunge over a cliff–that the soul, like the sky, might simply continue dropping into an ever deeper abyss. The more we learn about ourselves, the more post hoc and counterintuitive we become. Perhaps this is astronomically the case.


Here’s the funny thing: the naturalistic fundamentals are exceedingly clear. Humans are information systems that coordinate via communicated information. The engineering (reverse or forward) challenges posed by this basic picture are enormous, but conceptually, things are pretty clear–so long as you keep yourself off-screen.

We are the only ‘fundamental mystery’ in the room. The problem of meaning is the problem of us.

In addition to Rosenberg’s Atheist’s Guide to Reality I also recently completed reading Plato’s Camera by Churchland and The Cognitive Science of Science by Thagard, and I found the contrast… bracing, I guess. Rosenberg made stark the pretence (or more charitably, promise) marbled throughout Churchland and Thagard, the way they ceaselessly swap between the mechanistic and the intentional as if their descriptions of the former, by the mere fact of loosely correlating to our assumptions regarding the latter, somehow explained the latter. Thagard, for instance, goes so far as to claim that the ‘semantic pointer’ model of concepts that he adapts from Eliasmith (of recent SPAUN fame) solves the symbol grounding problem without so much as mentioning how, when, or where semantic pointers (which are eminently amenable to BBT) gain their hitherto inexplicable normative/intentional properties. In other words, they simply pretend there’s no real problem of meaning–even Churchland! “Ach!” they seem to imply, “Details! Details!”

Rosenberg will have none of it. But since he has no way of explaining ‘us,’ he attempts the impossible: he tries to explain us away without explaining us at all, arguing that we are a problem for neuroscience, not for scientism (the philosophical hyper-naturalism that he sees following from the sciences). He claims ‘we’ are philosophically irrelevant because ‘we’ are inconsistent with the world as described by science, not realizing the ease with which this contention can be flipped into the claim that the sciences are philosophically irrelevant so long as they remain inconsistent with us…

Theoretical dodge-ball will not do. Brassier understands this more clearly than any other thinker I know. The problem of meaning has to be tackled. But unlike Jesus, we cannot cast the subpersonal out into two thousand suicidal swine. ‘Going dialectical,’ abandoning ‘selves’ for the perceived security of ‘rational agency,’ ultimately underestimates the wholesale nature of the revisionary/eliminative threat posed by the cognitive sciences, and the degree to which our intentional self-understanding relies on ignorance of our mechanistic nature. Any scientific account of physical regularities that explains semantic rules in terms that contradict our metacognitive assumptions will revolutionize our understanding of ‘rational agency,’ no matter what definitional/theoretical prophylactics we have in place.

Habermas’ analogy of “a consciousness that hangs like a marionette from an inscrutable criss-cross of strings” (“The Language Game or Responsible Agency and the Problem of Free Will,” 24) seems more and more likely to be the case, even at the cost of our ability to make metacognitive sense of our ‘selves’ or our ‘projects.’ (Evolution, to put the point delicately, doesn’t give a flying fuck about our ability to ‘accurately theorize’). This is the point I keep hammering via BBT. Once deliberative theoretical metacognition has been overthrown, it’s anybody’s guess how the functions we attribute to ourselves and others will map across the occluded, orthogonal functions of our brain. And this simply means that the human in its totality stands exposed to the implacable indifference of science…

I think we should be frightened–and exhilarated.

Our capacity to cognize ourselves is an evolutionary shot in the neural dark. Could anyone have predicted that ‘we’ have no direct access to our beliefs and motives, that ‘we’ have to interpret ourselves the way we interpret others? Could anyone have predicted the seemingly endless list of biases discovered by cognitive psychology? Or that the ‘feeling of willing’ might simply be the way ‘we’ take ownership of our behaviour post hoc? Or that ‘moral reasoning’ is primarily a PR device? Or that our brains regularly rewrite our memories? Think of Hume, the philosopher-prophet, and his observation that Adam could never deduce that water drowns or fire burns short of worldly experience. What we do, like what we are, is a genuine empirical mystery simply because our experience of ourselves, like our experience of earth’s motionless centrality, is the product of scant and misleading information.

The human in its totality stands exposed to the implacable indifference of science, and there are far, far more ways for our intuitive assumptions to be wrong than right. I sometimes imagine I’m sitting around this roulette wheel, with nearly everyone in the world ‘going with their gut’ and stacking all their chips on the zeros, so there’s this great teetering tower swaying on intentional green, leaving the rest of the layout empty… save for solitary corner-betting contrarians like me and, I hope, Brassier.

The Second Room: Phenomenal Realism as Grammatical Violation

by rsbakker

Aphorism of the Day: Atheist or believer, we all get judged by God. The one that made us, or the one we make.


So just what the hell did Wittgenstein mean when he wrote this?

“‘And yet you again and again reach the conclusion that the sensation itself is a nothing.’ Not at all. It is not a something, but not a nothing either! The conclusion was only that a nothing would serve just as well as a something about which nothing could be said.” (1953, §304)

I can remember attempting to get a handle on this section of Philosophical Investigations in a couple of graduate seminars, contributing nothing more than once stumping my professor with the question of fraudulent workplace injury claims. But now, at long last, I (inadvertently) find myself in a position to explain what Wittgenstein was onto, and perhaps where he went wrong.

My view is simply that the mental and the environmental are pretty much painted with the same informatic brush, and pretty much comprehended using the same cognitive tools, the difference being that the system as a whole is primarily evolved to track and exploit the environmental, and as a result has great difficulty attempting to track and leverage the ‘mental’ so-called.

If you accept the mechanistic model of the life sciences, then you accept that you are an environmentally situated, biomechanical, information processing system. Among the features that characterize you as such a system is what might be called ‘structural idiosyncrasy,’ the fact that the system is the result of innumerable path dependencies. As a bottom-up designer, evolution relies on the combination of preexisting capacities and happenstance to provide solutions, resulting in a vast array of ad hoc capacities (and incapacities). Certainly the rigours of selection will drive various functional convergences, but each of those functions will bear the imprint of the evolutionary twists that led it there.

Another feature that characterizes you as such a system is medial neglect. Given that the resources of the system are dedicated to modelling and exploiting your environments, the system itself constitutes a ‘structural blindspot’: it is the one part of your environment that you cannot readily include in your model of the environment. The ‘medial’ causality of the neural, you could say, must be yoked to the ‘lateral’ causality of the environmental to adequately track and respond to opportunities and threats. The system must be blind to itself to see the world.

A third feature that characterizes you as such a system is heuristic specificity. Given the combination of environmental complexity, structural limitations, and path dependency, cognition is situation-specific, fractionate, and non-optimal. The system solves environmental problems by neglecting forms of information that are either irrelevant or not accessible. So, to give what is perhaps the most dramatic example, one can suggest that intentionality, understood as aboutness, possesses a thoroughly heuristic structure. Given medial neglect, the system has no access to information pertaining to anything but the grossest details of its causal relationship to its environments. It is forced, therefore, to model that relationship in coarse-grained, acausal terms–or put differently, in terms that occlude the neurofunctionality that makes the relationship possible. As a result, you experience apples in your environment, oblivious to any of the machinery that makes this possible. This ‘occlusion of the neurofunctional’ generates efficiencies (enormous ones, given the system’s complexity) so long as the targets tracked are not themselves causally perturbed by (medial) tracking. Since the system is blind to the medial, any interference it produces will generate varying degrees of ‘lateral noise.’

A final feature that characterizes you as such a system might be called internal access invariability, the fact that cognitive subsystems receive information via fixed neural channels. All this means is that cognitive subsystems are ‘hardwired’ into the rest of the brain.

Given a handful of caveats, I don’t think any of the above should be all that controversial.

Now, the big charge against Wittgenstein regarding sensation is some version of crypto-behaviourism, the notion that he is impugning the reality of sensation simply because only pain behaviour is publicly observable, while the pain itself remains a ‘beetle in a box.’ The problem people have with this characterization is as clear as pain itself. One could say that nothing is more real than pain, and yet here’s this philosopher telling you that it is ‘neither a something nor a nothing.’

Now I also think nothing is more real than pain, but I agree with Wittgenstein, at long last, that pain is ‘neither a something nor a nothing.’ The challenge I face is one of finding some way to explain this without sounding insane.

The thing to note about the four features listed above is how each, in its own way, compromises human cognition. This is no big news, of course, but my view takes the approach that the great philosophical conundrums can be seen as diagnostic clues to the way cognition is compromised, and that conversely, the proper theoretical account of our cognitive shortcomings will allow us to explain or explain away the great philosophical conundrums. And Wittgenstein’s position certainly counts as one of the most persistent puzzles confronting philosophers and cognitive scientists today: the question of the ontological status of our sensations.

Another way of putting my position is this: Everyone agrees you are a biomechanism possessing myriad relationships with your environment. What else would humans (qua natural) be? The idea that understanding the specifics of how human cognition fits into that supercomplicated causal picture will go a long way toward clearing up our myriad, longstanding confusions is also something most everyone would agree with. What I’m proposing is a novel way of seeing how those confusions fall out of our cognitive limitations–the kinds of information and capacities that we lack, in effect.

So what I want to do, in a sense, is turn the problem of sensation in Wittgenstein upside down. The question I want to ask is this: How could the four limiting features described above, structural idiosyncrasy (the trivial fact that out of all the possible forms of cognition we evolved this one), medial neglect (the trivial fact that the brain is structurally blind to itself as a brain), heuristic specificity (the trivial fact that cognition relies on a conglomeration of special purpose tools), and access invariability (the trivial fact that cognition accesses information via internally fixed channels) possibly conspire to make Wittgenstein right?

Well, let’s take a look at what seems to be the most outrageous part of the claim: the fact that pain is ‘neither a something nor a nothing.’ This, I think, points rather directly at heuristic specificity. The idea here would be that the heuristics or heuristic systems we use to identify entities are simply misapplied with reference to sensations. As extraordinary as this claim might seem, it really is old hat scientifically speaking. Quantum Field Theory forced us quite some time ago to abandon the assumption that our native understanding of entities and existence extends beyond the level of apples and lions we evolved to survive in. That said, sensation most certainly belongs to the ‘level’ of apples and lions: eating apples causes pleasure as reliably as lion attacks cause pain.

We need some kind of account, in other words, of how construing sensations as extant things might count as a heuristic misapplication. This is where medial neglect enters the picture. First off, medial neglect explains why heuristic misapplications are inevitable. Not only can’t we intuit the proper scope of application for the various heuristic devices comprising cognition, we can’t even intuit the fact that cognition consists of multiple heuristic devices at all! In other words, cognition is blind to both its limits and its constitution. This explains why misapplications are both effortless and invisible–and most importantly, why we assume cognition to be universal, why quantum and cosmological violations of intuition come as a surprise. (This also motivates taking a diagnostic approach to classic philosophical problems: conundrums such as this indirectly reveal something of the limitations and constitution of cognition).

But medial neglect can explain more than just the possibility of such a misapplication; it also provides a way to explain why it constitutes a misapplication, as well as why the resulting conundrums take the forms they do. Take the ‘aboutness heuristic’ considered above. Given that the causal structure of the brain is dedicated to tracking the causal structure of its environment, that structure cannot itself be tracked, and so must be ‘assumed.’ Aboutness is forced upon the system. This occlusion of the causal intricacies of the system’s relation to its environment is inconsequential: so long as the medial tracking of targets in no way interferes with those targets, medial neglect simply relieves the system of an impossible computational load.

But despite its effectiveness, aboutness remains heuristic, remains a device (albeit a ‘master device’) that solves problems via information neglect. This simply means that aboutness possesses a scope of applicability, that it is not universal. It is adapted to a finite range of problems, namely, those involving functionally independent environmental entities and events. The causal structure of the system, again, is dedicated to modelling the causal structure of its environment (thus the split between medial (modelling) and lateral (modelled) functionality). This ensures the system will encounter tremendous difficulty whenever it attempts to model its own modelling. Why? I’ve considered a number of different reasons (such as neural complexity) in a number of different contexts, but the primary, heuristic culprit is that the targets to be tracked are all functionally entangled in these ‘metacognitive’ instances.

The basic structure of human cognition, in other words, is environmental, which is to say, adapted to things out there functioning independently of any neural tracking. It is not adapted to the ‘in here,’ to what we are prone to call the mental. This is why the introspective default assumption is to see the ‘mental’ as a ‘secondary environment,’ as a collection of functionally independent events and entities tracked by some kind of mysterious ‘inner eye.’ Cognition isn’t magical. To cognize something requires cognitive resources. Keeping in mind that the point of this exercise is to explain how Wittgenstein could be right, we could postulate (presuming evolutionary parsimony) that second-order reflection possesses no specially adapted ‘master device,’ no dedicated introspective cognitive system, but instead relies on its preexisting structure and tools. This is why the ‘in here’ is inevitably cognized as a ‘little out there,’ a kind of peculiar secondary environment.

A sensation–or quale, to use the philosophy of mind term–is the product of an occurrent medial circuit, and as such impossible to laterally model. This is what Wittgenstein means when he says pain is ‘neither a something nor a nothing.’ The information required to accurately cognize ‘pain’ is the very information systematically neglected by human cognition. Second-order deliberative cognition transforms it into something ‘thinglike’ nevertheless, because it is designed to cognize functionally independent entities. The natural question then becomes, What is this thing? Given the meagre amount of information available and the distortions pertaining to cognitive misapplication, it necessarily becomes the most baffling thing we can imagine.

Given structural idiosyncrasy (again, the path dependence of our position in ‘design space’), it simply ‘is what it is,’ a kind of astronomically coarse-grained ‘random projection’ of higher dimensional neural space perhaps. Why is pain like pain? Because it dangles from all the same myriad path dependencies as our brains do. Given internal access invariability (again, the fact that cognition possesses fixed channels to other neural subsystems) it is also all there is: cognition cannot inspect or manipulate a quale the way it can actual things in its environment via exploratory behaviours, so unlike other objects they necessarily appear to be ‘irreducible’ or ‘simple.’ On top of everything, qualia will also seem causally intractable given the utter occlusion of neurofunctionality that falls out of medial neglect, as well as the distortions pertaining to heuristic specificity.
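For those who like their metaphors cashed out, here is a minimal numerical sketch of what a coarse-grained ‘random projection’ involves. This is my own toy illustration, nothing BBT itself specifies: the dimensions, the numpy library, and the particular linear map are arbitrary choices made for the sake of the example.

    import numpy as np

    rng = np.random.default_rng(0)

    # A toy stand-in for a high-dimensional neural state: 10,000 dimensions.
    neural_state = rng.normal(size=10_000)

    # A fixed, arbitrary ('path-dependent') linear map down to 5 dimensions.
    projection = rng.normal(size=(5, 10_000)) / np.sqrt(10_000)

    # The low-dimensional image: all the system 'sees' of itself.
    experience = projection @ neural_state
    print(experience)

    # Countless distinct high-dimensional states collapse onto the same image:
    # perturb the state along the map's 9,995-dimensional null space and the
    # 'experience' does not budge, nor does it record that anything changed.
    perturbation = rng.normal(size=10_000)
    coeffs, *_ = np.linalg.lstsq(projection.T, perturbation, rcond=None)
    perturbation -= projection.T @ coeffs        # now orthogonal to every row
    altered_state = neural_state + 1_000 * perturbation
    print(np.allclose(projection @ altered_state, experience, atol=1e-6))  # True

The point of the toy is simply that the five surviving numbers carry no record whatsoever of the 9,995 dimensions discarded, a crude analogue of medial neglect.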

As things, therefore, qualia strike us as ineffable, intrinsic, and etiologically opaque. Strange ‘somethings’ indeed!

Given our four limiting features, then, we can clearly see that Wittgenstein’s hunch is grammatical and not behaviouristic. The problem with sensations isn’t so much epistemic privacy as it is information access and processing: when we see qualia as extant things requiring explanation like other things, we’re plugging them into a heuristic regime adapted to discharge functionally independent environmental challenges. Wittgenstein himself couldn’t see it as such, of course, which is perhaps why he takes as many runs at the problem as he does.

Okay, so much for Wittgenstein. The real question, at this point, is one of what it all means. After all, despite what might seem like fancy explanatory footwork, we still find ourselves stranded with a something that is neither a something nor a nothing! Given that absurd conclusions generally mean false premises, why shouldn’t we simply think Wittgenstein was off his rocker?

Well, for one, given the conundrums posed by ‘phenomenal realism,’ you could argue that the absurdity is mutual. For another, the explanatory paradigm I’ve used here (the Blind Brain Theory) is capable of explaining away a great number of such conundrums (at the cost of our basic default assumptions, typically).

The question then becomes whether a general gain in intelligibility warrants accepting one flagrant absurdity–a something that is neither a something nor a nothing.

The first thing to recall is that this situation isn’t new. Apparent absurdity is alive and well at the cosmological and quantum levels of physical explanation. The second thing to recall is that human cognition is the product of myriad evolutionary pressures. Much as we did not evolve to be ideal physicists, we did not evolve to be ideal philosophers. Structural idiosyncrasy, in other words, gives us good reason to expect cognitive incapacities generally. And indeed, cognitive psychology has spent several decades isolating and identifying numerous cognitive foibles. The only real thing that distinguishes this particular ‘foible’ is the interpretative centrality (not to mention cherished status) of its subject matter–us!

‘Us,’ indeed. Once again, if you accept the mechanistic model of the life sciences (if you’re inclined to heed your doctor before your priest), then you accept that you are an environmentally situated, biomechanical information processing system. Given this, perhaps we should add a fifth limiting feature that characterizes you: ‘informatic locality,’ the way your system has to make do with the information it can either store or sense. Your particular brain-environment system, in other words, is its own ‘informatic frame of reference.’

Once again, given the previous four limiting features, the system is bound to have difficulty modelling itself. Consider another famous head-scratcher from the history of philosophy, this one from William James:

“The physical and the mental operations form curiously incompatible groups. As a room, the experience has occupied that spot and had that environment for thirty years. As your field of consciousness it may never have existed until now. As a room, attention will go on to discover endless new details in it. As your mental state merely, few new ones will emerge under attention’s eye. As a room, it will take an earthquake, or a gang of men, and in any case a certain amount of time, to destroy it. As your subjective state, the closing of your eyes, or any instantaneous play of your fancy will suffice. In the real world, fire will consume it. In your mind, you can let fire play over it without effect. As an outer object, you must pay so much a month to inhabit it. As an inner content, you may occupy it for any length of time rent-free. If, in short, you follow it in the mental direction, taking it along with events of personal biography solely, all sorts of things are true of it which are false, and false of it which are true if you treat it as a real thing experienced, follow it in the physical direction, and relate it to associates in the outer world.” (“Does ‘Consciousness’ Exist?”)

The genius of this passage, as I take it, is the way it refuses to relinquish the profound connection between the third person and the first, alternating instead from the one to the other, as if it were a single, inexplicable lozenge that tasted radically different when held against the back or front of the tongue–the room as empirically indexed versus the room as phenomenologically indexed. Wittgenstein’s problem, expressed in these terms, is simply one of how the phenomenological room fits into the empirical. From a brute mechanistic perspective, the system is first modelling the room absent any model of its occurrent modelling, then modelling its modelling of the room–and here’s the thing, absent any model of its occurrent modelling. The aboutness heuristic, as we saw, turns on medial neglect. This is what renders the second target, ‘room-modelling,’ so difficult to square with the ‘grammar’ of the first, ‘room,’ perpetually forcing us to ask, What the hell is this second room?

The thing to realize at this juncture is that there is no way to answer this question so long as we allow the apparent universality of the aboutness heuristic to get the better of us. ‘Room-modelling’ will never fit the grammar of ‘room’ simply because it is–clearly, I would argue–the product of informatic privation (due to medial neglect) and heuristic misapplication (due to heuristic specificity).

On the contrary, the only way to solve this ‘problem’ (perhaps the only way to move beyond the conundrums that paralyze philosophy of mind and consciousness research as a whole) is to bracket aboutness, to finally openly acknowledge that our apparent baseline mode of conceptualizing truth and reality is in fact heuristic, which is to say, a mode of problem-solving that turns on information neglect and so possesses a limited scope of effective application. So long as we presume the dubious notion that cognitive subsystems adapted to trouble-shooting external environments absent various classes of information are adequate to the task of trouble-shooting the system of which they are a part, then we will find ourselves trapped in this grammatical (algorithmic) impasse.

In other words, we need to abandon our personal notion of the ‘knower’ as a kind of ‘anosognosiac fantasy,’ and begin explaining our inability to resolve these difficulties in subpersonal terms. We are an assemblage of special purpose cognitive tools, not whole, autonomous knowers attempting to apprehend the fundamental nature of things. We are machines attempting to model ourselves as such, and consistently failing because of a variety of subsystemic functional limitations.

You could say what we need is a whole new scientific subdiscipline: the cognitive psychology of philosophy. I realize that this sounds like anathema to many–it certainly strikes me as such! But no matter what one thinks of the story above, I find it hard to fathom how philosophy can avoid this fate now that the black box of the brain has been cracked open. In other words, we need to see the inevitability of this picture or something like it. As a natural result of the kind of system that we happen to be, the perennial conundrums of consciousness (and perhaps philosophy more generally) are something that science will eventually explain. Only ignorance or hubris could convince us otherwise.

We affirm the cosmological and quantum ‘absurdities’ we do because of the way science allows us to transcend our heuristic limitations. Science, you could say, is a kind of ‘meta-heuristic,’ a way to organize systems such that their individual heuristic shortcomings can be overcome. The Blind Brain picture sketched above bets that science will recast the traditional metaphysical problem of consciousness in fundamentally mechanistic terms. It predicts that the traditional categorical bestiary of metaphysics will be supplanted by categories of information indexed according to their functions. It argues that the real difficulty of consciousness lies in the cognitive illusions secondary to informatic neglect.

One can conceive this in different ways, I think: You could keep your present scientifically informed understanding of the universe as your baseline, and ‘explain away’ the mental (and much of the lifeworld with it) as a series of cognitive illusions. Qualia can be conceived as ‘phenomemes,’ combinatorial constituents of conscious experience, but no more ‘existential’ than phonemes are ‘meaningful.’ This view takes the third-person brain revealed by science as canonical, and the first-person brain (you!) as a ‘skewed and truncated low-dimensional projection’ of that brain. The higher-order question as to the ontological status of that ‘skewed and truncated low-dimensional projection’ is diagnostically blocked as a ‘grammatical violation’ by the recognition that such a move constitutes a clear heuristic misapplication.

Or one could envisage a new kind of scientific realism, where the institutions are themselves interpreted as heuristic devices, and we can get to the work of describing the nonsemantic nature of our relation to each other and the cosmos. This would require acknowledging the profundity of our individual theoretical straits, to embrace our epistemic dependence on the actual institutional apparatus of science–to see ourselves as glitchy subsystems in larger social mechanisms of ‘knowing.’ On this version, we must be willing to detach our intellectual commitments from our commonsense intuitions wholesale, to see the apparent sufficiency and universality of aboutness as a cognitive illusion pertaining to heuristic neglect, first person or third.

Either way, consciousness, as we intuit it, can at best be viewed as virtual.

The Philosopher and the Cuckoo’s Nest

by rsbakker

Definition of Day – Introspection: A popular method of inserting mental heads up neural asses.


Question: How do you get a philosopher to shut up?

Answer: Pay for your pizza and tell him to get the hell off your porch.

I’ve told this joke at public speaking engagements more times than I can count, and it works: the audience cracks up every single time. It works because it turns on a near universal cultural presumption of philosophical impracticality and cognitive incompetence. This presumption, no matter how much it rankles, is pretty clearly justified. Whitehead’s famous remark that all European philosophy is “a series of footnotes to Plato” is accurate so far as we remain as stumped regarding ourselves as were the ancient Greeks. Twenty-four centuries! Keeping in mind that I happen to be one of those cognitive incompetents, I want to provide a sketch of how we theorists of the soul could have found ourselves in these straits, as well as why the entire philosophical tradition as we know it is almost certainly about to be swept away.

In a New York Times piece entitled “Don’t Blink! The Hazards of Confidence,” Daniel Kahneman writes of his time in the Psychology Branch of the Israeli Army, where he was tasked with evaluating candidates for officer training by observing them in a variety of tests designed to isolate soldiers’ leadership skills. His evaluations, as it turned out, were almost entirely useless. But what surprised him was the way knowing this seemed to have little or no impact on the confidence with which he and his fellows submitted their subsequent evaluations, time and again. He was so struck by the phenomenon that he would go on to study it as the ‘illusion of validity,’ a specific instance of the general role the availability of information seems to play in human cognition–or as he would later term it, What-You-See-Is-All-There-Is, or WYSIATI.

The idea, quite simply, is that because you don’t know what you don’t know, you tend, in many contexts, to think you know all that you need to know. As he puts it in Thinking, Fast and Slow:

An essential design feature of the associative machine is that it represents only activated ideas. Information that is not retrieved (even unconsciously) from memory might as well not exist. [Our automatic cognitive system] excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have. (2011, 85)

As Kahneman shows, this leads to myriad errors in reasoning, including our peculiar tendency in certain contexts to be more certain about our interpretations the less information we have available. The idea is so simple as to be platitudinal: only the information available for cognition can be cognized. Other information, as Kahneman says, “might as well not exist” for the systems involved. Human cognition, it seems, abhors a vacuum.
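The platitude is easy to state, and just as easy to simulate. What follows is a toy of my own construction, not anything Kahneman offers: a ‘reasoner’ whose confidence is simply the coherence of whatever evidence happens to be retrieved, with no term whatsoever for the evidence left unretrieved.

    # A toy illustration of WYSIATI (my construction, not Kahneman's model):
    # confidence tracks only the coherence of retrieved evidence; evidence
    # never retrieved "might as well not exist" for the system.

    def confidence(retrieved):
        """Fraction of retrieved items agreeing with the first item.
        Note there is no penalty whatsoever for how little was retrieved."""
        if not retrieved:
            return 0.0
        return retrieved.count(retrieved[0]) / len(retrieved)

    full_record = ["guilty"] * 2 + ["innocent"] * 8  # all the evidence there is
    retrieved = ["guilty", "guilty"]                 # all that got activated

    print(confidence(retrieved))    # 1.0 -- perfect certainty from two scraps
    print(confidence(full_record))  # 0.2 -- what the full record would support

Two agreeing scraps yield perfect certainty where the full record would yield next to none: less information, more confidence, exactly the inversion Kahneman documents.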

The problem with platitudes, however, is that they are all too often overlooked, even when, as I shall argue in this case, their consequences are spectacularly profound. In the case of informatic availability, one need only look to clinical cases of anosognosia to see the impact of what might be called domain specific informatic neglect, the neuropathological loss of specific forms of information. Given a certain, complex pattern of neural damage, many patients suffering deficits as profound as lateralized paralysis, deafness, even complete blindness, appear to be entirely unaware of the deficit. Perhaps because of the informatic bandwidth of vision, visual anosognosia, or ‘Anton’s Syndrome,’ is generally regarded as the most dramatic instance of the malady. Prigatano (2010) enumerates the essential features of the syndrome as follows:

First, the patient is completely blind secondary to cortical damage in the occipital regions of the brain. Second, these lesions are bilateral. Third, the patient is not only unaware of her blindness; she rejects any objective evidence of her blindness. Fourth, the patient offers plausible, but at times confabulatory responses to explain away any possible evidence of her failure to see (e.g., “The room is dark,” or “I don’t have my glasses, therefore how can I see?”). Fifth, the patient has an apparent lack of concern (or anosodiaphoria) over her neurological condition. (456)

Obviously, the blindness stems from the occlusion of raw visual information. The second-order ‘blindness,’ the patient’s inability to ‘see’ that they cannot see, turns, one might suppose, on the unavailability of information regarding the unavailability of visual information. At some crucial juncture, the information required to process the lack of visual information has gone missing. As Kahneman might say, since our automatic cognitive system is dedicated to the construction of ‘the best possible story’ given only the information it has, the patient confabulates, utterly convinced they can see even though they are quite blind.

Anton’s Syndrome, in other words, can be seen as a neuropathological instance of WYSIATI. And WYSIATI, conversely, can be seen as a non-neuropathological version of anosognosia. What I want to suggest is that philosophers all the way back to the ancient Greeks have in fact suffered from their own version of Anton’s Syndrome–their own, non-neuropathological version of anosognosia. Specifically, I want to argue that philosophers have been systematically deluded into thinking their intuitions regarding the soul in any of its myriad incarnations–mind, consciousness, being-in-the-world, and so on–actually provide a reliable basis for second-order claim-making. The uncanny ease with which one can swap the cognitive situation of the Anton’s patient for that of the philosopher may be no coincidence:

First, the philosopher is introspectively blind secondary to various developmental and structural constraints. Second, the philosopher is not aware of his introspective blindness, and is prone to reject objective evidence of it. Third, the philosopher offers plausible, but at times confabulatory responses to explain away evidence of his inability to introspect. And fourth, the philosopher often exhibits an apparent lack of concern for his less than ideal neurological constitution.

What philosophers call ‘introspection,’ I want to suggest, provides some combination of impoverished information, skewed information, or (what amounts to the same) information matched to cognitive systems other than those employed in deliberative cognition, without–and here’s the crucial twist–providing information to this effect. As a result, what we think we see becomes all there is to be seen, as per WYSIATI. If the informatic and cognitive limits of introspection are not available for introspection (and how could they be?), then introspection will seem, curiously, limitless, no matter how severe the actual limits may be.

Now the stakes of this claim are so far-reaching that I’m sure it will have to seem preposterous to anyone with the slightest sympathy for philosophers and their cognitive plight. Accusing philosophers of suffering introspective anosognosia is basically accusing them of suffering a cognitive disability (as opposed to mere incompetence). So, in the interests of making my claim somewhat more palatable, I will do what philosophers typically do when they get into trouble: offer an analogy.

The lowly cuckoo, I think, provides an effective, if peculiar, way to understand this claim. Cuckoos are ‘obligate brood parasites,’ which is to say, they exclusively lay their eggs in the nests of other birds, relying on them to raise their chick (who generally kills the host bird’s own offspring) to reproductive age. The entire species, in other words, relies on exploiting the cognitive limitations of birds like the reed warbler. They rely on the inability of the unwitting host to discriminate between the cuckoo’s offspring and their own offspring. From a reed warbler’s standpoint, the cuckoo chick just is its own chick. Lacking any ‘chick imposter detection device,’ it simply executes its chick rearing program utterly oblivious to the fact that it is perpetuating another species’ genes. The fact that it does lack such a device should come as no surprise: so long as the relative number of reed warblers thus duped remains small enough, there’s no evolutionary pressure to warrant the development of one.

What I’m basically saying here is that humans lack a corresponding ‘imposter detection device’ when it comes to introspection. There is no doubt that we developed the capacity to introspect to discharge any number of adaptive behaviours. But there is also no doubt that ‘philosophical reflection on the nature of the soul’ was not one of those adaptive behaviours. This means that it is entirely possible that our introspective capacity is capable of discharging its original adaptive function while duping ‘philosophical reflection’ through and through. And this possibility, I hope to show, puts more than a little heat on the traditional philosopher.

‘Metacognition’ refers to our ability to know our knowledge and our skills, or “cognition about cognitive phenomena,” as Flavell puts it. One can imagine that the ability of an organism to model certain details of its own neural functions and thus treat itself as another environmental problem requiring solution would provide any number of evolutionary benefits. It pays to assess and revise our approaches to problems, to ask what it is we’re doing wrong. It likewise pays to ‘watch what we say’ in any number of social contexts. (I’m sure everyone has that one friend or family member who seems to lack any kind of self-censor). It pays to be mindful of our moods. It pays to be mindful of our actions, particularly when trying to learn some new skill.

The issue here isn’t whether we possess the information access or the cognitive resources required to do these things: obviously we do. The question is whether the information and cognitive resources required to discharge these metacognitive functions come remotely close to providing us with what we need to answer theoretical questions regarding mind, consciousness, or being-in-the-world.

This is where the shadow cast by the mere possibility of introspective anosognosia becomes long indeed. Why? Because it demonstrates the utter insufficiency of our intuition of introspective sufficiency. It demonstrates that what we conceptualize as ‘mind’ or ‘consciousness’ or ‘being-in-the-world’ could very well be a ‘theoretical cuckoo,’ even if the information it accesses is ‘warbler enough’ for the type of metacognitive practices described above. Is a theoretically accurate conception of ‘consciousness’ required to assess and revise our approaches to problems, to self-censor, to track or communicate our moods, to learn some new skill?

Not at all. In fact, for all we know, the grossest of distortions will do.

So how might we be able to determine whether the consciousness we think we introspect is a theoretical cuckoo as opposed to a theoretical warbler? Since relying on introspection simply begs the question, we have to turn to indirect evidence. We might consider, for instance, the typical symptoms of insufficient information or cognitive misapplication. Certainly the perennial confusion, conundrum, and intractable debate that characterize traditional philosophical speculation on the soul suggest that something is missing. You have to admit the myriad explananda of philosophical reflection on the soul smack more than a little of Rorschach blots: everybody sees something different–astoundingly so, in some cases. And the few experiential staples that command any reasonable consensus, like intentionality or nowness, continue to resist analysis, let alone naturalization. One need only ask, What would the abject failure of transcendental philosophy look like? A different kind of perennial confusion, conundrum, and intractable debate? Sounds pretty fishy.

In other words, it’s painfully obvious that something has gone wrong. And yet, like the Anton’s patient, the philosopher insists they can still see! “What of the apriori?” they cry. “What of conditions of possibility?” Shrug. A kind of low-dimensional projection, neural interactions minus time and space? But then that’s the point: Who knows?

Meanwhile it seems very clear that something is rotten. The audience’s laughter is too canny to be merely ignorant. If you’re a philosopher, you feel it, I suspect. Somehow, somewhere… something…

But the truly decisive fact is that the spectre of introspective anosognosia need only be plausible to relieve traditional philosophy of its transcendental ambitions. This particular skeptical ‘How do you know?’, unlike those found in the tradition, is not a product of the philosopher’s discursive domain. It’s an empirical question. Like it or not, we have been relegated to the epistemological lobby: Only cognitive neuroscience can tell us whether the soul we think we see is a cuckoo or not.

For better or worse, this happens to be the time we live in. Post-transcendental. The empirical quiet before the posthuman storm.

In retrospect, it will seem obvious. It was only a matter of time before they hung us from hooks with everything else in the packing plant.

Fuck it. The pizza tastes just as good, either way.