Rethinking Jesse Butler’s Rethinking Introspection

by rsbakker

Noocentric Nostalgia

Everyone but everyone claims to be a physicalist, nowadays, which means that everyone but everyone accepts that it’s all mechanisms: that what we call ‘knowledge,’ for instance, boils down to some kind of dynamic, mechanical interrelationship with the environment. Given this, it becomes hard to fathom why knowledge is anything other than a scientific problem–why, in other words, it remains philosophical. If what we call knowledge is nothing more than another natural phenomenon, then we need only wait for science to isolate and explain the mechanisms behind it. This, after all, is what science does.

The problem, however, is twofold: 1) most everyone wants the mechanical details of this picture to somehow vindicate the received view, which is to say, the intentional picture painted by prescientific traditional theoretical speculation and metacognitive intuition; and 2) this intentional picture seems all but impossible to understand in mechanical terms.

The easiest way to solve this problem is to simply abandon the received view as our primary desideratum–to relinquish the siren song of Vindication. And good riddance! Certainly we expect science to confirm what we experience, but why should we expect it to confirm what we intuitively believe, especially knowing, as we do, the informatic penury that necessarily underwrites all our received views? Consider the way Plato likened memory to an aviary, or how Aristotle likened the cosmos to a sphere: given the information and problem-solving resources available, such theoretical characterizations made a good deal of sense. But as the relevant sciences accumulated ever more information, and as the picture revealed became more and more dimensional, these theoretical likenings became more and more obviously parochial. And how could it be otherwise? It simply makes no sense, from a naturalist’s perspective at least, to presume that the sciences will vindicate any set of traditional beliefs.

And yet, despite all the naturalist avowals you encounter in cognitive science and philosophy, one finds the stubborn insistence on Vindication. This, in a nutshell, summarizes my critique of Jesse Butler’s Rethinking Introspection: A Pluralist Approach to the First-Person Perspective. Despite all the received assumptions and claims Butler relinquishes, his project ultimately remains, I think, an exercise in Vindication.

The genius of science, you might say, lies in its long-term institutional indifference to received views. It finds what it finds, and as the information pertaining to a particular domain accumulates, the problems with the corresponding received view as a rule become more and more glaring. The process, however, is slow. This generates opportunities for what might be called ‘theoretical accommodation,’ recharacterizations that concede as little as possible to the science while salvaging as much of the received view as possible. Butler casts Rethinking Introspection in precisely this mould, dispensing with those elements of the received view that are simply no longer tenable given a wide spectrum of empirical findings relevant to introspection, while resisting, at every turn, the eliminative abyss suggested by the overall trend of this research.

Now anywhere else in the natural world, theoretical accommodation of this sort would be obviously suspicious. But not so when it comes to the ‘mind’ in general or ‘introspection’ more narrowly. Why this is the case is something I have considered in detail here in the past. But in lieu of rehearsing this account, I would suggest that at least three unavoidable questions confront any contemporary, philosophical account of introspection:

1) What information is accessed in introspection?

2) What cognitive resources are deployed in introspection?

3) Are (1) and (2) adequate to the kinds of questions we are asking of introspection?

Simply asking these questions, I think, turns the bulk of traditional philosophy on its head. Why? Because throughout history philosophers have implicitly assumed both the sufficiency of the information accessed and the adequacy of the cognitive resources deployed. More and more the sciences of the brain are suggesting they were profoundly mistaken on both counts.

Blind Brain Theory (BBT) constitutes an attempt to answer the first two of these questions using a number of sober empirical assumptions and contemporary scientific evidence. Both the information accessed and the resources deployed, it contends, fall woefully short of the ‘default sufficiency’ assumed by the tradition. It then takes the further step of showing how numerous, longstanding philosophical impasses can be dissolved once interpreted in terms of metacognitive incapacity. Ultimately, it explains away the famous conundrums presented by ‘phenomenality,’ ‘intentionality,’ and the ‘first-person,’ by characterizing them as artifacts of informatic neglect.

When judged in terms of what is actually the case–what our brains happen to be doing–what we call ‘first-person experience’ consists of various cognitive incapacities turning on various resolution and dimensional deficits. So tradition characterized memory as a single, veridical faculty simply because human metacognition, left to its own devices, lacked the information and cognitive resources to characterize it any other way. Before the cognitive revolution, we were stranded with a cartoon conception of memory, a low-dimensional glimpse of what our brains are actually doing. BBT simply generalizes this picture. The so-called ‘first-person,’ it argues, is a concatenation of such cartoons, a series of cognitive illusions and simplifications forced on metacognition by profound constraints on the brain’s ability to solve itself the way it solves its environments. Since this cartoon complex is all the brain has, and since it is anchored in (as yet, largely unknown) actual functions, we have no other recourse (short of the sciences of the brain) but to make do as well as we can, understanding that, like memory, all our traditional conceptualizations will be shown to be low-dimensional parochialisms.

Throughout the course of Rethinking Introspection, Butler wanders to the tantalizing verge of this insight only to retreat into the safety of various received philosophical views time and again. He devotes the first chapters of the book to the demolition of the traditional ‘inner eye’ conception of introspection. Butler realizes that introspection is fractionate, that it is a complex consisting of a number of different cognitive operations, not all of them veridical. The subsequent chapters, accordingly, lay out a bestiary of introspective kinds, speculative considerations of what might be called the ‘introspective cognitive toolbox,’ the wide variety of ways we seem to gain metacognitive purchase on our experiences, thoughts, traits, and activities. No matter what one thinks about any given interpretation he gives of any given component, Butler makes it very difficult to suppose that introspection can be thought of as anything remotely resembling the singular, veridical faculty assumed by the tradition.

And indeed, this commitment to pluralism is where the value of the book lies–what makes it worthwhile reading, I think. Nevertheless, I want to argue that Butler’s account is nowhere near as radical as it needs to be to ‘rethink’ introspection in a forward-looking manner. When interpreted through the lens of BBT, it becomes clear that Rethinking Introspection is actually a recuperative exercise, an attempt to rescue the introspection we want from the introspection the sciences of the brain seem to be revealing…

.

The Inner Wall-Eye

What will science make of introspection?

BBT provides one possibility. Butler’s account offers another. But these theories are just that, theories. Not surprisingly, I think BBT holds far and away more promise, but no matter how compelling the arguments I adduce may seem, it simply remains another speculative bet awaiting empirical arbitration. But there is one thing we can claim with some certainty: All things being equal, we can assume that science will complicate and contradict our traditional and/or intuitive preconceptions. It complicates because it provides more and more information–which is to say, systematic differences making systematic differences. It contradicts because the complication of any given phenomenon inevitably reveals information crucial to understanding what is actually the case. A signature theoretical virtue of BBT, by the way, lies in its ability to explain why this is so. It can explain why we find our traditional and/or intuitive assumptions so convincing, no matter how wildly wrong they may be (via what might be called ignorance-anchored certainty effects), and it can explain why the accumulation of scientific information inevitably ‘disenchants’ these assumptions (via the provision of the very information our traditional and/or intuitive assumptions are adapted to function without). But even if one is inclined to reject these explanations, the basic observation they turn on remains: All our assumptions depend on some combination of the information and the cognitive resources available. Thus the importance of the three questions above.

Everything in our metacognitive canon, all the conceptual verities that philosophers have relied upon for millennia, now stands perched on an informatic continuum–or abyss as the case might be. The primary and most pervasive problem afflicting Rethinking Introspection lies in its failure to systematically consider the implications of this platitudinal insight.

Nowhere is this failure more evident than in Butler’s critique of ‘perceptual accounts’ of introspection, the famous, traditional understanding of introspection as some kind of ‘inner eye.’ In physiological terms, he argues, no one has ever discovered any organ of inner sense. In functional terms, he argues that introspection, unlike perception more generally, operates recursively. In phenomenological terms, he primarily argues that the mind offers no objects to be perceived.  And lastly, in evolutionary terms, he argues that the development of some kind of inner eye simply makes no evolutionary sense. Ultimately he concludes that the inner eye posited by tradition is simply a cognitive convenience, a useful but problematic metaphoric extension of our environmentally oriented understanding.

Now, even though I largely agree with his conclusion, I fail to see how any one of these arguments is supposed to work, especially given the way he ultimately pushes his account. As we shall see, not only does his own account lack any empirically confirmed physiological basis, it’s actually difficult to understand how introspection as he conceives it could be accomplished by any mechanism whatsoever. Moreover, he seems to forget that the whole point of positing ‘scanning mechanisms’ and the like is to streamline the scientific process, to give those who do the actual research some idea of what to look for. In this sense, he’s doing little more than accusing speculative accounts (like his own) of speculation.

A similar problem haunts his functional disanalogy argument, the much ado he makes over the fact that introspection is recursive whereas environmental perception is not. One cannot ‘see seeing’ or ‘hear hearing’ the way one can scrutinize scrutiny, or think thought. This basically boils down to the argument that introspection cannot be inner perception simply because it is, well, inner. But he never makes clear why this implies anything more than the fact that introspection, like other forms of perception, involves tracking a particular species of natural event–namely, the tracking itself. If introspection were a kind of perception this is the only kind of perception it could be. Moreover, why should recursion disqualify introspection as perception, especially given the imprecision of Butler’s definition of perception, which is loose enough that any secondary mechanism engaged in metacognition might seem to count as ‘perceptual’?

All of this, of course, raises the question of just what does the tracking, if not some kind of mechanism, something I will return to in due course.

His phenomenological critique fares no better. Here, he opts for another argument from disanalogy: Introspection cannot be a kind of ‘inner perception’ simply because its objects in no way resemble the objects of environmental perception. The relevant passage is worth quoting in full:

If the supposed internal perceptual faculty perceives brain states, then these brain states must be occluded or ‘scrambled’ in some way or other, as they do not appear to us in introspection as brain states… Brain states are incredibly complicated electro-chemical events among virtually innumerable neural networks encased inside one’s skull. However, this is definitely not what we perceive, if we perceive anything at all, through introspection. The thought ‘I am thinking,’ for instance, does not appear in experience as a particular neural event, or even as any discernible physical thing at all, as Descartes noted and made (too) much of several centuries ago. So, if we perceive a brain state when we are aware of having such a thought, then that brain state must be filtered through some sort of process that transforms it into something that appears quite different, to such an extent it is unrecognizable as such. Otherwise, philosophers would not have spilled so much ink over the mind/body problem all these years, and Descartes himself could have readily identified mental states with brain states. So, if we have the capacity to perceive brain states, it must be through some mechanism that alters their appearance so radically that they do not seem like brain processes at all.

The idea of a perceptual brain scrambler might fly in a Philip K. Dick novel, but not as a literal account of introspection. (22)

On BBT, of course, this represents a text-book case of the ‘Accomplishment Fallacy,’ the assumption that any identifiable feature of our phenomenology must possess some kind of neural correlate, some mechanism that ‘brings it about.’ So where Butler (following Lyons) posits the necessity of some kind of ‘scrambler,’ BBT simply posits the loss of information. A good deal of our phenomenology, it asserts, is a kind of ‘flicker fusion,’ the product of default identifications made in the absence of the information required to make accurate discriminations. The difference between the mental and the environmental no more requires a special mechanism than the difference between geocentrism and heliocentrism requires some “planetary immobilization device.” In both cases, the relevant cognitive systems simply lack the information required for accuracy. And as I have argued in detail elsewhere, this is precisely what we should expect, given the way complexity and structural complicity confound the brain’s ability to cognize its own functions. Metacognition necessarily neglects far more information than does environmental cognition. It relies on effective shortcuts, heuristics keyed to exploit various information structures in the organism’s environment–which, one must remember, happens to include its own brain. Since the information neglected is neglected in the full sense of the word, no discontinuities appear (save indirectly, at those junctures attended by perennial controversy), and so we assume that no information is missing. Thus the perennial illusion of ‘sufficiency,’ why it is we are so prone to assume introspective infallibility, or ‘self-transparency’ as Carruthers calls it in The Opacity of Mind.

Here we clearly see Butler’s failure to consider questions (1), (2), and (3). The mechanisms discovered by neuroscience–or ‘brain states’ as he calls them–are discoveries of what is the case (or failing that, the level at which understanding means effective manipulation), the natural basis of our every thought and action. Given that Butler’s stated aim is to elucidate the epistemic statuses of our various introspective modalities, one might assume that the findings of cognitive neuroscience would provide him with the very yardstick he needs to assess the accuracy of any given modality. But such is not the case. Despite all the qualifications he uses to inoculate his use of ‘mind’ and the ‘mental,’ he nevertheless proceeds under the traditional assumption that they indeed exist, that they comprise a functionally distinct ‘level of description’ and so provide him with the very baseline or yardstick he needs to make his assessments. And this saddles him with the dilemma that dogs nearly every page of this book: the continual need to fix and hedge his yardsticks. Time and again you find him acknowledging the controversies pertaining to this or that mentalistic concept (including the concept of ‘concept’ itself), and trying to stake out some kind of neutral or maximally inoffensive interpretative ground. Time and again, in other words, he is forced to philosophically argue his baseline.

By turning his back on the yardsticks afforded by science, it seems he is forced to evaluate the epistemic status of the various introspective modalities he considers using yardsticks largely provided by–you guessed it–introspection. So where BBT parsimoniously theorizes metacognition in terms continuous with cognition more generally, conceiving it simply as the brain’s neuromechanistic attempt to cognize its neuromechanistic complexities in drastically simplified and therefore computationally tractable and domain specific ways, Butler theorizes metacognition–at root, at least–as something different from neuromechanistic cognition entirely. The objects of metacognition–the yardsticks Butler needs–are nothing other than the ‘primitive’ what-is-it-likeness of phenomenality and the functional abstractions revealed by deliberative theoretical reflection. What allows him to assess the epistemic status of introspection, in other words, clearly seems to be introspection itself. Where else would we access non-neuroscientific information pertaining to experience and the mind?

But it is his evolutionary argument against the perceptual interpretation of introspection that is arguably the most baffling. Arguing that “[t]here appears to be no identifiable functional/adaptive process that serves the purpose of perceiving one’s own mental states,” he suggests that our introspective capacities “are by-products (i.e., spandrels) of other adaptive processes that make them possible” (32). Introspecting mental states serves no adaptive purpose, he claims, because the mental state observed itself somehow monopolizes any adaptive benefit to be had. As he writes:

Knowing that I am a cooperative person, for instance, would not add anything beneficial to my interpersonal interaction. Any benefit would already be conferred by my actual cooperativeness as I engage with others in the world, regardless of whether I accurately represent that feature to myself.

Similar reasoning could apply to other types of mental states, such as beliefs, desires, and pains. (33)

I have to admit, this argument strikes me as so bad as to be mystifying. Certainly, not all cooperation is equal. Certainly some individuals are too cooperative, while others are not cooperative enough. Certainly the ability to introspect cooperativeness would have allowed our ancestors to make refinements that could potentially affect reproductive success. And certainly ‘similar reasoning applies’ to beliefs, desires, or even pains. Status-imperiling beliefs can be modified. Illicit desires can be identified and suppressed before being expressed. And the ability to self-identify different kinds of pain can facilitate recovery. And yet, Butler concludes:

So if there is an identifiable adaptive benefit here concerning knowledge of minds, it is in regard to our understanding of others, and not ourselves. In other words, the evolutionary pressures for perceptual and cognitive adaptations are geared toward an ability to represent and think about things in one’s external environment. (33)

I quote this not simply to underscore the degree to which Butler runs afoul of what Dennett calls the ‘Philosopher’s Syndrome,’ the tendency to mistake a failure of imagination for necessity, but also to highlight the degree to which he mischaracterizes the very phenomena he is attempting to explicate. Consider, just for instance, Robert Trivers’s ‘cognitive load thesis’ regarding self-deception, the claim that “[w]e hide reality from our conscious minds the better to hide it from onlookers” (The Folly of Fools, 9). One need not buy into Trivers’s account (which makes self-transparency a default that evolution selected against, when it is far more likely that the estimable computational challenges pertaining to introspectively cognizing that ‘reality’ simply dovetailed with evolutionary pressure in this case) to see that “understanding others,” as Butler puts it, quite literally means understanding ourselves as well. Not only do our brains belong to our environment, they are, from an evolutionary perspective, the single most important component–one that is every bit as opaque as the brains of others. Solving problems requires information. Our brains (which can be seen as mechanisms that transform environmental risk into onboard complexity) constitute a vast store of empirical information. There are, as a matter of brute principle, an infinite number of problematic circumstances that can only be solved via access to that information. Another way of putting this is to say that there is literally no ‘out there’ distinct from some ‘in here’ when it comes to evolution, only information that may or may not enhance an organism’s fitness.

What Butler simply assumes must be an essential ‘self-other’ boundary, BBT explicates as a contingent result of various constraints on neuromechanical problem-solving. It is the case, as Butler contends, that human cognition is primarily ‘externally directed.’ Likewise, it is the case that metacognition is an evolutionary late-comer. But this has everything to do with neurophysiological constraints on information processing and nothing to do with any enigmatic or essential difference between ‘self’ and ‘other.’ It just so happens that the neural complexity required to incorporate external environmental items into effective sensorimotor loops makes the incorporation of that selfsame neural complexity into further sensorimotor loops computationally prohibitive. Trouble-shooting our external environments requires brains too complicated to likewise trouble-shoot, plain and simple. On the evolutionary scenario suggested by BBT, it was the evolutionary pressure pertaining to mindreading and collective coordination–the complexities of human social fitness–that gave our brains the computational wherewithal to make problem-solving requiring internal environmental information feasible. Once this window of adaptive potential opened up, our metacognitive toolbox became more and more crowded.

As I mentioned above, I actually agree with Butler that ‘perception’ is a metaphoric malapropism, a problematic way to understand the metacognitive toolbox constituting introspection. But where I see perception as an information-access wrinkle in a larger natural account of cognition, he seems to think it can be understood in isolation. In Bayesian models of neural function, for instance, perception is scarcely distinguishable from conception. It’s mediation all the way down. Butler, however, needs perception to be something different, something possessing the low resolution of the modern tradition. Thus the peculiar, opportunistic ambiguity in his usage of the term, the way he trades between the bad ‘perceptual introspection’ and the good ‘introspective capacities’ with nary an explanation of the distinction. One might ask, for instance, why any kind of ‘internal brain scanner’ necessarily counts as ‘perceptual.’ Is it because the function of the scanner is to access information otherwise not available for cognition? If so, then this means the bulk of the metacognitive tools that Butler posits are ‘perceptual’ in nature. The information, after all, has to be accessed somehow, whether referencing our affects or our beliefs.

.

The Enchanted First-Person

I say the ‘bulk’ of his metacognitive tools because his entire account is in fact raised upon what he considers a fundamental exception to the way the brain typically cognizes information: the phenomenality or what-it-is-likeness of experience. His vague usages of perception, as well as his problematic physiological, functional, phenomenological, and evolutionary arguments, are all motivated by his primary desideratum: an understanding of introspection, in its most primitive form, as a kind of ontological cognition, a knowledge possessed in virtue of being a given experience at a given time. As he writes:

I am willing to grant Nagel and Jackson the point that, given our current understanding of physical reality, it is indeed puzzling how conscious experiences can come about through physical processes. However, it is just as likely (if not more likely) that this puzzlement is due to problems in our understanding of physicality as it is that consciousness is a non-physical event. Conscious experiences in themselves, however mysterious they may seem, simply do not preclude the possibility that they are physical events. Perhaps that is just what physical reality is like, when known from the unique perspective of being a particular kind of physical event. (60)

The question, obviously, is one of just what this ‘unique perspective’ is. And indeed, this is the very question Butler takes himself to be answering. BBT, for its part, explains the apparent incompatibilities between the natural and the experiential, and thereby demystifies consciousness-as-it-appears in terms of the kinds of information privation and metacognitive error one might expect given the kind of ‘unique perspective’ the human brain has on itself. The ‘puzzling’ features of experience that render the ‘supernaturalization’ of conscious experience so seductive turn out, on BBT, to be the very features we might expect, given the notorious ‘curse of dimensionality’ and the evolutionary imperative to economize metabolically expensive neurocomputations. Everything is empirical on BBT, given that the scientific cognition of the natural provides the greatest informatic dimensionality. “The unique perspective of being a particular kind of physical event,” in other words, amounts to a limited view on some higher dimensional scientific picture. Thus the ‘blindness’ of the ‘blind brain.’

Butler, however, has something quite different in mind. On his account, “the unique perspective of being a particular kind of physical event” does not lie on the same informatic continuum as the scientific perspective on those physical events. Despite his naturalism, the perspective is not any ‘perspective on’ anything natural in any straightforward sense. He is convinced, rather, that conscious experience constitutes a ‘special’ domain of knowledge, one that is fundamentally different in kind from scientific knowledge, namely, knowledge of what it is like to experience x, or what he calls the ‘existential constitution model of knowledge.’

He defines this special knowledge by distinguishing it from the three primary philosophical approaches to the question of knowledge and phenomenality: the standard propositional account, the ability account, and the acquaintance account. He does a fair job of explaining why each of these approaches fails to deliver on phenomenal knowledge, why our knowledge of what x is like constitutes a distinctive brand of ‘special knowledge.’ But he has an enormous problem: he has no way of explaining this knowledge in the common idiom of the brain, which is to say, in terms of neuromechanistic information processing. The problem, in other words, is that he never actually poses questions (1), (2), and (3). He never asks what, physiologically speaking, something like the existential constitution model of knowledge would require.

Thus his miniature ‘via negativa’: Butler needs to argue what his account of existential knowledge is not because he has no plausible way to argue what it is. He makes gestures toward aligning his account with the existential and phenomenological traditions in continental philosophy, as well as with more recent work on ‘embodied cognition’ in philosophy of mind, but he adduces nothing more than the common recognition of “the primacy of our subjective experience as embodied creatures in the world…” (65). In his consideration of various possible objections to his account he adverts to the fact that we regularly refer to ‘knowledge of our experiences’ in everyday life. This is a powerful consideration to be sure, but one that begs explanation far more than it evidences his account. In fact, aside from continually appealing to the tautological assumption that some kind of knowledge has to be involved in knowing experience, he really offers nothing in the way of positive, naturalistic characterizations of his model–that is, until he turns to Bermudez and the notion of ‘self-specifying content,’ the way an organism’s perception and proprioception bear tacit information about the organism itself: the way, for instance, seeing a portion of a ball around a corner implicitly means you are standing around the corner from a ball.

To be clear, I am not concerned with the informational content of these states here. Instead, the key point is that the informational content is self-specifying in nature and that phenomenal states themselves have a similar self-specifying nature that results from being embodied and situated in the world. The experience itself provides immediate and intimate knowledge about the experiencing agent to that same agent, in a direct non-dichotomous and non-mediated manner. By its very nature, such a phenomenal state confers self-understanding in the most primitive manner possible to an experiencing subject. (69)

The problem with this apparent elaboration of his account, however, is that ‘self-specifying content’ in no way requires conscious experience. In fact, all of any complex organism’s systematic environmental interactions require ‘self-specifying’ perceptual and proprioceptive ‘content,’ insofar as the organism needs to ‘know’ its position and capabilities to do anything at all. This is simply a boilerplate assumption of the embodied cognition/ecological psychology crowd. And of course, very few of these organisms know anything, at least not in any nontendentious sense of the word ‘know.’ They just happen to ‘be’ these organisms. If this is what Butler means by “self-understanding in the most primitive manner possible” then he is plainly not talking about ‘understanding’ at all.

In fact, it becomes very difficult to understand precisely what he is talking about. On the one hand we have knowledge as intentionally understood–the very kind of relational knowledge that Butler’s account seeks to disqualify. On the other hand, we have the famous ‘triviality’ of mechanistic cognition, the way all life, as the product of evolution, represents solutions to various problems–the sense in which biology, in other words, is ‘cognitive all the way down.’ In this trivial or wide sense of cognition, conscious experience is of course cognitive in some respect. What else could it be?

If mechanistic or ‘wide cognition’ is indeed what underwrites Butler’s case, then the ‘in some respect’ is what becomes relevant to inquiry. To simply say that this ‘some respect’ is ‘phenomenal’ or ‘existential’ does nothing but confound the mystery.

But then, this is just what the question has been all along: If phenomenal experience is cognitive, then what kind of cognition is it, and why the hell does it baffle us so? The most Butler can do, it seems, is provide us with an account of what kind of cognition conscious experience is not. Aside from eliminating propositional, ability, and acquaintance accounts, his existential constitution model really doesn’t provide any kind of answer at all, let alone one that suggests future avenues of research. And the reason for this, I think, lies in his failure to pose, let alone address, our questions of information access and cognitive resources. What, neuromechanistically speaking, would something like the existential constitution model of knowledge require? What kind of information access and what kind of cognitive resources does the human brain need to ‘know what an experience is like’?

I think this question plainly reveals the spookiness of Butler’s account. Why? If he claims an experience is cognitive in the trivial or biomechanical sense, then he’s telling us nothing about the very ‘in some respect’ at issue. If ‘knowing what experience x is like’ involves some kind of spontaneous ‘cognition ex nihilo,’ then he owes us some kind of story: By virtue of what is experience x cognitive in your spontaneous first-personal sense? Otherwise he has simply found a clever way of gaming the problem into something that merely sounds like a solution. (He explicitly defines introspection as, “the process of seeking and/or acquiring knowledge of one’s own mind, from one’s own subjective first-person standpoint” (46, italics my own)). If phenomenal states ‘by their very nature confer self-understanding in the most primitive manner possible,’ as he claims, then just what is that ‘nature?’ If simply ‘being a first-person’ is sufficient for ‘first-person knowledge,’ Butler only has a workable, natural account of first person knowledge–knowledge of what an experience is like–to the extent that he has a workable, natural account of the first-person. And not surprisingly, he has none.

Call this the ‘metacognitive baseline problem.’ There is no way to gauge the epistemic virtues of our metacognitive toolbox short of some kind of yardstick, some reliable way of judging the reliability of a given introspective capacity. The irony is that Butler is actually very concerned with the question of cognitive resources (2). His existential constitution model of introspective knowledge is meant to account for what might be thought of as an ‘introspective baseline,’ the basis upon which various other kinds of ‘higher-level’ introspection are based. “The central idea,” as he writes, “is that we engage in higher-level introspection by utilizing the mind’s cognitive capacities to represent and think about our own minds” (75). Accordingly, he devotes the rest of the book to considerations of what might be called the ‘introspective cognitive toolbox.’ But once again, since it all amounts to introspection boot-strapping introspection – using interpretations of ‘mind’ to anchor estimations of our ability to interpret the mind – I just don’t understand how it’s supposed to work.

BBT takes the brain as described by science as its yardstick for ‘introspective accuracy,’ the degree to which the brain does or does not get its own activities right. To this extent, it argues that introspection (and the philosophical tradition raised upon it) is plagued by a number of profound cognitive illusions pertaining to information privation and heuristic misapplication. The complexity the brain requires to accurately and comprehensively track its external environments is such that it cannot accurately and comprehensively track its internal environment. The brain can, at best, efficaciously track itself, which is to say, cognize limited amounts of information keyed to very specific problems. Perhaps this information can be efficaciously applied ‘out of school,’ perhaps not. (No doubt, spandrels abound in metacognition). Either way, this information cannot provide the basis for an accurate and comprehensive account of anything. And this quite simply means there is no such thing as ‘mind.’ There is only the brain, splintered and occluded by the heuristics populating our metacognitive toolbox, a hodgepodge of specific capacities adapted to a hodgepodge of specific problem ecologies, which theoretical reflection, utterly blind to its myopia, fuses and confounds and reifies into the ‘mind.’

With his ontological account, Butler essentially offers us his own, localized version of Descartes’ cogito, one taking experience as a self-evident foundation. In a sense, his ‘first-person experience’ constitutes the self-interpreting rule, or transcendental signified, or whatever tradition-specific term you want to apply to such Munchausenesque formulations.

In contrast, the signature theoretical virtue of BBT lies in its ability to account for the apparent structure of this first-personal sense in biomechanically continuous terms. It can’t tell us what consciousness is, but it can offer a parsimonious and fairly comprehensive account of why it appears the way it does, and why we find it so baffling as a result. Briefly, it diagnoses the more puzzling aspects of the first-person in terms of various forms of neglect, informatic lacunae that are invisible as such, resulting in a series of what might be called ‘identity illusions,’ which in turn form the basis of our intuitions regarding the first-person. Since these are, ultimately, kinds of cognitive illusion, they resist explanation in natural terms and lack any fact of the matter to arbitrate between interpretations, thus generating endless grist for what we call philosophy. In essence, it explains apparently fundamental structural features of the first-person such as the now and intentionality in terms of a kind of ‘ontological ignorance,’ the trivial fact that information that is not broadcast or integrated into consciousness does not exist for conscious cognition. You could say that it explains the apparent structure of consciousness by turning it upside down.

Since BBT effectively explains away the first-person in the course of accounting for it, there really is no need to posit any spooky knowledge specific to it. On BBT, there is no ‘first-person knowledge’ so much as there is proximate, low-dimensional (and so highly heuristic) cognition of various brain activities (self and other), and there is distal, high-dimensional cognition of everything else. The apparent peculiarities of the first-person are the product of a variety of severe heuristic ‘compromises,’ particularly those involving structurally occluded dimensions of information. Many of its perplexing structural aspects, its nowness or aboutness, for example, it explains away as metacognitive artifacts of medial neglect. The famous problems pertaining to ‘what-is-it-likeness’ are likewise resolved by considering varieties of ‘brain blindness.’ The mystery of consciousness remains, of course, only relieved of the numerous conceptual confounds that presently render it so intractable as an explanandum. The so-called Hard Problem becomes a bad dream.

For Butler, I suspect, this approach simply has to amount to throwing the baby out with the bathwater. I can only shrug, offer that the baby was never really ‘there’ anyway, commiserate because, yeah, it really, really sucks, then challenge him to conjure his baby without simply compounding his reliance on magic. BBT, at least, can explain what it is the metacognizing brain is doing in terms continuous with what neuroscience has hitherto learned. With BBT the assumption that consciousness is some explicable natural phenomenon remains, but as an inferentially inert posit. No empirical longshots are required to explain the general cognitive situation of introspection.