Three Pound Brain

No bells, just whistling in the dark…

Month: September, 2013

Cognition Obscura

by rsbakker

The Amazing Complicating Grain

On July 4th, 1054, Chinese astronomers noticed the appearance of a ‘guest star’ in the proximity of Zeta Tauri lasting for nearly two years before becoming too faint to be detected by the naked eye. The Chaco Canyon Anasazi also witnessed the event, leaving behind this famous petroglyph:


Centuries would pass before John Bevis would rediscover it in 1731, as would Charles Messier in 1758, who initially confused it with Halley’s Comet, and decided to begin cataloguing ‘cloudy’ celestial objects–or ‘nebulae’–to help astronomers avoid his mistake. In 1844, William Parsons, the Earl of Rosse, made the following drawing of the guest star become comet become cloudy celestial object:


It was on the basis of this diagram that he gave what has since become the most studied extra-solar object in astronomical history its contemporary name: the ‘Crab Nebula.’ When he revisited the object with his 72-inch reflector telescope in 1848, however, he saw something quite different:

william-parsons-crab-nebula-2

In 1921, John Charles Duncan was able to discern the expansion of the Crab Nebula using the revolutionary capacity of the Mount Wilson Observatory to produce images like this:


And nowadays, of course, we are regularly dazzled not only by photographs like this:


produced by Hubble, but those produced by a gallery of other observational platforms as well:

600px-800crab

The tremendous amount of information produced has provided astronomers with an incredibly detailed understanding of supernovae and nebula formation.

What I find so interesting about this progression lies in what might be called the ‘amazing complicating grain.’ What do I mean by this? Well, there’s the myriad ways the accumulation of data feeds theory formation, of course, how scientific models tend to become progressively more accurate as the kinds and quantities of information accessed increase. But what I’m primarily interested in is what happens when you turn this structure upside down, when you look at the Chinese ‘guest star’ or Anasazi petroglyph against the baseline of what we presently know. What assumptions were made and why? How were those assumptions overthrown? Why were those assumptions almost certain to be wrong?

Why, for instance, did the Chinese assume that SN1054 was simply another star, notable only for its ‘guest-like’ transience? I’m sure a good number of people might think this is a genuinely stupid question: the imperialistic nature of our preconceptions seems to go without saying. The medieval Chinese thought SN1054 was another star rather than a supernova simply because points of light in the sky, stars, were pretty much all they knew. The old provides our only means of understanding the new. This is arguably why Messier first assumed the Crab Nebula was another comet in 1758: it was only when he obtained information distinguishing it (the lack of visible motion) from comets that he realized he was looking at something else, a cloudy celestial object.

But if you think about it, these ‘identification effects’–the way the absence of systematic differences making systematic differences (or information) underwrites assumptions of ‘default identity’–are profoundly mysterious. Our cosmological understanding has been nothing if not a process of continual systematic differentiation, of ever-increasing resolution in the polydimensional sense of the natural. In a peculiar sense, our ignorance is our fundamental medium, the ‘stuff’ from which the distinctions pertaining to actual cognition are hewn.


The Superunknown

Another way to look at this transformation of detail and understanding is in terms of ‘unknown unknowns,’ or as I’ll refer to it here, the ‘superunknown’ (cue crashing guitars). The Hubble image and the Anasazi petroglyph not only provide drastically different quantities of information organized in drastically different ways, they anchor what might be called drastically different information ecologies. One might say that they are cognitive ‘tools,’ meaningful to the extent they organize interests and practices, which is to say, possess normative consequences. Or one might say they are ‘representations,’ meaningful insofar as they ‘correspond’ to what is the case. The perspective I want to take here, however, is natural, that of physical systems interacting with physical systems. On this perspective, information our brain cannot access makes no difference to cognition. All the information we presently possess regarding supernova and nebula formation simply was not accessible to the ancient Anasazi or Chinese. As a result, it simply could not impact their attempts to cognize SN-1054. More importantly, not only did they lack access to this information, they also lacked access to any information regarding this lack of information. Their understanding was their only understanding, hedged with portent and mystery, certainly, but sufficient for their practices nonetheless.

The bulk of SN-1054 as we know it, in other words, was superunknown to our ancestors. And just as the spark-plugs in your garage make no difference to the operation of your car, that information made no cognizable difference to the way they cognized the skies. Here we see the power–if it can be called such–exercised by the invisibility of ignorance. Who hasn’t read ancient myths or even contemporary religious claims and wondered how anyone could possibly have believed such ‘nonsense’? The answer is quite simple: they lacked the information and/or capacity required to cognize that nonsense as nonsense. They left the spark-plugs in the garage.

Thus the explanatory ubiquity of ‘They didn’t know any better.’ We seem to implicitly understand, if not the tropistic or mechanistic nature of cognition, then at least the ironclad correlation between information availability and cognition. This is one of the cornerstones of what is called ‘mindreading,’ our ability to predict, explain, and manipulate our fellows. And this is how the superunknown, information that makes no cognizable difference, can be said to ‘make a difference’ after all–and a profound one at that. The car won’t run, we say, because the spark-plugs are in the garage. Likewise, medieval Chinese astronomers, we assume, believed SN-1054 was a novel star because telescopes, among other things, were in the future. In other words, making no difference makes a difference to the functioning of complex systems attuned to those differences.

This is the implicit foundational moral of Plato’s Allegory of the Cave: How can shadows come to seem real? Well, simply occlude any information telling you otherwise. Next to nothing, in other words, can strike us as everything there is, short of access to anything more–such as information pertaining to the possibility that there is something more. And this, I’m arguing, is the best way of looking at human metacognition at any given point in time, as a collection of prisoners chained inside the cave of our skull assuming they see everything there is to see for the simple want of information–that the answer lies in here somehow! On the one hand we have the question of just what neural processing gets ‘lit up’ in conscious experience (say, via information integration or EMF effects) given that an astronomical proportion of it remains ‘dark.’ What are the contingencies underwriting what accesses what for what function? How heuristically constrained are those processes? On the other hand we have the problem of metacognition, the question of the information and cognitive resources available for theoretical reflection on the so-called ‘first-person.’ And, once again, what are the contingencies underwriting what accesses what for what function? How heuristically constrained are those processes?

The longer one mulls these questions, the more the concepts of traditional philosophy of mind come to resemble Anasazi petroglyphs–which is to say, an enterprise requiring the superunknown. Placed on this continuum of availability, the assumption that introspection, despite all the constraints it faces, gets enough of the information it needs to at least roughly cognize mind and consciousness as they are becomes at best a claim crying out for justification, and at worst wildly implausible. To say philosophy lacks the information and/or cognitive resources it requires to resolve its debates is a platitude, one so worn as not to chafe any contemplative skin whatsoever. No enemy is safer or more convenient than an old enemy, and skepticism is as ancient as philosophy itself. But to say that science is showing that metacognition lacks the information and/or cognitive resources philosophy requires to resolve its debates is to say something quite a bit more prickly.

Cognitive science is revealing the superunknown of the soul as surely as astronomy and physics are revealing the superunknown of the sky. Whether we move inward or outward the process is pretty much the same, as we should suspect, given that the soul is simply more nature. The tie-dye knot of conscious experience has been drawn from the pot and is slowly being unravelled, and we’re only now discovering the fragmentary, arbitrary, even ornamental nature of what we once counted as our most powerful and obvious ‘intuitions.’

This is how the Blind Brain Theory treats the puzzles of the first-person: as artifacts of illusion and neglect. The informatic and heuristic resources available for cognition at any given moment constrain what can be cognized. We attribute subjectivity to ourselves as well as to others, not because we actually have subjectivity, but because it’s the best we can manage given the fragmentary information we’ve got. Just as the medieval Chinese and Anasazi were prisoners of their technical limitations, you and I are captives of our metacognitive neural limitations.

As straightforward as this might sound, however, it turns out to be far more difficult to conceptualize in the first-person than in astronomy. Where the incorporation of the astronomical superunknown into our understanding of SN-1054 seems relatively intuitive, the incorporation of the neural superunknown into our understanding of ourselves threatens to confound intelligibility altogether. So why the difference?

The answer lies in the relation between the information and the cognitive resources we have available. In the case of SN-1054, the information provided happens to be the very information that our cognitive systems have evolved to decipher, namely, environmental information. The information provided by Hubble, for instance, is continuous with the information our brain generally uses to mechanically navigate and exploit our environments—more of the same. In the case of the first-person, however, the information accessed in metacognition falls drastically short of what our cognitive systems require to conceive us in an environmentally continuous manner. And indeed, given the constraints pertaining to metacognition, the inefficiencies pertaining to evolutionary youth, the sheer complexity of its object, not to mention its structural complicity with its object, this is precisely what we should expect: selective blindness to whole dimensions of information.

So one might visualize the difference between the Anasazi and our contemporary astronomical understanding of SN-1054 as progressive turns of a screw:

Partial and Full Spiral (1)

where the contemporary understanding can be seen as adding more and more information, ‘twists,’ to the same set of dimensions. The difference between our intuitive and our contemporary neuroscientific understanding of ourselves, on the other hand, is more like:

Circle and Spiral

where our screw is viewed on end instead of from the side, occluding the dimensions constitutive of the screw. The ‘O’ is actually a screw, obviously so, but for the simple want of information appears to be something radically different, something self-continuous and curiously flat, completely lacking empirical depth. Since these dimensions remain superunknown, they quite simply make no metacognitive difference. In the same way additional environmental information generally complicates prior experiential and cognitive unities, the absence of information can be seen as simplifying experiential unities. Paint can become swarming ants, and swarming ants can look like paint. The primary difference with the first-person, once again, is that the experiential simplification you experience, say, watching a movie scene fade to white is ‘bent’ across entire dimensions of missing information—as is the case with our ‘O’ and our screw. The empirical depth of the latter is folded into the flat continuity of the former. On this line of interpretation, the first-person is best understood as a metacognitive version of the ‘flicker fusion’ effect in psychophysics, or the way sleep can consign an entire plane flight to oblivion. You might say that neglect is the sleep of identity.

As the only game in information town, the ‘O’ intuitively strikes us as essentially what we are, rather than a perspectival artifact of information scarcity and heuristic inapplicability. And since this ‘O’ seems to frame the possibility of the screw, things are apt to become more confusing still, with proponents of ‘O’-ism claiming the ontological priority of an impoverished cognitive perspective over ‘screwism’ and its embarrassment of informatic riches, and with proponents of screwism claiming the reverse, but lacking any means to forcefully extend and demonstrate their counterintuitive positions.

One can analogically visualize this competition of framing intuitions as the difference between,

Spiral in Circle

where the natural screw takes the ‘O’ as its condition, and,

Circle in Spiral

where the ‘O’ takes the natural screw as its condition, with the caveat that one understands the margins of the ‘O’ asymptotically, which is to say, as superunknown. A better visual analogy lies in the margins of your present visual field, which is somehow bounded without possessing any visible boundary. Since the limits of conscious cognition always outrun the possibility of conscious cognition, conscious cognition, or ‘thought,’ seems to hang ‘nowhere,’ or at the very least ‘beyond’ the empirical, rendering the notion of ‘transcendental constraint’ an easy-to-intuit metacognitive move.

In this way, one might diagnose the constitutive transcendental as a metacognitive artifact of neglect. A symptom of brain blindness.

This is essentially the ambit of the Blind Brain Theory: to explain the incompatibility of the intentional with the natural in terms of what information we should expect to be available to metacognition. Insofar as the whole of traditional philosophy turns on ‘reflection,’ BBT amounts to a wholesale reconceptualization of the philosophical tradition as well. It is, without any doubt, the most radical parade of possibilities to ever trammel my imagination—a truly post-intentional philosophy—and I feel as though I have just begun to chart the troubling extent of its implicature. The motive of this piece is to simply convey the gestalt of this undiscovered country with enough sideways clarity to convince a few daring souls to drag out their theoretical canoes.

To summarize then: Taking the mechanistic paradigm of the life sciences as our baseline for ontological accuracy (and what else would we take?), the mental can be reinterpreted in terms of various kinds of dimensional loss. What follows is a list of some of these peculiarities and a provisional sketch of their corresponding ‘blind brain’ explanation. I view each of these theoretical vignettes as nothing more than an inaugural attempt, pixelated petroglyphs that are bound to be complicated and refined should the above hunches find empirical confirmation. If you find yourself reading with a squint, I ask only that you ponder the extraordinary fact that all these puzzling phenomena are characterized by missing information. Given the relation between information availability and cognitive reliability, is it simply a coincidence that we find them so difficult to understand? I’ll attempt to provide ways to visualize these sketches to facilitate understanding where I can, keeping in mind the way diagrams both elide and add dimensions.


Concept/Intuition – Kind of Informatic Loss/Incapacity

Nowness – Insufficient temporal information regarding the time of information processing is integrated into conscious awareness. Metacognition, therefore, cannot make second-order before-and-after distinctions (or, put differently, is ‘laterally insensitive’ to the ‘time of timing’), leading to the faulty assumption of second-order temporal identity, and hence the ‘paradox of the now’ so famously described by Aristotle and Augustine.

Circle and Spiral - Now and Environmental Time

So again, metacognitive neglect means our brains simply cannot track the time of their own operations the way they can track the time of the environment that systematically engages them. Since the absence of information is the absence of distinctions, our experience of time as metacognized ‘fuses’ into the paradoxical temporal identity in difference we term the now.

Reflexivity – Insufficient temporal information regarding the time of information processing is integrated into conscious awareness. Metacognition, therefore, can only make granular second-order sequential distinctions, leading to the faulty metacognitive assumption of mental reflexivity, or contemporaneous self-relatedness (either intentional as in the analytic tradition, or nonintentional as well, as posited in the continental tradition), the sense that cognition can be cognized as it cognizes, rather than always only post facto. Thus, once again, the mysterious (even miraculous) appearance of the mental, since mechanically, all the processes involved in the generation of consciousness are irreflexive. Resources engaged in tracking cannot themselves be tracked. In nature the loop can be tightened, but never cinched the way it appears to be in experience.

Circle and Spiral - Untitled

Once again, experience as metacognized fuses, consigning vast amounts of information to the superunknown, in this case, the dimension of irreflexivity. The mental is not only flattened into a mere informatic shadow, it becomes bizarrely self-continuous as well.

Personal Identity – Insufficient information regarding the sequential or irreflexive processing of information integrated into conscious awareness, as per above. Metacognition attributes psychological continuity, even ontological simplicity, to ‘us’ simply because it neglects the information required to cognize myriad, and in many cases profound, discontinuities. The same way sleep elides travel, making it seem like you simply ‘awaken someplace else,’ so too does metacognitive neglect occlude any possible consciousness of moment-to-moment discontinuity.

Conscious Unity – Insufficient information regarding the disparate neural complexities responsible for consciousness. Metacognition, therefore, cannot make the relevant distinctions, and so assumes unity. Once again, the mundane assertion of identity in the absence of distinctions is the culprit. So the character below strikes us as continuous,


even though it is actually composite,

X pixilated

simply for want of discriminations, or additional information.

Meaning Holism – Insufficient information regarding the disparate neural complexities responsible for conscious meaning. Metacognition, therefore, cannot make the high-dimensional distinctions required to track external relations, and so mistakes the mechanical systematicity of the pertinent neural structures and functions (such as the neural interconnectivity requisite for ‘winner take all’ systems) for a lower dimensional ‘internal relationality.’ ‘Meaning,’ therefore, appears to be differential in some elusive formal, as opposed to merely mechanical, sense.

Volition – Insufficient information regarding neural/environmental production and attenuation of behaviour integrated into conscious awareness. Unable to track the neurofunctional provenance of behaviour, metacognition posits ‘choice,’ the determination of behaviour ex nihilo.

Circle and Spiral - Volition

Once again, the lack of access to a given dimension of information forces metacognition to rely on an ad hoc heuristic, ‘choice,’ which only becomes a problem when theoretical metacognition, blind to its heuristic occlusion of dimensionality, feeds it to cognitive systems primarily adapted to high-dimensional environmental information.

Purposiveness – Insufficient information regarding neural/environmental production and attenuation of behaviour integrated into conscious awareness. Cognition thus resorts to noncausal heuristics keyed to solving behaviours rather than those keyed to solving environmental regularities—or ‘mindreading.’ Blind to the heuristic nature of these systems, theoretical metacognition attributes efficacy to predicted outcomes. Constraint is intuited in terms of the predicted effect of a given behaviour as opposed to its causal matrix. What comes after appears to determine what comes before, or ‘cranes,’ to borrow Dennett’s metaphor, become ‘skyhooks.’ Situationally adapted behaviours become ‘goal-directed actions.’

Value – Insufficient information regarding neural/environmental production and attenuation of behaviour integrated into conscious awareness. Blind to the behavioural feedback dynamics that effect avoidance or engagement, metacognition resorts to heuristic attributions of ‘value’ to effect further avoidance or engagement (either socially or individually). Blind to the radically heuristic nature of these attributions, theoretical metacognition attributes environmental reality to these attributions.

Normativity – Insufficient information regarding neural/environmental production and attenuation of behaviour integrated into conscious awareness. Cognition thus resorts to noncausal heuristics geared to solving behaviours rather than those geared to solving environmental regularities. Blind to these heuristic systems, deliberative metacognition attributes efficacy or constraint to predicted outcomes. Constraint is intuited in terms of the predicted effect of a given behaviour as opposed to its causal matrix. What comes after appears to determine what comes before. Situationally adapted behaviours become ‘goal-directed actions.’ Blind to the dynamics of those behavioural patterns producing environmental effects that effect their extinction or reproduction (that generate attractors), metacognition resorts to drastically heuristic attributions of ‘rightness’ and ‘wrongness,’ further effecting the extinction or reproduction of behavioural patterns (either socially or individually). Blind to the heuristic nature of these attributions, theoretical metacognition attributes environmental reality to them. Behavioural patterns become ‘rules,’ apparent noncausal constraints.

Aboutness (or Intentionality Proper) – Insufficient information regarding processing of environmental information integrated into conscious awareness. Even though we are mechanically embedded as a component of our environments, outside of certain brute interactions, information regarding this systematic causal interrelation is unavailable for cognition. Forced to cognize/communicate this relation absent this causal information, metacognition resorts to ‘aboutness’. Blind to the radically heuristic nature of aboutness, theoretical metacognition attributes environmental reality to the relation, even though it obviously neglects the convoluted circuit of causal feedback that actually characterizes the neural-environmental relation.

The easiest way to visualize this dynamic is to depict it as,

Spiral - Untitled

where the screw diagrams the dimensional complexity of the natural within an apparent frame that collapses many of those dimensions—your present ‘first person’ experience of seeing the figure above. This allows us to complicate the diagram thus,

Spiral in Circle

bearing in mind that the explicit limit of the ‘O’ diagramming your first-person experiential frame is actually implicit or asymptotic, which is to say, occluded from conscious experience as it was in the initial diagram. Since the actual relation between ‘you’ (or your ‘thought,’ or your ‘utterance,’ or your ‘belief,’ and ‘etc.’) and what is cognized/perceived—experienced—outruns experience, you find yourself stranded with the bald fact of a relation, an ineluctable coincidence of you and your object, or ‘aboutness,’

Spiral as intentional object

where ‘you’ simply are related to an independent object world. The collapse of the causal dimension of your environmental relatedness into the superunknown requires a variety of ‘heuristic fixes’ to adequately metacognize. This then provides the basis for the typically mysterious metacognitive intuitions that inform intentional concepts such as representation, reference, content, truth, and the like.

Representation – Insufficient information regarding processing of environmental information integrated into conscious awareness. Even though we are mechanically embedded as a component of our environments, outside of certain brute interactions, information regarding this systematic causal interrelation is unavailable for cognition. Forced to cognize/communicate this relation absent this causal information, metacognition resorts to ‘aboutness’. Blind to the radically heuristic nature of aboutness, theoretical metacognition attributes environmental reality to the relation, even though it obviously neglects the convoluted circuit of causal feedback that actually characterizes the neural-environmental relation. Subsequent theoretical analysis of cognition, therefore, attributes aboutness to the various components apparently identified, producing the metacognitive illusion of representation.

Our implicit conscious experience of some natural phenomenon,

Spiral - Untitled

becomes explicit,

Spiral - Representation

replacing the simple unmediated (or ‘transparent’) intentionality intuited in the former with a more complex mediated intentionality that is more easily shoehorned into our natural understanding of cognition, given that the latter deals in complex mechanistic mediations of information.

Truth – Insufficient information regarding processing of environmental information integrated into conscious awareness. Even though we are mechanically embedded as a component of our environments, outside of certain brute interactions, information regarding this systematic causal interrelation is unavailable for cognition. Forced to cognize/communicate this relation absent this causal information, metacognition resorts to ‘aboutness’. Since the mechanical effectiveness of any specific conscious experience is a product of the very system occluded from metacognition, it is intuited as given in the absence of exceptions—which is to say, as ‘true.’ Truth is the radically heuristic way the brain metacognizes the effectiveness of its cognitive functions. Insofar as possible exceptions remain superunknown, the effectiveness of any relation metacognized as ‘true’ will remain apparently exceptionless, what obtains no matter how we find ourselves environmentally embedded—as a ‘view from nowhere.’ Thus your ongoing first-person experience of,

Spiral - Untitled

will be implicitly assumed true, period, or exceptionless (sufficient for effective problem solving in all ecologies), barring any quirk of information availability (‘perspective’) that flags potential problem-solving limitations, such as a diagnosis of psychosis, awakening from a nap, the use of a microscope, etc. This allows us to conceive the natural basis for the antithesis between truth and context: as a heuristic artifact of neglect, truth literally requires the occlusion of information pertaining to cognitive function to be metacognitively intuited.

So in terms of our visual analogy, truth can be seen as the cognitive aspect of ‘O,’ how the screw of nature appears with most of its dimensions collapsed, as apparently ‘timeless and immutable,’

Circle

for simple want of information pertaining to its concrete contingencies. As more and more of the screw’s dimensions are revealed, however, the more temporal and mutable—contingent—it becomes. Truth evaporates… or so we intuit.


Lacuna Obligata

Given the facility with which Blind Brain Theory allows these concepts to be naturally reconceptualised, my hunch is that many others may be likewise demystified. Aprioricity, for instance, clearly turns on some kind of metacognitive ‘priority neglect,’ whereas abstraction clearly involves some kind of ‘grain neglect.’ It’s important to note that these diagnoses do not impeach the implicit effectiveness of many of these concepts so much as what theoretical metacognition, or ‘philosophical reflection,’ has generally made of them. It is precisely the neglect of information that allows our naive employment of these heuristics to be effective within the limited sphere of those problem ecologies they are adapted to solve. This is actually what I think the later Wittgenstein was after in his attempts to argue the matching of conceptual grammars with language-games: he simply lacked the conceptual resources to see that normativity and aboutness were of a piece. It is only when philosophers, as reliant upon deliberative theoretical metacognition as they are, mistake what are parochial problem solvers for universal ones, that we find ourselves in the intractable morass that is traditional philosophy.

To understand the difference between the natural and the first-person we need a positive way to characterize that difference. We have to find a way to let that difference make a difference. Neglect is that way. The trick lies in conceiving the way the neglect of various dimensions of information dupes theoretical metacognition into intuiting the various structural peculiarities traditionally ascribed to the first-person. So once again, where running the clock of astronomical discovery backward merely subtracts information from a fixed dimensional frame,

Photo to petroglyph

explaining the first-person requires the subtraction of dimensions as well,

origami turtle

that we engage in a kind of ‘conceptual origami,’ conceive the first-person, in spite of its intuitive immediacy, as what the brain looks like when whole dimensions of information are folded away.

And this, I realize, is not easy. Nevertheless, tracking environments requires resources which themselves cannot be tracked, thus occluding cognitive neurofunctionality from cognition—imposing ‘medial neglect.’ The brain thus becomes superunknown relative to itself. Its own astronomical complexity is the barricade that strands metacognitive intuitions with intentionality as traditionally conceived. Anyone disagreeing with this needs to explain how it is human metacognition overcomes this boggling complexity. Otherwise, all the provocative questions raised here remain: Is it simply a coincidence that intentional concepts exhibit such similar patterns of information privation? For instance, is it a coincidence that the curious causal bottomlessness that haunts normativity—the notion of ‘rules’ and ‘ends’ somehow ‘constraining’ via some kind of obscure relation to causal mechanism—also haunts the ‘aiming’ that informs conceptions of representation? Or volition? Or truth?

The Blind Brain Theory says, Nope. If the lack of information, the ‘superunknown,’ is what limits our ability to cognize nature, then it makes sense to assume that it also limits our ability to cognize ourselves. If the lack of information is what prevents us from seeing our way past traditional conceits regarding the world, it makes sense to think it also prevents us from seeing our way past cherished traditional conceits regarding ourselves. If information privation plays any role in ignorance or misconception at all, we should assume that the grandiose edifice of traditional human self-understanding is about to founder in the ongoing informatic Flood…

That what we have hitherto called ‘human’ has been raised upon shades of cognition obscura.

Man the Meaning-Faker

by rsbakker

Ben has posted an excellent piece on Brassier’s Nihil Unbound and his position on nihilism more generally over at RWUG. “Nihilism,” Ben writes in a pithy summary of Ray’s view, “is the philosophy needed for living with intellectual integrity as one of the living dead.”

I remember when I first read Nihil Unbound what hooked me was Ray’s refusal to buy into any of the traditional Continental prophylactic moves, his insistence that truth trumps meaning no matter how cherished that meaning might be. The wont of traditional Continental philosophy has been to adopt various preemptive theoretical attitudes vis-à-vis science, to insist that science presupposes some kind of x, whether it be an existential interpretation of the Lebenswelt, where experience is asserted as the ontological condition of possibility of science (understood as a mere ‘ontic’ discourse), or some normative interpretation of the institutional context of science, where thought is asserted as the practical condition of possibility of science (understood as one language game among others). I have espoused both of these positions in my day, and no longer find either even remotely convincing, simply because I finally realized that posing a mysterious, never-to-be-arbitrated speculative diagnosis of What Science Is as the grounds for appraising the status of scientific theoretical claims is to simply get things backward in a suspiciously self-serving way. It struck me as using Ted Bundy’s testimony to convict Mother Teresa, and to sentence her to never wave her empirical yardsticks anywhere near my oh-so grandiose and yet fantastically fragile speculative claims. Obviously so.

Nihil Unbound excited me so much because I had thought that Ray had actually managed to move past these prophylactic gestures. The biggest shortcoming of the book, I had thought, was simply the problem faced by all projects that attempt to move past meaning, all attempts at post-intentional philosophy: namely, the inability to account for meaning. It’s one thing to say meaning is bunk, but short of explaining why we find it so compelling, the best one can do is hang upon the perennial incompatibilities between science and meaning, knowledge and experience. Meaning either has to be explained or explained away before anyone can attempt to move on in any remotely convincing fashion. Otherwise, all the old and powerful arguments securing the apparent ineliminability of the semantic remain unanswered.

I was so excited by Nihil Unbound, you could say, because I thought I had the very thing it was missing: a parsimonious and comprehensive way to explain meaning away–the Blind Brain Theory. As it turns out, Ray himself came to the same conclusion regarding the book’s main shortcoming; the problem (from my perspective at least) was that he felt the need to turn backward to address it: to seize on a positive account of meaning deflationary enough to seem consistent with disenchantment, but ultimately recuperative all the same–inferentialism. As he explains in his After Nature interview:

[Nihil Unbound] contends that nature is not the repository of purpose and that consciousness is not the fulcrum of thought. The cogency of these claims presupposes an account of thought and meaning that is neither Aristotelian—everything has meaning because everything exists for a reason—nor phenomenological—consciousness is the basis of thought and the ultimate source of meaning. The absence of any such account is the book’s principal weakness (it has many others, but this is perhaps the most serious). It wasn’t until after its completion that I realized Sellars’ account of thought and meaning offered precisely what I needed. To think is to connect and disconnect concepts according to proprieties of inference. Meanings are rule-governed functions supervening on the pattern-conforming behaviour of language-using animals. This distinction between semantic rules and physical regularities is dialectical, not metaphysical.

And so, like a scorned theoretical lover, I find myself writing the odd letter–or post–bent on showing him why his recuperative inferentialism simply will not work.

The irony is that this pretty accurately summarizes my long-standing debate with Ben as well! They both take themselves to be staring the Beast of abject meaninglessness in the eye, but they succumb to their own noocentric intuitions in the end–or so my desolate view has it. Both raise conceptual barricades against the terrifying prospect that they themselves are merely more nature, not nature + x, that the boundary between them and the bottomless universe they both acknowledge is meaningless is simply technical.

What I would like to show is how easily those conceptual barricades can be torn down.


“We should avoid scientism and nihilism, on the one hand,” Ben writes, “and delusion and irresponsible faith, on the other.” He wants our dilemma to be a false one, pines for some third way that is not scientific, but remains rational in some respect. Everything, however, hangs upon this ‘some respect.’ He thinks reason understood as instrument of truth is unworkable, because such reason collapses into scientific reason, which inevitably leads to nihilism. He thinks reason as instrument of interest is also unworkable, because he seems to recognize, as did Adorno, that instrumental rationality is incapable of providing meaning. It can only deliver the goods, never the Good–the how and not the why. You could say the whole of contemporary consumer society attests to the paradox of a rationality that can only serve appetite. Reason, as Ben likes to say, is ‘accursed.’

In this sense, he’s actually working through the classic Continental problematic in the classic Continental way: by positing a variant discursive mode while problematizing the ‘presuppositions’ of science. He’s at pains, for instance, to continually contextualize science, to emphasize the fact that it’s just one set of human practices out of many, then to assert that, as such, it’s adapted to its own institutional ecology. Thus, having characterized What Science Is to this minimal extent, he can then point to all the other ecologies out there, and it seems to follow that science simply isn’t applicable to them. With this picture in place, he can then lay the charge of ‘scientism’ any time anyone applies scientific cognitive standards outside what he deems the proper discursive ecology of science.

I can remember when I thought all this was just a no-brainer! As clear as yesterday…

Where he differs from most historical Continental approaches to this problem is that he maintains, as most Analytically trained thinkers do, a wary respect for the Cognitive Difference, the fact that science isn’t just another discursive institution, it is the objective discursive institution. This is what forces him to the brink of nihilism with Brassier: the fact that he must concede all of the natural world to science. This is what he means by ‘delusion and irresponsible faith’ above: those forms of theoretical claim-making that refuse to concede this ecology–one might say the ‘ecology ecology’–to science.

Now, back in the old days, it was easy for Continental thinkers to believe science to be ecologically constrained, to be necessarily limited to its domain, and to thus secure the cognitive legitimacy of their discourses against its boggling power. The days of that profound theoretical sleep, I fear, are over. As I said above, the hard fact is that science was really only ever technically constrained, that the complexities of the human–particularly those belonging to the brain–allowed the discourses of the human to carry on with business as usual. As cognitive science develops, however, the technical obstructions fall–it really is only a question of how far this process will go. I personally think ‘all the way’ is far and away the most probable answer.

Both Ben and Ray, however, want to draw two different types of lines in the sand. For Ray, the line lies in the Sellarsian notion of ‘parity’ between the conceptual level of giving and asking for reasons and the ontological level of scientific explanation. Insofar as he recognizes the Cognitive Difference, he concedes the ontological priority of science. The possibility of parity lies in

the recognition that the manifest image furnishes us with the fundamental framework in terms of which we understand ourselves as ‘concept mongers,’ creatures continually engaged in giving and asking for reasons. But we are able to do things with concepts precisely insofar as concepts are able to do things to us. It is this capacity to be gripped by concepts that makes us answerable to conceptual norms. And it is this susceptibility to norms that makes us subjects. (“The View from Nowhere”)

The ontological priority of science over meaning flips into conceptual parity simply because meaning provides the condition of science understood as a self-correcting practice. Short of meaning, Ray contends, we can neither motivate nor make sense of our scientific practice. What prevents this account from lapsing into the traditional Continental mould is the refusal to give the conceptual superordinance of meaning an ontological interpretation. Meaning, on Ray’s Sellarsian account, is made. Science monopolizes cognition of the natural, and the natural exhausts ontology–the devil is given its due. Meaning arises out of practical necessity as an invented how that is conceptually incompatible with the natural what, but indispensable for the cognition of that what all the same.

Essentially, this is the great trick of pragmatic naturalism. And like many such tricks it unravels quickly if you simply ask the right questions. Since the vast majority of scientists don’t know what inferentialism is, we have to assume this inventing is implicit, that we play ‘the game of giving and asking for reasons’ without knowing. But why don’t we know? And if we don’t know, who’s to say that we’re ‘playing’ any sort of ‘game’ at all, let alone the one posited by Sellars and refined and utilized by the likes of Ray? Perhaps we’re doing something radically different that only resembles a ‘game’ for want of any substantive information. This has certainly been the case with the vast majority of our nonscientific theoretical claims.

This certainly provides ample ground to be skeptical of inferentialism. But how are we to know one way or another for sure?

This is where the wave flops up and washes Ray’s particular line in the sand away. The only way to know is to gather information and test our various interpretations–to do the science. Given that Ray has already conceded the incompatibility between the conceptual regimes of science and meaning, the prospects don’t look all that good. Science has a pesky tendency to revolutionize.

For Ben, on the other hand, the line in the sand lies more in the possibility of subjective capacity than in the necessity of normative constraint. Indeed, his primary issue with Nihil Unbound lies with how Ray, as he sees it, systematically denigrates this capacity. As he writes:

I agree with Brassier that rationality by itself leads to nihilism, disenchantment, angst, and so forth. Reason is accursed. But I don’t think the two perspectives are incommensurable so that the choice between them must be arbitrary. On the contrary, the perspectives are themselves naturally interrelated. We can speak of objective and subjective truth. The former is the trauma of learning that nature is fundamentally physical, that in itself, prior to our transformation of it, the universe is a harsh, mostly barren wasteland that’s doomed to destruction. By contrast, subjective truth is the feeling of rightness that results when instead of keeling over in horror after the world’s physicality slaps us in the face, we creatively undo that loathsome undeadness and surround ourselves with a more palatable version of the world that’s full of concrete vessels of purpose and ideality. So subjective truth is a salve for the trauma of objective truth, even as objective truth is a check on the vices of irrationality brought on by a wholesale escape into our fantasy worlds. The fact is we must live with both inclinations and we should avoid their opposite pitfalls.

Ben also thinks that science is inescapably wedded to meaning. Like Ray, he believes that its origins in human practice are important, but more as proof against lapsing into naive scientism than as the ‘fundamental (but fictional) frame’ that Ray makes of it. He realizes the difficulty of preempting the cognitive authority of science on speculative grounds in a way that Ray does not. For Ben, the key relation between science and meaning isn’t preemptive and authoritarian, it is consequential and creative. The important fiction, for him, lies in our response to the scientific monopolization of the natural–the Undead God, as he puts it.

Since the creativity simply follows from the straits imposed by the scientific monopolization of the natural, it’s the consequence that becomes the most crucial. Whimsy is creative, as is madness. Bigotry can be creative as well. Ben, in a sense, reverses the authority gradient posited by Ray, arguing that science needs to be the constraint on meaning, what prevents human meaning creation from lapsing into ‘delusion and irresponsible faith.’ Meaning, in other words, requires science to be rational.

But again, we bump into a simple question that seems to unravel the whole. The problem of meaning is primarily the problem of the incompatibility of meaning and science. Given this incompatibility, what kind of constraint is science supposed to provide? How can it constrain something it simply cannot cognize as real in any manner we find intuitively recognizable? The tempting answer, the one that certainly seems to accord with the way science is actually used in debates regarding meaning, is that such constraints are opportunistic at best.

For Ray, embracing meaning in this sense amounts to embracing irrationalism, and the corresponding inability to sort outright delusion from ‘meaning proper.’ But Ben can bite this bullet and defer, I think, acknowledge that it’s simply part and parcel of the collective debate on which meanings our society should aspire to. The fact that this debate is open-ended in no way impugns the subjective truth of any given meaning, the fact that, as unreal as it may be for the universe, it remains ‘true for me.’ He can, in other words, continue to claim that “[i]f nihilism is the view that the universe is absolutely meaningless, nihilism is false because there is plenty of meaning on our planet.”

Can’t he? Not at all, really.

The first thing to note is that simply positing subjective truth as a solution to the problem of meaning is question-begging. The question of whether there is meaning in the universe is also the question of whether there is any such thing as ‘subjective truth.’ The only real warrant he could have for resorting to it is the notion that it is conceptually primitive, somehow, that it poses an inescapable boundary condition of intelligible thought.

But if it seems this way–and I appreciate that it does for a great number of thinkers–then it is for the simple want of alternatives. On the Blind Brain Theory, for instance, meaning as both Ray and Ben theorize it is a metacognitive illusion through and through–which means that Ben’s subjective truth is also the product of our metacognitive incapacity. The argument for why this is the case is quite direct, no matter how counter-intuitive the conclusions may seem. Science tells us that human cognition is heuristic all the way down. This means that the subject-object dyad is also heuristic, which is to say, a way to make sense in the absence of certain kinds of information. As such, it necessarily relies on the information structure of a given problem ecology to effectively resolve problems. So the question immediately becomes: is the subject-object dyad applicable to the problem of meaning?

Well, as the problem of circularity I adduced above might suggest, we have good reason to think not. Once you appreciate the heuristic peculiarities of meaning concepts, the explanation for the prevailing incompatibility between science and meaning that both Ray and Ben acknowledge becomes quite clear, in naturalistic outline at least. So where science conceives the human as organic subsystems within larger environmental systems, the subject-object dyad conceives the human as a subject set over and against a world of objects. It occludes, and therefore problem-solves, without the benefit of the very mechanical systematicity that science has revealed. Small wonder it suffers compatibility issues! The subject-object dyad elides the mechanistic facts of perception (the role played by sensory media), provides us with gross mechanical information regarding the ‘object,’ and yields next to no mechanical information about its own operations–we have to rely on metacognition for that! Both thoroughly occlude what we are in fact–which I fear is far more akin to the Great Red Spot on Jupiter than any notional ‘subject.’ If science is to exercise any substantive constraint, both subject and object have to be seen as cross-sections, lower dimensional projections, of something far more complicated than any Lebenswelt. Applying them as conceptual boundary conditions the way Ben does is not so different from using naive physics to argue quantum field theory.

The thing is, once you realize that the subject-object paradigm is heuristic, then it simply isn’t a matter of subjectivity versus objectivity, so much as systems which are neither. There is no ‘objective subjective,’ for instance: the binary simplicity of the formulation should tip us off to the fact that something’s fishy. ‘Subjective truth’ is a heuristic misapplied twice. Now this is an admittedly difficult way to think: the problem-ecologies of our metacognitive heuristics are not intuitively available to us, let alone the fact that we swap between numerous varieties of heuristic tools whenever we tackle questions such as Ben’s and Ray’s. Only neglect makes our dim inklings seem ‘obvious.’ Only neglect makes ‘subjective truth’ seem universal and self-evident. Only neglect lends normative contexts like ‘the game of giving and asking for reasons’ their veneer of preemptive necessity.

But as I keep saying: all of this is about to be revolutionized. The apparent universal applicability of these ways of thinking will be relegated to the scholastic dustbin soon enough.

The thing to realize about my argument is that it doesn’t need to be scientifically vindicated to have a powerful impact on Ben’s position. The subject-object paradigm is either heuristic, or… If it is heuristic, it has an effective ecology. The onus accordingly falls on him to argue the applicability of his boundary conditions. Given the abject inability of philosophy to resolve any of its issues, something has to be holding things up. Could it be that traditional philosophy of meaning is planked with serial misapplications?

Well, it’s very possible! That’s the problem, the fact that this is so very possible. This is where reason bottoms out, consumes its own tail, and is remade as something alien to the metacognitive intuitions both Ray and Ben are seeking to preserve, even if in attenuated, deflationary forms.

And really, why should we think these particular prescientific inklings would end any other way? That Man the Meaning-Maker, the human we concocted in the absence of any substantial scientific information about ourselves, would be the one blinkered posit to be vindicated?


What Makes Any Biomechanism a Nihilistic Biomechanism?

by rsbakker

Peter at Conscious Entities has another fascinating post on the issue of machines and morality, this time in response to a paper by Joel Parthemore and Blay Whitby called “What Makes Any Agent a Moral Agent?” Since BLOG-PHARAU was hungry, I figured I would post a brief reworked version of my take here. I fear it does an end run around their argument, but there’s nothing much to be done when you disagree with an argument’s basic assumptions.

My short answer to the question in their title is simply, ‘Whenever treating them as such reliably produces effective outcomes.’ Why? Because there is no fact of the matter when it comes to moral agency. It is a heuristic how, not an ontological what.

I find it interesting that they begin their abstract thus: “In this paper, we take moral agency to be that context in which a particular agent can, appropriately, be held responsible for her actions and their consequences.” Since this question is the question of when a system can responsibly be held responsible, we need to pause and ask the question of the former ‘responsibility.’ When is it morally responsible to hold machines morally responsible? It’s worth noting that we do this very thing in ways small or large whenever we curse or punish machinery that fails us. One can assume that this is simply anthropomorphism for the most part, an example of the irresponsible holding of machines responsible. My wife, for instance, thinks I treat anything mechanical I’m attempting to fix abusively. So approached from this angle, Parthemore and Whitby’s argument can be looked at as laying out the conditions of responsible anthropomorphization.

So what are these conditions? A pragmatic naturalist like Dennett would simply answer, ‘Only so far as it serves our interests,’ the point being that there are no fixed necessary conditions demarcating the applicability of moral anthropomorphization. There’s nothing irresponsible about verbally upbraiding your iPhone, so long as it serves some need. Viewed this way, Parthemore and Whitby are clearly chasing something chimerical simply because the answer will always be, ‘Well, it depends…’ The context in which a machine can be responsibly held responsible will simply depend on the suite of pragmatic interests we bring to any given machine at any given time. If holding them responsible works to serve our interests, then it’s a go. If not, then it’s a no-go.

In my own terms, this is simply because our moral intuitions are heuristic kluges geared to the solution of domain-specific problems regardless of the ‘facts on the ground.’ There are no fixed ontic finishing lines that can be laid out beforehand because the question of whether the application of any given moral heuristic works is always empirical. Only trial and error will provide the kinds of metaheuristics we need to govern the application of moral heuristics in a generally effective manner.

Otherwise, I can’t help but see all this machine ethics stuff as a way to shadow-box around the real problem, which is the question of when it is appropriate to treat humans like machines, as opposed to moral agents. More and more the corporate answer seems to be, ‘When it serves our interests…’

Then there’s the further question of whether it is even possible to treat people like moral agents once the mechanisms of morality are finally laid bare – because at that point, it seems pretty clear you’re treating people as moral agents for mechanistic ‘reasons.’

This is my bigger argument, anyway: That many things, such as morality, require the absence of certain kinds of information to function ‘responsibly.’

Rethinking Jesse Butler’s Rethinking Introspection

by rsbakker

Noocentric Nostalgia

Everyone but everyone claims to be a physicalist, nowadays, which means that everyone but everyone accepts that it’s all mechanisms: that what we call ‘knowledge,’ for instance, boils down to some kind of dynamic, mechanical interrelationship with the environment. Given this, it becomes hard to fathom why knowledge is anything other than a scientific problem–why, in other words, it remains philosophical. If what we call knowledge is nothing more than another natural phenomenon, then we need only wait for science to isolate and explain the mechanisms behind it. This, after all, is what science does.

The problem, however, is twofold: 1) most everyone wants the mechanical details of this picture to somehow vindicate the received view, which is to say, the intentional picture painted by prescientific traditional theoretical speculation and metacognitive intuition; and 2) this intentional picture seems all but impossible to understand in mechanical terms.

The easiest way to solve this problem is to simply abandon the received view as our primary desideratum–to relinquish the siren song of Vindication. And good riddance! Certainly we expect science to confirm what we experience, but why should we expect it to confirm what we intuitively believe, especially knowing, as we do, the informatic penury that necessarily underwrites all our received views? Consider the way Plato likened memory to an aviary, or how Aristotle likened the cosmos to a sphere: given the information and problem-solving resources available such theoretical characterizations make a good deal of sense. But as the relevant sciences accumulated ever more information, as the picture revealed became more and more dimensional, the more obviously parochial these theoretical likenings became. And how could it be otherwise? It simply makes no sense, from a naturalist’s perspective at least, to presume that the sciences will vindicate any set of traditional beliefs.

And yet, despite all the naturalist avowals you encounter in cognitive science and philosophy, one finds the stubborn insistence on Vindication. This, in a nutshell, summarizes my critique of Jesse Butler’s Rethinking Introspection: A Pluralist Approach to the First-Person Perspective. Despite all the received assumptions and claims Butler relinquishes, his project ultimately remains, I think, an exercise in Vindication.

The genius of science, you might say, lies in its long-term institutional indifference to received views. It finds what it finds, and as the information pertaining to a particular domain accumulates, the problems with the corresponding received view as a rule become more and more glaring. The process, however, is slow. This generates opportunities for what might be called ‘theoretical accommodation,’ recharacterizations that concede as little as possible to the science while salvaging as much of the received view as possible. Butler casts Rethinking Introspection in precisely this mould, dispensing with those elements of the received view that are simply no longer tenable given a wide spectrum of empirical findings relevant to introspection, while resisting, at every turn, the eliminative abyss suggested by the overall trend of this research.

Now anywhere else in the natural world, theoretical accommodation of this sort would be obviously suspicious. But not so when it comes to the ‘mind’ in general or ‘introspection’ more narrowly. Why this is the case is something I have considered in detail here in the past. But in lieu of rehearsing this account, I would suggest that at least three unavoidable questions confront any contemporary, philosophical account of introspection:

1) What information is accessed in introspection?

2) What cognitive resources are deployed in introspection?

3) Are (1) and (2) adequate to the kinds of questions we are asking of introspection?

Simply asking these questions, I think, turns the bulk of traditional philosophy on its head. Why? Because throughout history philosophers have implicitly assumed both the sufficiency of the information accessed and the adequacy of the cognitive resources deployed. More and more the sciences of the brain are suggesting they were profoundly mistaken on both counts.

Blind Brain Theory (BBT) constitutes an attempt to answer these two questions using a number of sober empirical assumptions and contemporary scientific evidence. Both the information accessed and the resources deployed, it contends, fall woefully short of the ‘default sufficiency’ assumed by the tradition. It then takes the further step of showing how numerous, longstanding philosophical impasses can be dissolved once interpreted in terms of metacognitive incapacity. Ultimately, it explains away the famous conundrums presented by ‘phenomenality,’ ‘intentionality,’ and the ‘first-person,’ by characterizing them as artifacts of informatic neglect.

When judged in terms of what is actually the case–what our brains happen to be doing–what we call ‘first-person experience’ consists of various cognitive incapacities turning on various resolution and dimensional deficits. So the tradition characterized memory as a single, veridical faculty simply because human metacognition, left to its own devices, lacked the information and cognitive resources to characterize it any other way. Before the cognitive revolution, we were stranded with a cartoon conception of memory, a low-dimensional glimpse of what our brains are actually doing. BBT simply generalizes this picture. The so-called ‘first-person,’ it argues, is a concatenation of such cartoons, a series of cognitive illusions and simplifications forced on metacognition by profound constraints on the brain’s ability to solve itself the way it solves its environments. Since this cartoon complex is all the brain has, and since it is anchored in (as yet, largely unknown) actual functions, we have no other recourse (short of the sciences of the brain) but to make do as well as we can, understanding that, like memory, all our traditional conceptualizations will be shown to be low-dimensional parochialisms.

Throughout the course of Rethinking Introspection, Butler wanders to the tantalizing verge of this insight only to retreat into the safety of various received philosophical views time and again. He devotes the first chapters of the book to the demolition of the traditional ‘inner eye’ conception of introspection. Butler realizes that introspection is fractionate, that it is a complex consisting of a number of different cognitive operations, not all of them veridical. The subsequent chapters, accordingly, lay out a bestiary of introspective kinds, speculative considerations of what might be called the ‘introspective cognitive toolbox,’ the wide variety of ways we seem to gain metacognitive purchase on our experiences, thoughts, traits, and activities. No matter what one thinks about any given interpretation he gives of any given component, Butler makes it very difficult to suppose that introspection can be thought of as anything remotely resembling the singular, veridical faculty assumed by the tradition.

And indeed, this commitment to pluralism is where the value of the book lies–what makes it worthwhile reading, I think. Nevertheless, I want to argue that Butler’s account is nowhere near as radical as it needs to be to ‘rethink’ introspection in a forward-looking manner. When interpreted through the lens of BBT, it becomes clear that Rethinking Introspection is actually a recuperative exercise, an attempt to rescue the introspection we want from the introspection the sciences of the brain seem to be revealing…


The Inner Wall-Eye

What will science make of introspection?

BBT provides one possibility. Butler’s account offers another. But these theories are just that, theories. Not surprisingly, I think BBT holds far and away more promise, but no matter how compelling the arguments I adduce may seem, it simply remains another speculative bet awaiting empirical arbitration. But there is one thing we can claim with some certainty: All things being equal, we can assume that science will complicate and contradict our traditional and/or intuitive preconceptions. It complicates because it provides more and more information–which is to say, systematic differences making systematic differences. It contradicts because the complication of any given phenomenon inevitably reveals information crucial to understanding what is actually the case. A signature theoretical virtue of BBT, by the way, lies in its ability to explain why this is so. It can explain why we find our traditional and/or intuitive assumptions so convincing, no matter how wildly wrong they may be (via what might be called ignorance-anchored certainty effects), and it can explain why the accumulation of scientific information inevitably ‘disenchants’ these assumptions (via the provision of the very information our traditional and/or intuitive assumptions are adapted to function without). But even if one is inclined to reject these explanations, the basic observation they turn on remains: All our assumptions depend on some combination of the information and the cognitive resources available. Thus the importance of the three questions above.

Everything in our metacognitive canon, all the conceptual verities that philosophers have relied upon for millennia, now stands perched on an informatic continuum–or abyss as the case might be. The primary and most pervasive problem afflicting Rethinking Introspection lies in its failure to systematically consider the implications of this platitudinal insight.

Nowhere is this failure more evident than in Butler’s critique of ‘perceptual accounts’ of introspection, the famous, traditional understanding of introspection as some kind of ‘inner eye.’ In physiological terms, he argues, no one has ever discovered any organ of inner sense. In functional terms, he argues that introspection, unlike perception more generally, operates recursively. In phenomenological terms, he primarily argues that the mind offers no objects to be perceived.  And lastly, in evolutionary terms, he argues that the development of some kind of inner eye simply makes no evolutionary sense. Ultimately he concludes that the inner eye posited by tradition is simply a cognitive convenience, a useful but problematic metaphoric extension of our environmentally oriented understanding.

Now, even though I largely agree with his conclusion, I fail to see how any one of these arguments is supposed to work, especially given the way he ultimately pushes his account. As we shall see, not only does his own account lack any empirically confirmed physiological basis, it’s actually difficult to understand how introspection as he conceives it could be accomplished by any mechanism whatsoever. Moreover, he seems to forget that the whole point of positing ‘scanning mechanisms’ and the like is to streamline the scientific process, to give those who do the actual research some idea of what to look for. In this sense, he’s doing little more than accusing speculative accounts (like his own) of speculation.

A similar problem haunts his functional disanalogy argument, the much ado he makes over the fact that introspection is recursive whereas environmental perception is not. One cannot ‘see seeing’ or ‘hear hearing’ the way one can scrutinize scrutiny, or think thought. This basically boils down to the argument that introspection cannot be inner perception simply because it is, well, inner. But he never makes clear why this implies anything more than the fact that introspection, like other forms of perception, involves tracking a particular species of natural event–namely, the tracking itself. If introspection were a kind of perception this is the only kind of perception it could be. Moreover, why should recursion disqualify introspection as perception, especially given the imprecision of Butler’s definition of perception, to the point where it seems any secondary mechanism engaged in metacognition might count as ‘perceptual’?

All of this, of course, begs the question of just what does the tracking, if not some kind of mechanism, something I will return to in due course.

His phenomenological critique fares no better. Here, he opts for another argument from disanalogy: Introspection cannot be a kind of ‘inner perception’ simply because its objects in no way resemble the objects of environmental perception. The relevant passage is worth quoting in full:

If the supposed internal perceptual faculty perceives brain states, then these brain states must be occluded or ‘scrambled’ in some way or other, as they do not appear to us in introspection as brain states… Brain states are incredibly complicated electro-chemical events among virtually innumerable neural networks encased inside one’s skull. However, this is definitely not what we perceive, if we perceive anything at all, through introspection. The thought ‘I am thinking,’ for instance, does not appear in experience as a particular neural event, or even as any discernible physical thing at all, as Descartes noted and made (too) much of several centuries ago. So, if we perceive a brain state when we are aware of having such a thought, then that brain state must be filtered through some sort of process that transforms it into something that appears quite different, to such an extent it is unrecognizable as such. Otherwise, philosophers would not have spilled so much ink over the mind/body problem all these years, and Descartes himself could have readily identified mental states with brain states. So, if we have the capacity to perceive brain states, it must be through some mechanism that alters their appearance so radically that they do not seem like brain processes at all.

The idea of a perceptual brain scrambler might fly in a Philip K. Dick novel, but not as a literal account of introspection. (22)

On BBT, of course, this represents a text-book case of the ‘Accomplishment Fallacy,’ the assumption that any identifiable feature of our phenomenology must possess some kind of neural correlate, some mechanism that ‘brings it about.’ So where Butler (following Lyons) posits the necessity of some kind of ‘scrambler,’ BBT simply posits the loss of information. A good deal of our phenomenology, it asserts, is a kind of ‘flicker fusion,’ the product of default identifications made in the absence of the information required to make accurate discriminations. The difference between the mental and the environmental no more requires a special mechanism than the difference between geocentrism and heliocentrism requires some “planetary immobilization device.” In both cases, the relevant cognitive systems simply lack the information required for accuracy. And as I have argued in detail elsewhere, this is precisely what we should expect, given the way complexity and structural complicity confound the brain’s ability to cognize its own functions. Metacognition necessarily neglects far more information than does environmental cognition. It relies on effective shortcuts, heuristics keyed to exploit various information structures in the organism’s environment–which, one must remember, happens to include our own brain. Since the information neglected is neglected in the full sense of the word, no discontinuities appear (save indirectly, at those junctures attended by perennial controversy), and so we assume that no information is missing. Thus the perennial illusion of ‘sufficiency,’ why it is we are so prone to assume introspective infallibility, or ‘self-transparency’ as Carruthers calls it in The Opacity of the Mind.

Here we clearly see Butler’s failure to consider questions (1), (2), and (3). The mechanisms discovered by neuroscience–or ‘brain states’ as he calls them–are discoveries of what is the case (or failing that, the level at which understanding means effective manipulation), the natural basis of our every thought and action. Given that Butler’s stated aim is to elucidate the epistemic statuses of our various introspective modalities, one might assume that the findings of cognitive neuroscience would provide him with the very yardstick he needs to assess the accuracy of any given modality. But such is not the case. Despite all the qualifications he uses to inoculate his use of ‘mind’ and the ‘mental,’ he nevertheless proceeds under the traditional assumption that they indeed exist, that they comprise a functionally distinct ‘level of description’ and so provide him with the very baseline or yardstick he needs to make his assessments. And this saddles him with the dilemma that dogs nearly every page of this book: the continual need to fix and hedge his yardsticks. Time and again you find him acknowledging the controversies pertaining to this or that mentalistic concept (including the concept of ‘concept’ itself), and trying to stake out some kind of neutral or maximally inoffensive interpretative ground. Time and again, in other words, he is forced to philosophically argue his baseline.

By turning his back on the yardsticks afforded by science, it seems he is forced to evaluate the epistemic status of the various introspective modalities he considers using yardsticks largely provided by–you guessed it–introspection. So where BBT parsimoniously theorizes metacognition in terms continuous with cognition more generally, conceiving it simply as the brain’s neuromechanistic attempt to cognize its neuromechanistic complexities in drastically simplified and therefore computationally tractable and domain-specific ways, Butler theorizes metacognition–at root, at least–as something different from neuromechanistic cognition entirely. The objects of metacognition–the yardsticks Butler needs–are nothing other than the ‘primitive’ what-is-it-likeness of phenomenality and the functional abstractions revealed by deliberative theoretical reflection. What allows him to assess the epistemic status of introspection, in other words, clearly seems to be introspection itself. Where else would we access non-neuroscientific information pertaining to experience and the mind?

But it is his evolutionary argument against the perceptual interpretation of introspection that is arguably the most baffling. Arguing that “[t]here appears to be no identifiable functional/adaptive process that serves the purpose of perceiving one’s own mental states,” he suggests that our introspective capacities “are by-products (i.e., spandrels) of other adaptive processes that make them possible” (32). Introspecting mental states serves no adaptive purpose, he claims, because the mental state observed itself somehow monopolizes any adaptive benefit to be had. As he writes:

Knowing that I am a cooperative person, for instance, would not add anything beneficial to my interpersonal interaction. Any benefit would already be conferred by my actual cooperativeness as I engage with others in the world, regardless of whether I accurately represent that feature to myself.

Similar reasoning could apply to other types of mental states, such as beliefs, desires, and pains. (33)

I have to admit, this argument strikes me as so bad as to be mystifying. Certainly, not all cooperation is equal. Certainly some individuals are too cooperative, while others are not cooperative enough. Certainly the ability to introspect cooperativeness would have allowed our ancestors to make refinements that could potentially affect reproductive success. And certainly ‘similar reasoning applies’ to beliefs, desires, or even pains. Status-imperiling beliefs can be modified. Illicit desires can be identified and suppressed before being expressed. And the ability to self-identify different kinds of pain can facilitate recovery. And yet, Butler concludes:

So if there is an identifiable adaptive benefit here concerning knowledge of minds, it is in regard to our understanding of others, and not ourselves. In other words, the evolutionary pressures for perceptual and cognitive adaptations are geared toward an ability to represent and think about things in one’s external environment. (33)

I quote this not simply to underscore the degree to which Butler runs afoul of what Dennett calls the ‘Philosopher’s Syndrome,’ the tendency to mistake a failure of imagination for necessity, but also to highlight the degree to which he mischaracterizes the very phenomena he is attempting to explicate. Consider, just for instance, Robert Trivers’s ‘cognitive load thesis’ regarding self-deception, the claim that “[w]e hide reality from our conscious minds the better to hide it from onlookers” (The Folly of Fools, 9). One need not buy into Trivers’s account (which makes self-transparency a default that evolution selected against, when it is far more likely that the estimable computational challenges pertaining to introspectively cognizing that ‘reality’ simply dovetailed with evolutionary pressure in this case) to see that “understanding others,” as Butler puts it, quite literally means understanding ourselves as well. Not only do our brains belong to our environment, they are, from an evolutionary perspective, the single most important component–one that is every bit as opaque as the brains of others. Solving problems requires information. Our brains (which can be seen as mechanisms that transform environmental risk into onboard complexity) constitute a vast store of empirical information. There are, as a matter of brute principle, an infinite number of problematic circumstances that can only be solved via access to that information. Another way of putting this is to say that there is literally no ‘out there’ distinct from some ‘in here’ when it comes to evolution, only information that may or may not enhance an organism’s fitness.

What Butler simply assumes must be an essential ‘self-other’ boundary, BBT explicates as a contingent result of various constraints on neuromechanical problem-solving. It is the case, as Butler contends, that human cognition is primarily ‘externally directed.’ Likewise, it is the case that metacognition is an evolutionary late-comer. But this has everything to do with neurophysiological constraints on information processing and nothing to do with any enigmatic or essential difference between ‘self’ and ‘other.’ It just so happens that the neural complexity required to incorporate external environmental items into effective sensorimotor loops makes the incorporation of that selfsame neural complexity into further sensorimotor loops computationally prohibitive. Trouble-shooting our external environments requires brains too complicated to likewise trouble-shoot, plain and simple. On the evolutionary scenario suggested by BBT, it was the evolutionary pressure pertaining to mindreading and collective coordination–the complexities of human social fitness–that gave our brains the computational wherewithal to make problem-solving requiring internal environmental information feasible. Once this window of adaptive potential opened up, our metacognitive toolbox became more and more crowded.

As I mentioned above, I actually agree with Butler that ‘perception’ is a metaphoric malapropism, a problematic way to understand the metacognitive toolbox constituting introspection. But where I see perception as an information-access wrinkle in a larger natural account of cognition, he seems to think it can be understood in isolation. In Bayesian models of neural function, for instance, perception is scarcely distinguishable from conception. It’s mediation all the way down. Butler, however, needs perception to be something different, something possessing the low resolution of the modern tradition. Thus the peculiar, opportunistic ambiguity in his usage of the term, the way he trades between the bad ‘perceptual introspection’ and the good ‘introspective capacities’ with nary an explanation of the distinction. One might ask, for instance, why any kind of ‘internal brain scanner’ necessarily counts as ‘perceptual.’ Is it because the function of the scanner is to access information otherwise not available for cognition? If so, then this means the bulk of the metacognitive tools that Butler posits are ‘perceptual’ in nature. The information, after all, has to be accessed somehow, whether referencing our affects or our beliefs.


The Enchanted First-Person

I say the ‘bulk’ of his metacognitive tools because his entire account is in fact raised upon what he considers a fundamental exception to the way the brain typically cognizes information: the phenomenality or what-it-is-likeness of experience. His vague usage of ‘perception,’ as well as his problematic physiological, functional, phenomenological, and evolutionary arguments, are all motivated by his primary desideratum: an understanding of introspection, in its most primitive form, as a kind of ontological cognition, a knowledge possessed in virtue of being a given experience at a given time. As he writes:

I am willing to grant Nagel and Jackson the point that, given our current understanding of physical reality, it is indeed puzzling how conscious experiences can come about through physical processes. However, it is just as likely (if not more likely) that this puzzlement is due to problems in our understanding of physicality as it is that consciousness is a non-physical event. Conscious experiences in themselves, however mysterious they may seem, simply do not preclude the possibility that they are physical events. Perhaps that is just what physical reality is like, when known from the unique perspective of being a particular kind of physical event. (60)

The question, obviously, is one of just what this ‘unique perspective’ is. And indeed, this is the very question Butler takes himself to be answering. BBT, for its part, explains the apparent incompatibilities between the natural and the experiential, and thereby demystifies consciousness-as-it-appears in terms of the kinds of information privation and metacognitive error one might expect given the kind of ‘unique perspective’ the human brain has on itself. The ‘puzzling’ features of experience that render the ‘supernaturalization’ of conscious experience so seductive turn out, on BBT, to be the very features we might expect, given the notorious ‘curse of dimensionality’ and the evolutionary imperative to economize metabolically expensive neurocomputations. Everything is empirical on BBT, given that the scientific cognition of the natural provides the greatest informatic dimensionality. “The unique perspective of being a particular kind of physical event,” in other words, amounts to a limited view on some higher dimensional scientific picture. Thus the ‘blindness’ of the ‘blind brain.’

Butler, however, has something quite different in mind. On his account, “the unique perspective of being a particular kind of physical event” does not lie on the same informatic continuum as the scientific perspective on those physical events. Despite his naturalism, the perspective is not any ‘perspective on’ anything natural in any straightforward sense. He is convinced, rather, that conscious experience constitutes a ‘special’ domain of knowledge, one that is fundamentally different in kind from scientific knowledge, namely, knowledge of what it is like to experience x, or what he calls the ‘existential constitution model of knowledge.’

He defines this special knowledge by distinguishing it from the three primary philosophical approaches to the question of knowledge and phenomenality: the standard propositional account, the ability account, and the acquaintance account. He does a fair job of explaining why each of these approaches fails to deliver on phenomenal knowledge, why our knowledge of what x is like constitutes a distinctive brand of ‘special knowledge.’ But he has an enormous problem: he has no way of explaining this knowledge in the common idiom of the brain, which is to say, in terms of neuromechanistic information processing. The problem, in other words, is that he never actually poses questions (1), (2), and (3). He never asks what, physiologically speaking, something like the existential constitution model of knowledge would require.

Thus his miniature ‘via negativa’: Butler needs to argue what his account of existential knowledge is not because he has no plausible way to argue what it is. He makes gestures toward aligning his account with the existential and phenomenological traditions in continental philosophy, as well as with more recent work on ‘embodied cognition’ in philosophy of mind, but he adduces nothing more than the common recognition of “the primacy of our subjective experience as embodied creatures in the world…” (65). In his consideration of various possible objections to his account, he adverts to the fact that we regularly refer to ‘knowledge of our experiences’ in everyday life. This is a powerful consideration to be sure, but one that begs explanation far more than it evidences his account. In fact, aside from continually appealing to the tautological assumption that some kind of knowledge has to be involved in knowing experience, he really offers nothing in the way of positive, naturalistic characterizations of his model–that is, until he turns to Bermudez and the notion of ‘self-specifying content,’ the way an organism’s perception and proprioception bears tacit information regarding itself: the way, for instance, seeing a portion of a ball around a corner implicitly means you are standing around the corner from a ball.

To be clear, I am not concerned with the informational content of these states here. Instead, the key point is that the informational content is self-specifying in nature and that phenomenal states themselves have a similar self-specifying nature that results from being embodied and situated in the world. The experience itself provides immediate and intimate knowledge about the experiencing agent to that same agent, in a direct non-dichotomous and non-mediated manner. By its very nature, such a phenomenal state confers self-understanding in the most primitive manner possible to an experiencing subject. (69)

The problem with this apparent elaboration of his account, however, is that ‘self-specifying content’ in no way requires conscious experience. In fact, all of any complex organism’s systematic environmental interactions require ‘self-specifying’ perceptual and proprioceptive ‘content,’ insofar as they need to ‘know’ their position and capabilities to do anything at all. This is simply a boilerplate assumption of the embodied cognition/ecological psychology crowd. And of course, very few of these organisms know anything, at least not in any nontendentious sense of the word ‘know.’ They just happen to ‘be’ these organisms. If this is what Butler means by “self-understanding in the most primitive manner possible” then he is plainly not talking about ‘understanding’ at all.

In fact, it becomes very difficult to understand precisely what he is talking about. On the one hand we have knowledge as intentionally understood–the very kind of relational knowledge that Butler’s account seeks to disqualify. On the other hand, we have the famous ‘triviality’ of mechanistic cognition, the way all life, as the product of evolution, represents solutions to various problems–the sense in which biology, in other words, is ‘cognitive all the way down.’ In this trivial or wide sense of cognition, then, conscious experience is of course cognitive in some respect. What else could it be?

If mechanistic or ‘wide’ cognition is indeed what underwrites Butler’s case, then the ‘in some respect’ is what becomes relevant to inquiry. To simply say that this respect is ‘phenomenal’ or ‘existential’ does nothing but confound the mystery.

But then, this is just what the question has been all along: If phenomenal experience is cognitive, then what kind of cognition is it, and why the hell does it baffle us so? The most Butler can do, it seems, is provide us with an account of what kind of cognition conscious experience is not. Aside from eliminating propositional, ability, and acquaintance accounts, his existential constitution model really doesn’t provide any kind of answer at all, let alone one that suggests future avenues of research. And the reason for this, I think, lies in his failure to pose, let alone address, our questions of information access and cognitive resources. What, neuromechanistically speaking, would something like the existential constitution model of knowledge require? What kind of information access and what kind of cognitive resources does the human brain need to ‘know what an experience is like’?

I think this question plainly reveals the spookiness of Butler’s account. Why? If he claims an experience is cognitive in the trivial or biomechanical sense, then he’s telling us nothing about the very ‘in some respect’ at issue. If ‘knowing what experience x is like’ involves some kind of spontaneous ‘cognition ex nihilo,’ then he owes us some kind of story: By virtue of what is experience x cognitive in your spontaneous first-personal sense? Otherwise he has simply found a clever way of gaming the problem into something that merely sounds like a solution. (He explicitly defines introspection as, “the process of seeking and/or acquiring knowledge of one’s own mind, from one’s own subjective first-person standpoint” (46, italics my own)). If phenomenal states ‘by their very nature confer self-understanding in the most primitive manner possible,’ as he claims, then just what is that ‘nature?’ If simply ‘being a first-person’ is sufficient for ‘first-person knowledge,’ Butler only has a workable, natural account of first person knowledge–knowledge of what an experience is like–to the extent that he has a workable, natural account of the first-person. And not surprisingly, he has none.

Call this the ‘metacognitive baseline problem.’ There is no way to gauge the epistemic virtues of our metacognitive toolbox short of some kind of yardstick, some reliable way of judging the reliability of a given introspective capacity. The irony is that Butler is actually very concerned with the question of cognitive resources (2). His existential constitution model of introspective knowledge is meant to account for what might be thought of as an ‘introspective baseline,’ the basis upon which various other kinds of ‘higher-level’ introspective capacities are based. “The central idea,” as he writes, “is that we engage in higher-level introspection by utilizing the mind’s cognitive capacities to represent and think about our own minds” (75). Accordingly, he devotes the rest of the book to considerations of what might be called the ‘introspective cognitive toolbox.’ But once again, since it all amounts to introspection boot-strapping introspection – using interpretations of ‘mind’ to anchor estimations of our ability to interpret the mind – I just don’t understand how it’s supposed to work.

BBT takes the brain as described by science as its yardstick for ‘introspective accuracy,’ the degree to which the brain does or does not get its own activities right. To this extent, it argues that introspection (and the philosophical tradition raised upon it) is plagued by a number of profound cognitive illusions pertaining to information privation and heuristic misapplication. The complexity the brain requires to accurately and comprehensively track its external environments is such that it cannot accurately and comprehensively track its internal environment. The brain can, at best, efficaciously track itself, which is to say, cognize limited amounts of information keyed to very specific problems. Perhaps this information can be efficaciously applied ‘out of school,’ perhaps not. (No doubt, spandrels abound in metacognition). Either way, this information cannot provide the basis for an accurate and comprehensive account of anything. And this quite simply means there is no such thing as ‘mind.’ There is only the brain, splintered and occluded by the heuristics populating our metacognitive toolbox, a hodgepodge of specific capacities adapted to a hodgepodge of specific problem ecologies, which theoretical reflection, utterly blind to its myopia, fuses and confounds and reifies into the ‘mind.’

With his ontological account, Butler essentially offers us his own, localized version of Descartes’ cogito, one taking experience as a self-evident foundation. In a sense, his ‘first-person experience’ constitutes the self-interpreting rule, or transcendental signified, or whatever tradition-specific term you want to apply to such Munchausenesque formulations.

In contrast, the signature theoretical virtue of BBT lies in its ability to account for the apparent structure of this first-personal sense in biomechanically continuous terms. It can’t tell us what consciousness is, but it can offer a parsimonious and fairly comprehensive account of why it appears the way it does, and why we find it so baffling as a result. Briefly, it diagnoses the more puzzling aspects of the first-person in terms of various forms of neglect, informatic lacunae that are invisible as such, resulting in a series of what might be called ‘identity illusions,’ which in turn form the basis of our intuitions regarding the first-person. Since these are, ultimately, kinds of cognitive illusion, they resist explanation in natural terms and lack any fact of the matter to arbitrate between interpretations, thus generating endless grist for what we call philosophy. In essence, it explains apparently fundamental structural features of the first-person such as the now and intentionality in terms of a kind of ‘ontological ignorance,’ the trivial fact that information that is not broadcast or integrated into consciousness does not exist for conscious cognition. You could say that it explains the apparent structure of consciousness by turning it upside down.

Since BBT effectively explains away the first-person in the course of accounting for it, there really is no need to posit any spooky knowledge specific to it. On BBT, there is no ‘first-person knowledge’ so much as there is proximate, low-dimensional (and so highly heuristic) cognition of various brain activities (self and other), and there is distal, high-dimensional cognition of everything else. The apparent peculiarities of the first-person are the product of a variety of severe heuristic ‘compromises,’ particularly those involving structurally occluded dimensions of information. Many of its perplexing structural aspects, its nowness or aboutness, for example, it explains away as metacognitive artifacts of medial neglect. The famous problems pertaining to ‘what-is-it-likeness’ are likewise resolved by considering varieties of ‘brain blindness.’ The mystery of consciousness remains, of course, only relieved of the numerous conceptual confounds that presently render it so intractable as an explanandum. The so-called Hard Problem becomes a bad dream.

For Butler, I suspect, this approach simply has to amount to throwing the baby out with the bathwater. I can only shrug, offer that the baby was never really ‘there’ anyway, commiserate because, yeah, it really, really sucks, then challenge him to conjure his baby without simply compounding his reliance on magic. BBT, at least, can explain what it is the metacognizing brain is doing in terms continuous with what neuroscience has hitherto learned. With BBT the assumption that consciousness is some explicable natural phenomenon remains, but as an inferentially inert posit. No empirical longshots are required to explain the general cognitive situation of introspection.