
Reading From Bacteria to Bach and Back I: On Cartesian Gravity

by rsbakker

ABDUCTION AND DIAGNOSIS

Problem resolution generally possesses a diagnostic component; sometimes we can find workarounds, but often we need to know what the problem consists in before we can have any real hope of advancing beyond it. This is what Daniel Dennett proposes to do in his recent From Bacteria to Bach and Back: to not only sketch a story of how human comprehension arose from the mindless mire of biological competences, but to provide a diagnostic account of why we find such developmental stories so difficult to credit. He hews to the slogan I’ve oft repeated here on Three Pound Brain: We are natural in such a way that we find it impossible to intuit ourselves as natural. It’s his account of this ‘in such a way’ that I want to consider here. As I’ve said many times before, I think Dennett has come as close as any philosopher in history to unravelling the conjoined problems of cognition and consciousness—and I am obliged to his acumen and creativity in more ways than I could possibly enumerate—but I’m convinced he remains entangled, both theoretically and dialectically, by several vestigial commitments to intentionalism. He remains a prisoner of ‘Cartesian gravity.’ Nowhere is this clearer than in his latest book, where he sets out to show how blind competences, by hook, crook, and sheer, mountainous aggregation, can actually explain comprehension, which is to say, understanding as it appears to the intentional stance.

Dennett offers two rationales for braving the question of comprehension, the first turning on the breathtaking advances made in the sciences of life and cognition, the second anchored in his “better sense of the undercurrents of resistance that shackle our imaginations” (16). He writes:

I’ve gradually come to be able to see that there are powerful forces at work, distorting imagination—my own imagination included—pulling us first one way and then another. If you learn to see these forces too, you will find that suddenly things begin falling into place in a new way. 16-17

The original force, the one begetting subsequent distortions, he calls Cartesian gravity. He likens the scientific attempt to explain cognition and consciousness to a planetary invasion, with the traditional defenders standing on the ground with their native, first-person orientation, and the empirical invaders finding their third-person orientation continually inverted the closer they draw to the surface. Cartesian gravity, most basically, refers to the tendency to fall into first-person modes of thinking cognition and consciousness. This is a problem because of the various, deep incompatibilities between the first-person and third-person views. Like a bi-stable image (Dennett provides the famous Duck-Rabbit as an example), one can only see the one at the expense of seeing the other.

Cartesian gravity, in other words, refers to the intuitions underwriting the first-person side of the famed Explanatory Gap, but Dennett warns against viewing it in these terms because of the tendency in the literature to view the divide as an ontological entity (a ‘chasm’) instead of an epistemological artifact (a ‘glitch’). He writes:

[Philosophers] may have discovered the “gap,” but they don’t see it for what it actually is because they haven’t asked “how it got that way.” By reconceiving of the gap as a dynamic imagination-distorter that has arisen for good reasons, we can learn to traverse it safely or—what may amount to the same thing—make it vanish. 20-21

It’s important, I think, to dwell on the significance of what he’s saying here. First of all, taking the gap as a given, as a fundamental feature of some kind, amounts to an explanatory dereliction. As I like to put it, the fact that we, as a species, can explain the origins of nature down to the first second and yet remain utterly mystified by the nature of this explanation is itself a gobsmacking fact requiring explanation. Any explanation of human cognition that fails to explain why humans find themselves so difficult to explain is woefully incomplete. Dennett recognizes this, though I sometimes think he fails to recognize the dialectical potential of this recognition. There are few better ways to isolate the sound of stomping feet from the speculative cacophony, I’ve found, than by relentlessly posing this question.

Secondly, the argumentative advantage of stressing our cognitive straits turns directly on its theoretical importance: to naturalistically diagnose the gap is to understand the problem it poses. To understand the problem it poses is to potentially resolve that problem, to find some way to overcome the explanatory gap. And overcoming the gap, of course, amounts to explaining the first-person in third-person terms—to seize upon what has become the Holy Grail of philosophical and scientific speculation.

The point being that the whole cognition/consciousness debate stands balanced upon some diagnosis of why we find ourselves so difficult to fathom. As the centerpiece of his diagnosis, Cartesian gravity is absolutely integral to Dennett’s own position, and yet surveying the reviews From Bacteria to Bach and Back has received (as of 9/12/2017, at least), you find the notion is mentioned either in passing (as in Thomas Nagel’s piece in The New York Review of Books), dismissively (as in Peter Hankins’s review in Conscious Entities), or not at all.

Of course, it would probably help if anyone had any clue as to what ‘first-person’ or ‘third-person’ actually meant. A gap between gaps often feels like no gap at all.

ACCUMULATING MASS

“The idea of Cartesian gravity, as so far presented, is just a metaphor,” Dennett admits, “but the phenomenon I am calling by this metaphorical name is perfectly real, a disruptive force that bedevils (and sometimes aids) our imaginations, and unlike the gravity of physics, it is itself an evolved phenomenon. In order to understand it, we need to ask how and why it arose on the planet earth” (21). Part of the reason so many reviewers seem to have overlooked its significance, I think, turns on the sheer length of the story he proceeds to tell. Compositionally speaking, it’s rarely a good idea to go three hundred pages—wonderfully inventive, controversial pages, no less—without substantially revisiting your global explanandum. By the time Dennett tells us “[w]e are ready to confront Cartesian gravity head on” (335), it feels like little more than a rhetorical device—and understandably so.

The irony, of course, is that Dennett thinks that nothing less than Cartesian gravity has forced the circuitous nature of his route upon him. If he fails to regularly reference his metaphor, he continually adverts to its signature consequence: cognitive inversion, the way the sciences have taken our traditional, intuitive, ab initio, top-down presumptions regarding life and intelligence and turned them on their head. Where Darwin showed how blind, bottom-up processes can generate what appear to be amazing instances of design, Turing showed how blind, bottom-up processes can generate what appear to be astounding examples of intelligence, “natural selection on the one hand, and mindless computation on the other” (75). Despite some polemical and explanatory meandering (most all of it rewarding), he never fails to keep his dialectical target, Cartesian exceptionalism, firmly (if implicitly) in view.

A great number of the biological examples Dennett adduces in From Bacteria to Bach and Back will be familiar to those following Three Pound Brain. This is no coincidence, given that Dennett is both an info-junkie like myself and constantly on the lookout for examples of the same kinds of cognitive phenomena: in particular, those making plain the universally fractionate, heuristic nature of cognition, and those enabling organisms to neglect, and therefore build upon, pre-existing problem-solving systems. As he writes:

Here’s what we have figured out about the predicament of the organism: It is floating in an ocean of differences, a scant few of which might make a difference to it. Having been born to a long lineage of successful copers, it comes pre-equipped with gear and biases for filtering out and refining the most valuable differences, separating the semantic information from the noise. In other words, it is prepared to cope in some regards; it has built-in expectations that have served its ancestors well but may need revision at any time. To say that it has these expectations is to say that it comes equipped with partially predesigned appropriate responses all ready to fire. It doesn’t have to waste precious time figuring out from first principles what to do about an A or a B or a C. These are familiar, already solved problems of relating input to output, perception to action. These responses to incoming stimulation of its sensory systems may be external behaviors: a nipple affords sucking, limbs afford moving, a painful collision affords retreating. Or they may be entirely covert, internal responses, shaping up the neural armies into more effective teams for future tasks. 166

Natural environments consist of regularities, component physical processes systematically interrelated in ways that facilitate, transform, and extinguish other component physical processes. Although Dennett opts for the (I think) unfortunate terminology of ‘affordances’ and ‘Umwelts,’ what he’s really talking about are ecologies, the circuits of selective sensitivity and corresponding environmental frequency allowing for niches to be carved, eddies of life to congeal in the thermodynamic tide. With generational turnover, risk sculpts ever more morphological and behavioural complexity, and the life once encrusting rocks begins rolling them, then shaping and wielding them.

Now for Dennett, the crucial point is to see the facts of human comprehension in continuity with the histories that make it possible, all the while understanding why the appearance of human comprehension systematically neglects these self-same conditions. Since his accounts of language and cultural evolution (via memes) warrant entire posts in their own right, I’ll elide them here, pointing out that each follows this same incremental, explanatory pattern of natural processes enabling the development of further natural processes, tangled hierarchies piling toward something recognizable as human cognition. For Dennett, the coincidental appearance of La Sagrada Familia (arguably a paradigmatic example of top-down thinking given Gaudi’s reputed micro-managerial mania) and Australian termite castles expresses a profound continuity as well, one which, when grasped, allows for the demystification of comprehension, and inoculation against the pernicious effects of Cartesian gravity. The leap between the two processes, what seems to render the former miraculous in a way the latter does not, lies in the sheer plasticity of the processes responsible, the way the neurolinguistic mediation of effect feedback triggers the adaptive explosion we call ‘culture.’ Dennett writes:

Our ability to do this kind of thinking [abstract reasoning/planning] is not accomplished by any dedicated brain structure not found in other animals. There is no “explainer nucleus” for instance. Our thinking is enabled by the installation of a virtual machine made of virtual machines made of virtual machines. The goal of delineating and explaining this stack of competences via bottom-up neuroscience alone (without the help of cognitive neuroscience) is as remote as the goal of delineating and explaining the collection of apps on your smart phone by a bottom-up deciphering of its hardware circuit design and the bit-strings in memory without taking a peek at the user interface. The user interface of an app exists in order to make the competence accessible to users—people—who can’t know, and don’t need to know, the intricate details of how it works. The user-illusions of all the apps stored in our brains exist for the same reason: they make our competences (somewhat) accessible to users—other people—who can’t know, and don’t need to know, the intricate details. And then we get to use them ourselves, under roughly the same conditions, as guests in our own brain. 341

This is the Dennettian portrait of the first-person, or consciousness as it’s traditionally conceived: a radically heuristic point of contact and calibration between endogenous and exogenous systems, one resting on occluded stacks of individual, collective, and evolutionary competence. The overlap between what can be experienced and what can be reported is no cosmic coincidence: the two are (likely) coeval, part of a system dedicated to keeping both ourselves and our compatriots as well informed/misinformed—and as well armed with the latest competences available—as possible.

We can give this strange idea an almost paradoxical spin: it is like something to be you because you have been enabled to tell us—or refrain from telling us—what it’s like to be you!

When we evolved into us, a communicating community of organisms that can compare notes, we became the beneficiaries of a system of user-illusions that rendered versions of our cognitive processes—otherwise as imperceptible as our metabolic processes—accessible to us for purposes of communication. 344

Far from the phenomenological plenum the (Western) tradition has taken it to be, then, consciousness is a presidential brief prepared by unscrupulous lobbyists, a radically synoptic aid to specific, self-serving forms of individual and collective action.

our first-person point of view of our own minds is not so different from our second-person point of view of others’ minds: we don’t see, or hear, or feel, the complicated neural machinery churning away in our brains but have to settle for an interpreted, digested version, a user-illusion that is so familiar to us that we take it not just for reality but also for the most indubitable and intimately known reality of all. That’s what it is like to be us. 345

Thus, the astounding problem posed by Cartesian gravity. As a socio-communicative interface possessing no access whatsoever to our actual sources, we can only be duped by our immediate intuitions. Referring to John Searle’s Cartesian injunction to insist upon a first-person solution of meaning and consciousness, Dennett writes:

The price you pay for following Searle’s advice is that you get all your phenomena, the events and things that have to be explained by your theory, through a channel designed not for scientific investigation but for handy, quick-and-dirty use in the rough and tumble of time-pressured life. You can learn a lot about how the brain works this way—you can learn quite a lot about computers by always insisting on the desk-top point of view, after all—but only if you remind yourself that your channel is systematically oversimplified and metaphorical, not literal. That means you must resist the alluring temptation to postulate a panoply of special subjective properties (typically called qualia) to which you (alone) have access. Those are fine items for our manifest image, but they must be “bracketed,” as the phenomenologists say, when we turn to scientific explanation. Failure to appreciate this leads to an inflated list of things that need to be explained, featuring, preeminently, a Hard Problem that is nothing but an artifact of the failure to recognize that evolution has given us a gift that sacrifices literal truth for utility. 365-366

Sound familiar? Human metacognitive access and capacity is radically heuristic, geared to the solution of practical ancestral problems. As such, we should expect that tasking that access and capacity, ‘relying on the first-person,’ with solving theoretical questions regarding the nature of experience and cognition will prove fruitless.
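
Dennett’s user-interface metaphor is concrete enough to mock up. What follows is a minimal Python sketch of my own—the class, names, and numbers are all invented for illustration, not anything from the book: a mass of hidden machinery, a radically compressed public interface, and ‘introspection’ that routes through that same interface, leaving us guests in our own brains.

```python
# A toy rendering of the 'user-illusion' metaphor; everything here is
# an invented illustration, not Dennett's own formulation.

class Agent:
    def __init__(self):
        # High-dimensional machinery no 'user'--including the agent--ever sees.
        self._state = [0.0] * 10_000

    def _churn(self, stimulus):
        # Stand-in for the occluded competences doing the actual work.
        self._state = [s + stimulus * 1e-6 for s in self._state]

    def report(self):
        # The interface: a few coarse, consumer-ready categories,
        # built for quick-and-dirty social consumption.
        mood = sum(self._state) / len(self._state)
        return "feeling fine" if mood < 1.0 else "feeling off"

    def introspect(self):
        # First-person access is the same interface consumed in-house:
        # there is no deeper channel.
        return self.report()

agent = Agent()
agent._churn(0.5)
print(agent.report())      # what others get
print(agent.introspect())  # what 'we' get: the very same digest
```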

It’s worth pausing here, I think, to emphasize just how much this particular argumentative tack represents a departure from Dennett’s prior attempts to clear intuitive ground for his views. Nothing he says here is unprecedented: heuristic neglect has always lurked in the background of his view, always found light of day in this or that corner of this or that argument. But at no point—not in Consciousness Explained, nor even in “Quining Qualia”—has it occupied the dialectical pride of place he concedes it in From Bacteria to Bach and Back. Prior to this book, Dennett’s primary strategy has been to exploit the kinds of ‘crashes’ brought about by heuristic misapplication (though he never explicitly characterizes them as such). Here, with Cartesian gravity, he takes a gigantic step toward theorizing the neurocognitive bases of the problematic ‘intuition pumps’ he has targeted over the years. This allows him to generalize his arguments against first-person theorizations of experience in a manner that had hitherto escaped him.

But he still hasn’t quite found his way entirely clear. As I hope to show, heuristic neglect is far more than simply another tool Dennett can safely store with his pre-existing commitments. The best way to see this, I think, is to consider one particular misreading of the new argument against qualia in Chapter 14.

GRAVITY MEETS REALITY

In “Dennett and the Reality of Red,” Tom Clark presents a concise and elegant account of how Dennett’s argument against the reality of qualia in From Bacteria to Bach and Back turns upon a misplaced physicalist bias. The extraordinary thing about his argument—and the whole reason we’re considering it here—lies in the way he concedes so much of Dennett’s case, only to arrive at a version of the very conclusion Dennett takes himself to be arguing against:

I’d suggest that qualia, properly understood, are simply the discriminable contents of sensory experience – all the tastes, colors, sounds, textures, and smells in terms of which reality appears to us as conscious creatures. They are not, as Dan correctly says, located or rendered in any detectable mental medium. They’re not located anywhere, and we are not in an observational or epistemic relationship to them; rather they are the basic, not further decomposable, hence ineffable elements of the experiences we consist of as conscious subjects.

The fact that ‘Cartesian gravity’ appears nowhere in his critique, however, pretty clearly signals that something has gone amiss. Showing as much requires I provide some missing context.

After introducing his user-illusion metaphor for consciousness, Dennett is quick to identify the fundamental dialectical problem Cartesian gravity poses his characterization:

if (as I have just said) your individual consciousness is rather like the user-illusion on your computer screen, doesn’t this imply that there is a Cartesian theatre after all, where this portrayal happens, where the show goes on, rather like the show you perceive on the desktop? No, but explaining what to put in place of the Cartesian theatre will take some stretching of the imagination. 347

This is the point where he introduces a third ‘strange inversion of reasoning,’ this one belonging to Hume. Hume’s inversion, curiously enough, lies in his phenomenological observation of the way we experience causation ‘out there,’ in the world, even though we know, given our propensity to get it wrong, that it belongs to the machinery of cognition. (This is a canny move on Dennett’s part, but I think it demonstrates the way in which the cognitive consequences of heuristic neglect remain, as yet, implicit for him). What he wants is to ‘theatre-proof’ his account of conscious experience as a user-illusion. Hume’s inversion provides him a way to both thematize and problematize the automatic assumption that the illusion must itself be ‘real.’

The new argument for qualia eliminativism he offers, and that Clark critiques, is meant to “clarify [his] point, if not succeed in persuading everybody—as Hume says, the contrary notion is so riveted in our minds” (358). He gives the example of the red afterimage experienced in complementary colour illusions.

The phenomenon in you that is responsible for this is not a red stripe. It is a representation of a red stripe in some neural system of representation that we haven’t yet precisely located and don’t yet know how to decode, but we can be quite sure it is neither red nor a stripe. You don’t know exactly what causes you to seem to see a red stripe out in the world, so you are tempted to lapse into Humean misattribution: you misinterpret your sense (judgment, conviction, belief, inclination) that you are seeing a red stripe as arising from a subjective property (a quale, in the jargon of philosophy) that is the source of your judgment, when in fact, that is just about backward. It is your ability to describe “the red stripe,” your judgment, your willingness to make the assertions you just made, and your emotional reactions (if any) to “the red stripe” that is the source of your conviction that there is a subjective red stripe. 358-359

The problem, Dennett goes on to assert, lies in “mistaking the intentional object of a belief for its cause” (359). In normal circumstances, when we find ourselves in the presence of an apple, say, we’re entirely justified in declaring the apple the cause of our belief. In abnormal circumstances, however, this reflex dupes us into thinking that something extra-environmental—‘ineffable,’ supernatural—has to be the cause. And thus are inscrutable (and therefore perpetually underdetermined) theoretical posits like qualia born, giving rise to scholastic excesses beyond numbering.

Now the key to this argument lies in the distinction between normal and abnormal circumstances, which is to say the cognitive ecology occasioning the application of a certain heuristic regime—namely the one identified by Hume. For Clark, however, the salient point of Dennett’s argument is that the illusory red stripe lies nowhere.

Dan, a good, sophisticated physicalist, wants everything real to be locatable in the physical external world as vetted by science. What’s really real is what’s in the scientific image, right? But if you believe that we really have experiences, that experiences are specified in terms of content, and that color is among those contents, then the color of the experienced afterimage is as real as experiences. But it isn’t locatable, nor are any of the contents of experience: experiences are not observables. We don’t find them out there in spacetime or when poking around in the brain; we only find objects of various qualitative, quantitative and conceptual descriptions, including the brains with which experiences are associated. But since experiences and their contents are real, this means that not all of what’s real is locatable in the physical, external world.

Dennett never denies that we have experiences, and he even alludes to the representational basis of those experiences in the course of making his red stripe argument. A short time later, in his consideration of Cartesian gravity, he even admits that our ability to report our experiences turns on their content: “By taking for granted the content of your mental states, by picking them out by their content, you sweep under the rug all the problems of indeterminacy or vagueness of content” (367).

And yet, even though Clark is eager to seize on these and other instances of experience-talk, representation-talk, and content-talk, he completely elides the circumstances occasioning them, and thus the way Dennett sees all of these usages as profoundly circumstantial—‘normal’ or ‘abnormal.’ Sometimes they’re applicable, and sometimes they’re not. In a sense, the reality/unreality of qualia is actually beside the point; what’s truly at issue is the applicability of the heuristic tools philosophy has traditionally applied to experience. The question is, What does qualia-talk add to our ability to naturalistically explain colour, affect, sound, and so on? No one doubts our ability to correlate reportable metacognitive aspects of experience to various neural and environmental facts. No one doubts our sensory discriminatory abilities outrun our metacognitive discriminatory abilities—our ability to report. The empirical relationships are there regardless: the question is one of whether the theoretical paradigms we reflexively foist on these relationships lead anywhere other than endless disputation.

Clark not only breezes past the point of Dennett’s Red Stripe argument, he also overlooks the rather stark challenge it poses to his own position. Simply raising the spectre of heuristic metacognitive inadequacy, as Dennett does, obliges Clark to warrant his assumptive metacognitive claims. (Arguing, as Clark does, that we have no epistemic relation to our experiences simply defers the obligation to this second extraordinary claim: heaping speculation atop speculation generates more problems, not fewer). Dennett spends hundreds of pages amassing empirical evidence for the fractionate, heuristic nature of cognition. Given that our ancestors required only the solution of practical problems, the chances that human metacognition furnishes the information and capacity required to intuit the nature of experience (that it consists of representations consisting of contents consisting of qualia) are vanishingly small. What we should expect is that our metacognitive reflexes will do what they’ve always done: apply routines adapted to practical cognitive and communicative problem resolution to what amounts to a radically unprecedented problem ecology. All things being equal, it’s almost certain that the so-called first-person can do little more than flounder before the theoretical question of itself.

The history of intentional philosophy and psychology, if nothing else, vividly illustrates as much.

In the case of content, it’s hard not to see Clark’s oversight as tendentious insofar as Dennett is referring to the way content talk exposes us to Cartesian gravity (“Reading your own mind is too easy” (367)) and the relative virtues of theorizing cognition via nonhuman species. But otherwise, I’m inclined to think Clark’s reading of Dennett is understandable. Clark misses the point of heuristic neglect entirely, but only because Dennett himself remains fuzzy on just how his newfound appreciation for the Grand Inversion—the one we’ve been exploring here on Three Pound Brain for years now—bears on his preexisting theoretical commitments. In particular, he has yet to see the hash it makes of his ‘stances’ and the ‘real patterns’ underwriting them. As soon as Dennett embraced heuristic neglect, opportunistic eliminativism ceased being an option. As goes the ‘reality’ of qualia, so goes the ‘reality’ supposedly underwriting the entire lexicon of traditional intentionalist philosophy. Showing as much, however, requires showing how Heuristic Neglect Theory arises out of the implications of Dennett’s own argument, and how it transforms Cartesian gravity into a proto-cognitive psychological explanation of intentional philosophy—an empirically tractable explanation for why humanity finds humanity so dumbfounding. But since I’m sure eyes are crossing and chins are nodding, I’ll save the way HNT can be directly drawn from the implicature of Dennett’s position for a second installment, then show, in my third and final post, how HNT denies representation ‘reality’ while explaining what makes representation talk so useful—capping what has been one of the most exciting reading adventures of my life.


The Knowledge Illusion Illusion

by rsbakker

 

 

When academics encounter a new idea that doesn’t conform to their preconceptions, there’s often a sequence of three reactions: first dismiss, then reject, then finally declare it obvious. Steven Sloman and Philip Fernbach, The Knowledge Illusion, 255

 

The best example illustrating the thesis put forward in Steven Sloman and Philip Fernbach’s excellent The Knowledge Illusion: Why We Never Think Alone is one I’ve belaboured before, the bereft ‘well-dressed man’ in Byron Haskin’s 1953 version of The War of the Worlds, dismayed at his malfunctioning pile of money, unable to comprehend why it couldn’t secure him passage out of Los Angeles. So keep this in mind: if all goes well, we shall return to the well-dressed man.

The Knowledge Illusion is about a great many things, everything from basic cognitive science to political polarization to educational reform, but it all comes back to how individuals are duped by the ways knowledge outruns individual human brains. The praise for this book has been nearly universal, and deservedly so, given the existential nature of the ‘knowledge problematic’ in the technological age. Because of this consensus, however, I’ll play the devil’s advocate and focus on what I think are core problems. For all the book’s virtues, I think Steven Sloman, Professor of Cognitive, Linguistic, and Psychological Sciences at Brown University, and Philip Fernbach, Assistant Professor at the University of Colorado, find themselves wandering the same traditional dead ends afflicting all philosophical and psychological discourses on the nature of human knowledge. The sad fact is nobody knows what knowledge is. They only think they do.

Sloman and Fernbach begin with a consideration of our universal tendency to overestimate our understanding. In a wide variety of tests, individuals regularly fail to provide first-order evidence regarding second-order reports of what they know. So for instance, they say they understand how toilets or bicycles work, yet find themselves incapable of accurately drawing the mechanisms responsible. Thus the ‘knowledge illusion,’ or the ‘illusion of explanatory depth,’ the consistent tendency to think our understanding of various phenomena and devices is far more complete than it in fact is.

This calves into two interrelated questions: 1) Why are we so prone to think we know more than we do? and 2) How can we know so little yet achieve so much? Sloman and Fernbach think the answer to both these questions lies in the way human cognition is embodied, embedded, and enactive, which is to say, the myriad ways it turns on our physical and social environmental interactions. They also hold the far more controversial position that cognition is extended, that ‘mind,’ understood as a natural phenomenon, just ain’t in our heads. As they write:

The main lesson is that we should not think of the mind as an information processor that spends its time doing abstract computation in the brain. The brain and the body and the external environment all work together to remember, reason, and make decisions. The knowledge is spread through the system, beyond just the brain. Thought does not take place on a stage inside the brain. Thought uses knowledge in the brain, the body, and the world more generally to support intelligent action. In other words, the mind is not in the brain. Rather, the brain is in the mind. The mind uses the brain and other things to process information. 105

The Knowledge Illusion, in other words, lies astride the complicated fault-line between cognitivism, the tendency to construe cognition as largely representational and brain-bound, and post-cognitivism, the tendency to construe cognition as constitutively dependent on the community and environment. Since the book is not only aimed at a general audience but also about the ways humans are so prone to confuse partial for complete accounts, it is more than ironic that Sloman and Fernbach fail to contextualize the speculative, and therefore divisive, nature of their project. Charitably, you could say The Knowledge Illusion runs afoul the very ‘curse of knowledge’ illusion it references throughout, the failure to appreciate the context of cognitive reception—the tendency to assume that others know what you know, and so will draw similar conclusions. Less charitably, the suspicion has to be that Sloman and Fernbach are actually relying on the reader’s ignorance to cement their case. My guess is that the answer lies somewhere in the middle, and that the authors, given their sensitivity to the foibles and biases built into human communication and cognition, would acknowledge as much.

But the problem runs deeper. The extended mind hypothesis is subject to a number of apparently decisive counter-arguments. One could argue à la Adams and Aizawa, for instance, and accuse Sloman and Fernbach of committing the so-called ‘causal-constitutive fallacy,’ mistaking causal influences on cognition for cognition proper. Even if we do accept that external factors are constitutive of cognition, the question becomes one of where cognition begins and ends. What is the ‘mark of the cognitive’? After all, ‘environment’ potentially includes the whole of the physical universe, and ‘community’ potentially reaches back to the origins of life. Should we take a page from Hegel and conclude that everything is cognitive? If our minds outrun our brains, then just where do they end?

So far, every attempt to overcome these and other challenges has only served to complicate the controversy. Cognitivism remains a going concern for good reason: it captures a series of powerful second-order intuitions regarding the nature of human cognition, intuitions that post-cognitivists like Sloman and Fernbach would have us set aside on the basis of incompatible second-order intuitions regarding that self-same nature. Where the intuitions milked by cognitivism paint an internalist portrait of knowledge, the intuitions milked by post-cognitivism sketch an externalist landscape. Back and forth the arguments go, each side hungry to recruit the latest scientific findings into their explanatory paradigms. At some point, the unspoken assumption seems to be, the abductive weight supporting either position will definitively tip in favour of one or the other. By the time we return to our well-dressed man and his heap of useless money, I hope to show how and why this will never happen.

For the nonce, however, the upshot is that either way you cut it, knowledge, as the subject of theoretical investigation, is positively awash in illusions, intuitions that seem compelling, but just ain’t so. For some profound reason, knowledge and other so-called ‘intentional phenomena’ baffle us in a way distinct from all other natural phenomena, with the exception of consciousness. This is the sense in which one can speak of the Knowledge Illusion Illusion.

Let’s begin with Sloman and Fernbach’s ultimate explanation for the Knowledge Illusion:

The Knowledge Illusion occurs because we live in a community of knowledge and we fail to distinguish the knowledge that is in our heads from the knowledge outside of it. We think the knowledge we have about how things work sits inside our skulls when in fact we’re drawing a lot of it from the environment and from other people. This is as much a feature of cognition as it is a bug. The world and our community house most of our knowledge base. A lot of human understanding consists simply of awareness that the knowledge is out there. 127-128.

The reason we presume knowledge sufficiency, in other words, is that we fail to draw a distinction between individual knowledge and collective knowledge, between our immediate know-how and know-how requiring environmental and social mediation. Put differently, we neglect various forms of what might be called cognitive dependency, and so assume cognitive independence, the ability to answer questions and solve problems absent environmental and social interactions. We are prone to forget, in other words, that our minds are actually extended.

This seems elegant and straightforward enough: as any parent (or spouse) can tell you, humans are nothing if not prone to take things for granted! We take the contributions of our fellows for granted, and so reliably overestimate our own epistemic wherewithal. But something peculiar has happened. Framed in these terms, the knowledge illusion suddenly bears a striking resemblance to the correspondence or attribution error, our tendency to put our fingers on our side of the scales when apportioning social credit. We generally take ourselves to have more epistemic virtue than we in fact possess for the same reason we generally take ourselves to have more virtue than we in fact possess: because ancestrally, confabulatory self-promotion paid greater reproductive dividends than accurate self-description. The fact that we are more prone to overestimate epistemic virtue given accessibility to external knowledge sources, on this account, amounts to no more than the awareness that we have resources to fall back on, should someone ‘call bullshit.’

There’s a great deal that could be unpacked here, not the least of which is the way changing demonstrations of knowledge into demonstrations of epistemic virtue radically impacts the case for the extended mind hypothesis. But it’s worth considering, I think, how this alternative explanation illuminates an earlier explanation they give of the illusion:

So one way to conceive of the illusion of explanatory depth is that our intuitive system overestimates what it can deliberate about. When I ask you how a toilet works, your intuitive system reports, “No problem, I’m very comfortable with toilets. They are part of my daily experience.” But when your deliberative system is probed by a request to explain how they work, it is at a loss because your intuitions are only superficial. The real knowledge lies elsewhere. 84

In the prior explanation, the illusion turns on confusing our individual with our collective resources. We presume that we possess knowledge that other people have. Here, however, the illusion turns on the superficiality of intuitive cognition. “The real knowledge lies elsewhere” plays no direct explanatory role whatsoever. The culprit here, if anything, lies with what Daniel Kahneman terms WYSIATI, or ‘What-You-See-Is-All-There-Is,’ effects, the way subpersonal cognitive systems automatically presume the cognitive sufficiency of whatever information/capacity they happen to have at their disposal.

So, the question is, do we confabulate cognitive independence because subpersonal cognitive processing lacks the metacognitive monitoring capacity to flag problematic results, or because such confabulations facilitated ancestral reproductive success, or because our blindness to the extended nature of knowledge renders us prone to this particular type of metacognitive error?

The first two explanations, at least, can be combined. Given the divide and conquer structure of neural problem-solving, the presumptive cognitive sufficiency (WYSIATI) of subpersonal processing is inescapable. Each phase of cognitive processing turns on the reliability of the phases preceding (which is why we experience sensory and cognitive illusions rather than error messages). If those illusions happen to facilitate reproduction, as they often do, then we end up with biological propensities to commit things like epistemic attribution errors. We both think and declare ourselves more knowledgeable than we in fact are.

Blindness to the ‘extended nature of knowledge,’ on this account, doesn’t so much explain the knowledge illusion as follow from it.

The knowledge illusion is primarily a metacognitive and evolutionary artifact. This actually follows as an empirical consequence of the cornerstone commitment of Sloman and Fernbach’s own theory of cognition: the fact that cognition is fractionate and heuristic, which is to say, ecological. This becomes obvious, I think, but only once we see our way past the cardinal cognitive illusion afflicting post-cognitivism.

Sloman and Fernbach, like pretty much everyone writing popular accounts of embodied, embedded, and enactive approaches to cognitive science, provide the standard narrative of the rise and fall of GOFAI, standard computational approaches to cognition. Cognizing, on this approach, amounts to recapitulating environmental systems within universal computational systems, going through the enormous expense of doing in effigy in order to do in the world. Not only is such an approach expensive, it requires prior knowledge of what needs to be recapitulated and what can be ignored—tossing the project into the infamous jaws of the Frame Problem. A truly general cognitive system is omni-applicable, capable of solving any problem in any environment, given the requisite resources. The only way to assure that ecology doesn’t matter, however, is to have recapitulated that ecology in advance.

The question from a biological standpoint is simply one of why we need to go through all the bother of recapitulating a problem-solving ecology when that ecology is already there, challenging us, replete with regularities we can exploit without needing to know them whatsoever. “This assumption that the world is behaving normally gives people a giant crutch,” as Sloman and Fernbach put it. “It means that we don’t have to remember everything because the information is stored in the world” (95). All cognition requires are reliable interactive systematicities—cognitive ecologies—to steer organisms through their environments. Heuristics are the product of cognitive systems adapted to the exploitation of the correlations between regularities available for processing and environmental regularities requiring solution. And since the regularities happened upon, cues, are secondary to the effects they enable, heuristic systems are always domain specific. They don’t travel well.

And herein lies the rub for Sloman and Fernbach: If the failure of cognitivism lies in its insensitivity to cognitive ecology, then the failure of post-cognitivism lies in its insensitivity to metacognitive ecology, the fact that intentional modes of theorizing cognition are themselves heuristic. Humans had need to troubleshoot claims, to distinguish guesswork from knowledge. But they possessed no access whatsoever to the high-dimensional facts of the matter, so they made do with what was available. Our basic cognitive intuitions facilitate this radically heuristic ‘making do,’ allowing us to debug any number of practical communicative problems. The big question is whether they facilitate anything theoretical. If intentional cognition turns on systems selected to solve practical problem ecologies absent information, why suppose it possesses any decisive theoretical power? Why presume, as post-cognitivists do, that the theoretical problem of intentional cognition lies within the heuristic purview of intentional cognition?

Its manifest inapplicability, I think, can be clearly discerned in The Knowledge Illusion. Consider Sloman and Fernbach’s contention that the power of heuristic problem-solving turns on the ‘deep’ and ‘abstract’ nature of the information exploited by heuristic cognitive systems. As they write:

Being smart is all about having the ability to extract deeper, more abstract information from the flood of data that comes into our senses. Instead of just reacting to the light, sounds, and smells that surround them, animals with sophisticated large brains respond to deep, abstract properties of the world that they are sensing. 46

But surely ‘being smart’ lies in the capacity to find, not abstracta, but tells, sensory features possessing reliable systematic relationships to deep environments. There’s nothing ‘deep’ or ‘abstract’ about the moonlight insects use to navigate at night—which is precisely why transverse orientation is so easily hijacked by bug-zappers and porch-lights. There’s nothing ‘deep’ or ‘abstract’ about the tastes triggering aversion in rats, which is why taste aversion is so easily circumvented by using chronic rodenticides. Animals with more complex brains, not surprisingly, can discover and exploit more tells, which can also be hijacked, cued ‘out of school.’ We bemoan the deceptive superficiality of political and commercial marketing for good reason! It’s unclear what ‘deeper’ or ‘more abstract’ add here, aside from millennial disputation. And yet Sloman and Fernbach continue, “[t]he reason that deeper, more abstract information is helpful is that it can be used to pick out what we’re interested in from an incredibly complex array of possibilities, regardless of how the focus of our interest presents itself” (46).
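
The moth case is simple enough to simulate. Below is a sketch (illustrative only: the light positions, speed, and sixty-degree offset are arbitrary assumptions) of transverse orientation, steering so that the light source stays at a fixed angle off one’s heading. With a source at effective infinity the rule yields a straight path; aimed at a nearby porch light, the very same rule wraps the path into a spiral around the bulb. The cue is shallow and perfectly adequate—until the ecology changes.

```python
import math

def fly(light, steps=200, speed=1.0, offset=math.radians(60)):
    """Transverse orientation: each step, re-point so the light source
    sits at a fixed angle off the direction of travel."""
    x, y = 0.0, 0.0
    for _ in range(steps):
        bearing = math.atan2(light[1] - y, light[0] - x)  # direction to the light
        heading = bearing - offset                        # hold the fixed offset
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    return x, y

# 'Moon' at effective infinity: the bearing never changes, so the path is straight.
mx, my = fly(light=(1e9, 1e9))
print("moon: travelled", round(math.hypot(mx, my), 1), "units, dead straight")

# Porch light 50 units away: holding the same angle spirals the flier into the bulb.
px, py = fly(light=(30.0, 40.0))
print("porch light: ended", round(math.hypot(px - 30.0, py - 40.0), 2), "units from the bulb")
```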

If a cue, or tell—be it a red beak or a prolonged stare or a scarlet letter—possesses some exploitable systematic relationship to some environmental problem, then nothing more is needed. Talk of ‘depth’ or ‘abstraction’ plays no real explanatory function, and invites no little theoretical mischief.

The term ‘depth’ is perhaps the biggest troublemaker here. Insofar as human cognition is heuristic, we dwell in shallow information environments, ancestral need-to-know ecologies, remaining (in all the myriad ways Sloman and Fernbach describe so well) entirely ignorant of the deeper environment and the super-complex systems comprising it. What renders tells so valuable is their availability, the fact that they are at once ‘superficial’ and systematically correlated to the neglected ‘deeps’ requiring solution. Tells possess no intrinsic mark of their depth or abstraction. It is not the case that “[a]s brains get more complex, they get better at responding to deeper, more abstract cues from the environment, and this makes them ever more adaptive to new situations” (48). What is the case is far more mundane: they get better at devising, combining, and collecting environmental tells.

And so, one finds Sloman and Fernbach at metaphoric war with themselves:

It is rare for us to directly perceive the mechanisms that create outcomes. We experience our actions and we experience the outcomes of those actions; only by peering inside the machine do we see the mechanism that makes it tick. We can peer inside when the components are visible. 73

As they go on to admit, “[r]easoning about social situations is like reasoning about physical objects: pretty shallow” (75).

The Knowledge Illusion is about nothing if not the superficiality of human cognition, and all the ways we remain oblivious to this fact because of this fact. “Normal human thought is just not engineered to figure out some things” (71), least of all the deep/fundamental abstracta undergirding our environment! Until the institutionalization of science, we were far more vulture than lion, information scavengers instead of predators. Only the scientific elucidation of our deep environments reveals how shallow and opportunistic we have always been, how reliant on ancestrally unfathomable machinations.

So then why do Sloman and Fernbach presume that heuristic cognition grasps things both abstract and deep?

The primary reason, I think, turns on the inevitably heuristic nature of our attempts to cognize cognition. We run afoul these heuristic limits every time we look up at the night sky. Ancestrally, light belonged to those systems we could take for granted; we had no reason to intuit anything about its deeper nature. As a result, we had no reason to suppose we were plumbing different pockets of the ancient past whenever we paused to gaze into the night sky. Our ability to cognize the medium of visual cognition suffers from what might be called medial neglect. We have to remind ourselves we’re looking across gulfs of time because the ecological nature of visual cognition presumes the ‘transparency’ of light. It vanishes into what it reveals, generating a simultaneity illusion.

What applies to vision applies to all our cognitive apparatuses. Medial neglect, in other words, characterizes all of our intuitive ways of cognizing cognition. At nearly every turn, the enabling dimension of our cognitive systems is consigned to oblivion, generating, upon reflection, the metacognitive impression of ‘transparency,’ or ‘aboutness’—intentionality in Brentano’s sense. So when Sloman and Fernbach attempt to understand the cognitive nature of heuristic selectivity, they cue the heuristic systems we evolved to solve practical epistemic problems absent any sensitivity to the actual systems responsible, and so run afoul a kind of ‘transparency illusion,’ the notion that heuristic cognition requires fastening onto something intrinsically simple and out there—a ‘truth’ of some description—when all our brain needs to do is identify some serendipitously correlated cue in its sensory streams.

This misapprehension is doubly attractive, I think, for the theoretical cover it provides their contention that all human cognition is causal cognition. As they write:

… the purpose of thinking is to choose the most effective action given the current situation. That requires discerning the deep properties that are constant across situations. What sets humans apart is our skill at figuring out what those deep, invariant properties are. It takes human genius to identify the key properties that indicate if someone has suffered a concussion or has a communicable disease, or that it’s time to pump up a car’s tires. 53

In fact, they go so far as to declare us “the world’s master causal thinkers” (52)—a claim they spend the rest of the book qualifying. As we’ve seen, humans are horrible at understanding how things work: “We may be better at causal reasoning than other kinds of reasoning, but the illusion of explanatory depth shows that we are still quite limited as individuals in how much of it we can do” (53).

So, what gives? How can we be both causal idiots and causal savants?

Once again, the answer lies in their own commitments. Time and again, they demonstrate the way the shallowness of human cognition prevents us from cognizing that shallowness as such. The ‘deep abstracta’ posited by Sloman and Fernbach constitute a metacognitive version of the very illusion of explanatory depth they’re attempting to solve. Oblivious to the heuristic nature of our metacognitive intuitions, they presume those intuitions deep, theoretically sufficient ways to cognize the structure of human cognition. Like the physics of light, the enabling networks of contingent correlations assuring the efficacy of various tells get flattened into oblivion—the mediating nature vanishes—and the connection between heuristic systems and the environments they solve becomes an apparently intentional one, with ‘knowing’ here, ‘known’ out there, and nothing in between. Rather than picking out strategically connected cues, heuristic cognition isolates ‘deep causal truths.’

How can we be both idiots and savants when it comes to causality? The fact is, all cognition is not causal cognition. Some cognition is causal, while other cognition—the bulk of it—is correlative. What Sloman and Fernbach systematically confuse are the kinds of cognitive efficacy belonging to the isolation of actual mechanisms with the kinds of cognitive efficacy belonging to the isolation of tells possessing unfathomable (‘deep’) correlations to those mechanisms. The latter cognition, if anything, turns on ignoring the actual causal regularities involved. This is what makes it both so cheap and so powerful (for both humans and AI): it relieves us of the need to understand the deeper nature of things, allowing us to focus on what happens next.

Although some predictions turn on identifying actual causes, those requiring the heuristic solution of complex systems turn on identifying tells, triggers that are systematically correlated precursors to various significant events. Given our metacognitive neglect of the intervening systems, we regularly fetishize the tells available, take them to be the causes of the kinds of effects we require. Sloman and Fernbach’s insistence on the causal nature of human cognition commits this very error: it fetishizes heuristic cues. (Or to use Klaus Fiedler’s terminology, it mistakes pseudocontingencies for genuine contingencies; or to use Andrei Cimpian’s, it fails to recognize a kind of ‘inherence heuristic’ as heuristic).
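
A toy simulation makes the fetish palpable (the ‘weather,’ the barometer, and every number below are invented for the purpose): a correlated tell predicts the effect handsomely, yet intervening on the tell accomplishes nothing, because the dial was never a cause.

```python
import random

random.seed(1)

def day(forced_reading=None):
    """One day of toy weather: atmospheric pressure causes both the
    barometer reading (the tell) and the storm (the effect)."""
    pressure = random.gauss(1000, 10)
    reading = forced_reading if forced_reading is not None else pressure + random.gauss(0, 2)
    storm = pressure < 990  # storms depend on pressure, never on the dial
    return reading, storm

# Observation: the tell is an excellent predictor.
days = [day() for _ in range(100_000)]
low = [storm for reading, storm in days if reading < 990]
print("P(storm | low reading):", round(sum(low) / len(low), 2))

# Intervention: pinning the dial low leaves the weather untouched.
rigged = [day(forced_reading=980.0) for _ in range(100_000)]
print("P(storm | dial forced low):", round(sum(s for _, s in rigged) / len(rigged), 2))
```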

The power of predictive reasoning turns on the plenitude of potential tells, our outright immersion in environmental systematicities. No understanding of celestial mechanics is required to use the stars to anticipate seasonal changes and so organize agricultural activities. The cost of this immersion, on the other hand, is the inverse problem, the problem of isolating genuine causes as opposed to mere correlations on the basis of effects. In diagnostic reasoning, the sheer plenitude of correlations is the problem: finding causes amounts to finding needles in haystacks, sorting systematicities that are genuinely counterfactual from those that are not. Given this difficulty, it should come as no surprise that problems designed to cue predictive deliberation tend to neglect the causal dimension altogether. Tells, even when imbued with causal powers, fetishized, stand entirely on their own.
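
The asymmetry is easy to put in miniature. In the hedged sketch below (the causes, the effect, and every probability are invented for illustration), running a conditional forward is a single lookup, while running it backward drags in base rates over the entire haystack of candidate causes.

```python
# P(effect | cause): the forward direction predictive reasoning exploits.
p_effect_given_cause = {"flu": 0.90, "cold": 0.80, "allergies": 0.70,
                        "dust": 0.60, "smoke": 0.50}
# Base rates a predictor can ignore but a diagnostician cannot.
p_cause = {"flu": 0.02, "cold": 0.10, "allergies": 0.15,
           "dust": 0.30, "smoke": 0.05}

# Prediction: given the cause, anticipating the effect is one lookup.
print("P(sneezing | dust) =", p_effect_given_cause["dust"])

# Diagnosis: given only the effect, every candidate cause is in play at once;
# sorting them requires priors and renormalization across the whole field.
joint = {c: p_cause[c] * p_effect_given_cause[c] for c in p_cause}
total = sum(joint.values())
print("P(cause | sneezing):", {c: round(v / total, 3) for c, v in joint.items()})
```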

Sloman and Fernbach’s explanation of ‘alternative cause neglect’ thoroughly illustrates, I think, the way cognitivism and post-cognitivism have snarled cognitive psychology in the barbed wire of incompatible intuitions. They also point out the comparative ease of predictive versus diagnostic reasoning. But where the above sketch explains this disparity in thoroughly ecological terms, their explanation is decidedly cognitivist: we recapitulate systems, they claim, run ‘mental simulations’ to explore the space of possible effects. Apparently, running these tapes backward to explore the space of possible causes is not something nature has equipped us to do, at least easily. “People ignore alternative causes when reasoning from cause to effect,” they contend, “because their mental simulations have no room for them, and because we’re unable to run mental simulations backward in time from effect to cause” (61).

Even setting aside the extravagant metabolic expense their cognitivist tack presupposes, it’s hard to understand how this explains much of anything, let alone how the difference between these two modes figures in the ultimate moral of Sloman and Fernbach’s story: the social intransigence of the knowledge illusion.

Toward the end of the book, they provide a powerful and striking picture of the way false beliefs seem to have little, if anything, to do with the access to scientific facts. The provision of reasons likewise has little or no effect. People believe what their group believes, thus binding generally narcissistic or otherwise fantastic worldviews to estimations of group membership and identity. For Sloman and Fernbach, this dovetails nicely with their commitment to extended minds, the fact that ‘knowing’ is fundamentally collective.

Beliefs are hard to change because they are wrapped up with our values and identities, and they are shared with our community. Moreover, what is actually in our own heads—our causal models—are sparse and often wrong. This explains why false beliefs are so hard to weed out. Sometimes communities get the science wrong, usually in ways supported by our causal models. And the knowledge illusion means that we don’t check our understanding often or deeply enough. This is a recipe for antiscientific thinking. 169

But it’s not simply the case that reports of belief signal group membership. One need only think of the ‘kooks’ or ‘eccentrics’ in one’s own social circles (and fair warning, if you can’t readily identify one, that likely means you’re it!) to bring home the cognitive heterogeneity one finds in every community, people who demonstrate reliability in some other way (like my wife’s late uncle who never once attended church, but who cut the church lawn every week all the same).

Like every other animal on this planet, we’ve evolved to thrive in shallow cognitive ecologies, to pick what we need when we need it from wherever we can, be it the world or one another. We are cooperative cognitive scavengers, which is to say, we live in communal shallow cognitive ecologies. The cognitive reports of ingroup members, in other words, are themselves powerful tells, correlations allowing us to predict what will happen next absent deep environmental access or understanding. As an outgroup commentator on these topics, I’m intimately acquainted with the powerful way the who trumps the what in claim-making. I could raise a pyramid with all the mud and straw I’ve accumulated! But this has nothing to do with the ‘intrinsically communal nature of knowledge,’ and everything to do with the way we are biologically primed to rely on our most powerful ancestral tools. It’s not simply that we ‘believe to belong’; it’s that, ancestrally speaking, believing provided an extraordinarily metabolically cheap way to hack our natural and social environments.

So cheap and powerful, in fact, that we’ve developed linguistic mechanisms, ‘knowledge talk,’ to troubleshoot cognitive reports.

And this brings us back to the well-dressed man in The War of the Worlds, left stranded with his useless bills, dumbfounded by the sudden impotence of what had so reliably commanded the actions of others in the past. Paper currency requires vast systems of regularities to produce the local effects we all know and love and loathe. Since these local, or shallow, effects occur whether or not we possess any inkling of the superordinate, deep, systems responsible, we can get along quite well simply supposing, like the well-dressed man, that money possesses this power on its own, or intrinsically. Pressed to explain this intrinsic power, to explain why this paper commands such extraordinary effects, we posit a special kind of property, value.

What the well-dressed man illustrates, in other words, is the way shallow cognitive ecologies generate illusions of local sufficiency. We have no access to the enormous amount of evolutionary, historical, social, and personal stage-setting involved when our doctor diagnoses us with depression, so we chalk it up to her knowledge, not because any such thing exists in nature, but because it provides us a way to communicate and troubleshoot an otherwise incomprehensible local effect. How did your doctor make you better? Obviously, she knows her stuff!

What could be more intuitive?

But then along comes science, and lo, we find ourselves every bit as dumbfounded when asked to causally explain knowledge as (to use Sloman and Fernbach’s examples) when asked to explain toilets or bicycles or vaccination or climate warming or why incest possessing positive consequences is morally wrong. Given our shallow metacognitive ecology, we presume that the heuristic systems applicable to troubleshooting practical cognitive problems can solve the theoretical problem of cognition as well. When we go looking for this or that intentional formulation of ‘knowledge’ (because we cannot even agree on what it is we want to explain) in the head, we find ourselves, like the well-dressed man, even more dumbfounded. Rather than finding anything sufficient, we discover more and more dependencies, evidence of the way our doctor’s ability to cure our depression relies on extrinsic environmental and social factors. But since we remain committed to our fetishization of knowledge, we conclude that knowledge, whatever it is, simply cannot be in the head. Knowledge, we insist, must be nonlocal, reliant on natural and social environments. But of course, this cuts against the very intuition of local sufficiency underwriting the attribution of knowledge in the first place. Sure, my doctor has a past, a library, and a community, but ultimately, it’s her knowledge that cures my depression.

And so, cognitivism and post-cognitivism find themselves at perpetual war, disputing theoretical vocabularies possessing local operational efficacy in everyday or specialized experimental contexts, but perpetually deferring the possibility of any global, genuinely naturalistic understanding of human cognition. The strange fact of the matter is that there’s no such thing or function as ‘knowledge’ in nature, nothing deep to redeem our shallow intuitions, though knowledge talk (which is very real) takes us a long way toward resolving a wide variety of practical problems. The trick isn’t to understand what knowledge ‘really is,’ but rather to understand the deep, supercomplicated systems underwriting the optimization of behaviour, and how they underwrite our shallow intuitive and deliberative manipulations. Insofar as knowledge talk forms a component of those systems, we must content ourselves with studying ‘knowledge’ as a term rather than an entity, leaving intentional cognition to solve what problems it can where it can. The time has come to leave both cognitivism and post-cognitivism behind, and to embrace genuinely post-intentional approaches, such as the ecological eliminativism espoused here.

The Knowledge Illusion, in this sense, provides a wonderful example of crash space, the way in which the introduction of deep, scientific information into our shallow cognitive ecologies is prone to disrupt or delude or simply fall flat altogether. Intentional cognition provides a way for us to understand ourselves and each other while remaining oblivious to any of the deep machinations actually responsible. To suffer ‘medial neglect’ is to be blind to one’s actual sources, to comprehend and communicate human knowledge, experience, and action via linguistic fetishes, irreducible posits possessing inexplicable efficacies, entities fundamentally incompatible with the universe revealed by natural science.

For all the conceits Sloman and Fernbach reveal, they overlook, and so run afoul of, perhaps the greatest, most astonishing conceit of them all: the notion that we should have evolved the basic capacity to intuit our own deepest nature, that hunches belonging to our shallow ecological past could show us the way into our deep nature, rather than lead us, on pain of systematic misapplication, into perplexity. The time has come to dismantle the glamour we have raised around traditional philosophical and psychological speculation, to stop spinning abject ignorance into evidence of glorious exception, and to see our millennial dumbfounding as a symptom, an artifact of a species that has stumbled into the trap of interrogating its heuristic predicament using shallow heuristic tools that have no hope of generating deep theoretical solutions. The knowledge illusion illusion.

BBT Creep: The Inherence Heuristic

by rsbakker

Exciting stuff! For years now the research has been creeping toward my grim semantic worst-case scenario, but “The inherence heuristic” is getting close, very close, especially in the way it explicitly turns on the importance of heuristic neglect. The pieces have been there for quite some time; now researchers are beginning to put them together.

One way of looking at blind brain theory’s charge against intentionalism is that so-called intentional phenomena are pretty clear-cut examples of inherence heuristics as discussed in this article, ways to handle complex systems absent any causal handle on those systems. When Cimpian and Salomon write,

“To reiterate, the pool of facts activated by the mental shotgun for the purpose of generating an explanation for a pattern may often be heavily biased toward the inherent characteristics of that pattern’s constituents. As a result, when the storytelling part of the heuristic process takes over and attempts to make sense of the information at its disposal, it will have a rather limited number of options. That is, it will often be forced to construct a story that explains the existence of a pattern in terms of the inherent features of the entities within that pattern rather than in terms of factors external to it. However, the one-sided nature of the information delivered by the mental shotgun is not an impediment to the storytelling process. Quite the contrary – the less information is available, the easier it will be to fit it all into a coherent story.” 464

I think they are also describing what’s going on when philosophers attempt to theoretically solve intentionality and intentional cognition relying primarily on the resources of intentional cognition. In fact, once you understand the heuristic nature of intentional cognition, the interminable nature of intentional philosophy becomes very easy to understand. We have no way of carving the complexities of cognition at the joints of the world, so we carve them at the joints of the problem instead. When your neighbour repairs your robotic body servant, rather than cognizing all the years he spent training to be a spy before being inserted into your daily routines, you ‘attribute’ him ‘knowledge,’ something miraculously efficacious in its own right, inherent. And for the vast majority of problems you encounter, it works. Then the philosopher asks, ‘What is knowledge?’ and because adducing causal information scrambles our intuitions of ‘inherence,’ he declares only intentional idioms can cognize intentional phenomena, and the species remains stumped to this very day. Exactly as we should expect. Why should we think tools adapted to do without information regarding our nature can decode their own nature? What would this ‘nature’ be?

The best way to understand intentional philosophy, on a blind brain view, is as a discursive ‘crash space,’ a point where the application of our cognitive tools outruns their effectiveness in ways near and far. I’ve spent the last few years, now, providing various diagnoses of the kinds of theoretical wrecks we find in this space. Articles such as this convince me I won’t be alone for much longer!

So, to give a brief example: once one understands the degree to which intentional idioms turn on ‘inherence heuristics’ (ways to manage causal systems absent any behavioural sensitivity to the mechanics of those systems), you can understand the deceptiveness of things like ‘intentional stances,’ the way they provide an answer that functions more like a get-out-of-jail-free card than any kind of explanation.

Given that ‘intentional stances’ belong to intentional cognition, the fact that intentional cognition solves problems by neglecting what is actually going on reflects rather poorly on the theoretical fortunes of the intentional stance. The fact is, ‘intentional stances’ leave us with a very low-dimensional understanding of our actual straits when it comes to understanding cognition, as we should expect, given that they utilize a low-dimensional heuristic system geared to solving practical problems on the fly and theoretical problems not at all.

All along I’ve been trying to show the way heuristics allow us to solve the explanatory gap, to finally get rid of intentional occultisms like the intentional stance and replace them with a more austere, more explanatorily comprehensive picture. Now that the cat’s out of the bag, more and more cognitive scientists are going to explore the very real consequences of heuristic neglect. They will use it to map out the neglect structure of the human brain in ever finer detail, thus revealing where our intuitions trip over their own heuristic limits, and people will begin to see how thought can be construed as mangles of parallel-distributed processing meat. It will be clear that the ‘real patterns’ are not the ones required to redeem reflection, or its jargon. Nothing can do that now. Mark my words, inherence heuristics have a bright explanatory future.

Bonfire bright.

Call to the Edge

by rsbakker

Thomas Metzinger recently emailed asking me to flag these cognitive science/philosophy of mind goodies (dividends of his Open MIND initiative) and to spread the word regarding his MIND Group. As he writes on the website:

“The MIND Group sees itself as part of a larger process of exploring and developing new formats for promoting junior researchers in philosophy of mind and cognitive science. One of the basic ideas behind the formation of the group was to create a platform for people with one systematic focus in philosophy (typically analytic philosophy of mind or ethics) and another in empirical research (typically cognitive science or neuroscience). One of our aims has been to build an evolving network of researchers. By incorporating most recent empirical findings as well as sophisticated conceptual work, we seek to integrate these different approaches in order to foster the development of more advanced theories of the mind. One major purpose of the group is to help bridge the gap between the sciences and the humanities. This not only includes going beyond old-school analytic philosophy or pure armchair phenomenology by cultivating a new type of interdisciplinarity, which is “dyed-in-the-wool” in a positive sense. It also involves experimenting with new formats for doing research, for example, by participating in silent meditation retreats and trying to combine a systematic, formal practice of investigating the structure of our own minds from the first-person perspective with proper scientific meetings, during which we discuss third-person criteria for ascribing mental states to a given type of system.”

The papers being offered look severely cool. As you all know, I think it’s pretty much a no-brainer that these are the issues of our day. Even if you hate the stuff and think my worst-case scenario flat-out preposterous, these remain the issues of our day. Everywhere traditional philosophy turns, it will be asked why its endless controversies enjoy any immunity from the mountains of data coming out of cognitive science. Billions are being spent on uncovering the facts of our nature, and the degree to which those facts are scientific is the degree to which we ourselves have become technology, something that can be manipulated in breathtaking ways. And what does the tradition provide then? Simple momentum? A garrotte? A messiah?

The Philosopher, the Drunk, and the Lamppost

by rsbakker

A crucial variable of interest is the accuracy of metacognitive reports with respect to their object-level targets: in other words, how well do we know our own minds? We now understand metacognition to be under segregated neural control, a conclusion that might have surprised Comte, and one that runs counter to an intuition that we have veridical access to the accuracy of our perceptions, memories and decisions. A detailed, and eventually mechanistic, account of metacognition at the neural level is a necessary first step to understanding the failures of metacognition that occur following brain damage and psychiatric disorder. Stephen M. Fleming and Raymond J. Dolan, “The neural basis of metacognitive ability,” Phil. Trans. R. Soc. B (2012) 367, 1338–1349. doi:10.1098/rstb.2011.0417

As well as the degree to which we should accept the deliverances of philosophical reflection.

Philosophical reflection is a cultural achievement, an exaptation of pre-existing cognitive capacities. It is entirely possible that philosophical reflection, as such an exaptation, suffers any number of cognitive short-circuits. And this could very well explain why philosophy suffers the perennial problems it does.

In other words, the empirical possibility of Blind Brain Theory cannot be doubted—no matter how disquieting its consequences seem to be. What I would like to assess here is the probability of the account being empirically substantiated.

The thesis is that traditional philosophical problem-solving continually runs afoul of illusions falling out of metacognitive neglect. The idea is that intentional philosophy has been the butt of the old joke about the police officer who stops to help a drunk searching for his keys beneath a lamppost. The punch-line, of course, is that even though the drunk lost his keys in the parking lot, he’s searching beneath the lamppost because that’s the only place he can see. The twist for the philosopher lies in the way neglect consigns the parking lot—the drunk’s whole world in fact—to oblivion, generating the illusion that the light and the lamppost comprise an independent order of existence. For the philosopher, the keys to understanding what we are essentially can be found nowhere else because they exhaust everything that is within that order. Of course the keys that this or that philosopher claims to have found take wildly different forms—they all but shout profound theoretical underdetermination—but this seems to trouble only the skeptical spoil-sports.

Now I personally think the skeptics have always possessed far and away the better position, but since they could only articulate their critiques in the same speculative idiom as philosophy, they have been every bit as easy to ignore as philosophers. But times, I hope to show, have changed—dramatically so. Intentional philosophy is simply another family of prescientific discourses. Now that science has firmly established itself within philosophy’s traditional domains, we should expect intentional philosophy to be progressively delegitimized the way all prescientific discourses have been.

To begin with, it is simply an empirical fact that philosophical reflection on the nature of human cognition suffers massive neglect. To be honest, I sometimes find myself amazed that I even need to make this argument to people. Our blindness to our own cognitive makeup is the whole reason we require cognitive science in the first place. Every single fact that the sciences of cognition and the brain have discovered is another fact that philosophical reflection is all but blind to, another ‘dreaded unknown unknown’ that has always structured our cognitive activity without our knowledge.

As Keith Frankish and Jonathan Evans write:

The idea that we have ‘two minds’ only one of which corresponds to personal, volitional cognition, has also wide implications beyond cognitive science. The fact that much of our thought and behaviour is controlled by automatic, subpersonal, and inaccessible cognitive processes challenges our most fundamental and cherished notions about personal and legal responsibility. This has major ramifications for social sciences such as economics, sociology, and social policy. As implied by some contemporary researchers … dual process theory also has enormous implications for educational theory and practice. As the theory becomes better understood and more widely disseminated, its implications for many aspects of society and academia will need to be thoroughly explored. In terms of its wider significance, the story of dual-process theorizing is just beginning. “The Duality of Mind: An Historical Perspective,” In Two Minds: Dual Processes and Beyond, 25

We are standing on the cusp of a revolution in self-understanding unlike any in human history. As they note, the process of digesting the implications of these discoveries is just getting underway—news of the revolution has just hit the streets of the capital, and the provinces will likely be a long time in hearing it. As a result, the old ways still enjoy what might be called the ‘Only-game-in-town Effect,’ but not for very long.

The deliverances of theoretical metacognition just cannot be trusted. This is simply an empirical fact. Stanislas Dehaene even goes so far as to state it as a law: “We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (Consciousness and the Brain, 79).

As I mentioned, I think this is a deathblow, but philosophers have devised a number of cunning ways to immunize themselves from this fact—philosophy is the art of rationalization, after all! If the brain (for some pretty obvious reasons) is horrible at metacognizing brain functions, then one need only insist that something more than the brain is at work. Since souls will no longer do, the philosopher switches to functions, but not any old functions. The fact that the functions of a system look different depending on the grain of investigation is no surprise: of course neurocellular level descriptions will differ from neural-network level descriptions. The intentional philosopher, however, wants to argue for a special, emergent order of intentional functions, one that happens to correspond to the deliverances of philosophical reflection. Aside from this happy correspondence, what makes these special functions so special is their incompatibility with biomechanical functions—an incompatibility so profound that biomechanical explanation renders them all but unintelligible.

Call this the ‘apples and oranges’ strategy. Now I think the sheer convenience of this view should set off alarm bells: If the science of a domain contradicts the findings of philosophical reflection, then that science must be exploring a different domain. But the picture is far more complicated, of course. One does not overthrow more than two thousand years of (apparent) self-understanding on the back of two decades of scientific research. And even absent this institutional sanction, there remains something profoundly compelling about the intentional deliverances of philosophical reflection, despite all the manifest problems. The intentionalist need only bid you to theoretically reflect, and lo, there are the oranges… Something has to explain them!

In other words, pointing out the mountain of unknown unknowns revealed by cognitive science is simply not enough to decisively undermine the conceits of intentional philosophy. I think it should be, but then I think the ancient skeptics had the better of things from the outset. What we really need, if we want to put an end to this vast squandering of intellectual resources, is to explain the oranges. So long as oranges exist, some kind of abductive case can be made for intentional philosophy. Doing this requires we take a closer look at what cognitive science can teach us about philosophical reflection and its capacity to generate self-understanding.

The fact is, the intentionalist is in something of a dilemma. Their functions, they admit, are naturalistically inscrutable. Since they can’t abide dualism, they need their functions to be natural (or whatever it is the sciences are conjuring miracles out of) somehow, so whatever functions they posit, say ones realized in the scorekeeping attitudes of communities, have to track brain function somehow. This responsibility to cognitive scientific findings regarding their object is matched by a responsibility to cognitive scientific findings regarding their cognitive capacity. Oranges or no oranges, both their domain and their capacity to cognize that domain answer to what cognitive science ultimately reveals. Some kind of emergent order has to be discovered within the order of nature, and we have to somehow possess the capacity to reliably metacognize that emergent order. Given what we already know, I think a strong case can be made that this latter, at least, is almost certainly impossible.

Consider Dehaene’s Global Neuronal Workspace Theory of Consciousness (GNW). On his account, at any given moment the information available for conscious report has been selected from parallel swarms of nonconscious processes, stabilized, and broadcast across the brain for consumption by other swarms of other nonconscious processes. As Dehaene writes:

The brain must contain a ‘router’ that allows it to flexibly broadcast information to and from its internal routines. This seems to be a major function of consciousness: to collect the information from various processors, synthesize it, and then broadcast the result—a conscious symbol—to other, arbitrarily selected processors. These processors, in turn, apply their unconscious skills to this symbol, and the entire process may repeat a number of times. The outcome is a hybrid serial-parallel machine, in which stages of massively parallel computation are interleaved with a serial stage of conscious decision making and information routing. Consciousness and the Brain, 105

Whatever philosophical reflection amounts to, insofar as it involves conscious report it involves this ‘hybrid serial-parallel machine’ described by Dehaene and his colleagues, a model which is entirely consistent with the ‘adaptive unconscious’ (see Tim Wilson’s Strangers to Ourselves for a somewhat dated, yet still excellent overview) described in cognitive psychology. Whatever a philosopher can say regarding ‘intentional functions’ must in some way depend on the deliverances of this system.
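
The cycle is easy to caricature for readers who think better in mechanism. What follows is only a toy sketch of the broadcast loop (the threshold and ‘processors’ are invented, and nothing here pretends to be Dehaene’s actual model): parallel nonconscious processors each bid on the current content, a single winner crosses an ‘ignition’ threshold, and that one result is broadcast back to all processors for the next round.

# Toy caricature of the workspace cycle: parallel bids, one serial winner,
# global broadcast. All values are invented for illustration.
IGNITION_THRESHOLD = 0.5

def workspace_cycle(stimulus, processors, rounds=3):
    broadcast = stimulus
    for _ in range(rounds):
        # Massively parallel stage: every processor works nonconsciously.
        bids = [p(broadcast) for p in processors]
        # Serial stage: only the strongest bid 'ignites'; everything below
        # threshold never becomes available for conscious report.
        strength, content = max(bids)
        if strength < IGNITION_THRESHOLD:
            return None          # no ignition, no conscious access
        broadcast = content      # the single 'conscious symbol' for next round
    return broadcast

# Hypothetical processors, each returning (confidence, interpretation).
processors = [
    lambda x: (0.9, x.upper()),   # a strong interpretation wins ignition
    lambda x: (0.4, x[::-1]),     # a weaker rival never reaches report
]
print(workspace_cycle("percept", processors))   # PERCEPT

Crude as the sketch is, it makes the structural point that matters for what follows: whatever fails to win the serial stage simply never exists so far as report, and hence philosophical reflection, is concerned.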

One of the key claims of the theory, confirmed via a number of different experimental paradigms, is that access (or promotion) to the GNW is all or nothing. The insight is old: psychologists have long studied what is known as the ‘psychological refractory period,’ the way attending to one task tends to blot out or severely impair our ability to perform other tasks simultaneously. But recent research is revealing more of the radical ‘cortical bottleneck’ that marks the boundary between the massively parallel processing of multiple percepts (or interpretations thereof) and the serial stage of conscious cognition. [Marti, S., et al., “A shared cortical bottleneck underlying Attentional Blink and Psychological Refractory Period,” NeuroImage (2011), doi:10.1016/j.neuroimage.2011.09.063]

This is important because it means that the deliverances the intentional philosopher depends on when reflecting on problems involving intentionality or ‘experience’ more generally are limited to what makes the ‘conscious access cut.’ You could say the situation is actually far worse, since conscious deliberation on conscious phenomena requires the philosopher use the very apparatus they’re attempting to solve. In a sense they’re not only wagering that the information they require actually reaches consciousness in the first place, but that it can be recalled for subsequent conscious deliberation. The same way the scientist cannot incorporate information that doesn’t, either via direct observation or indirect observation via instrumentation, find its way to conscious awareness, the philosopher likewise cannot hazard ‘educated’ guesses regarding information that does not somehow make the conscious access cut, only twice over. In a sense, they’re peering at the remaindered deliverances of a serial straw through a serial straw, one that appears as wide as the sky for neglect! So there is a very real question of whether philosophical reflection, an artifactual form of deliberative cognition, has anything approaching access to the information it needs to solve the kinds of problems it purports to solve. Given the role that information scarcity plays in theoretical underdetermination, the perpetually underdetermined theories posed by intentional philosophers strongly suggest that the answer is no.

But if the science suggests that philosophical reflection may not have access to enough information to answer the questions in its bailiwick, it also raises real questions of whether it has access to the right kind of information. Recent research has focussed on attempting to isolate the mechanisms in the brain responsible for mediating metacognition. The findings seem to be converging on the rostrolateral prefrontal cortex (rlPFC) as playing a pivotal role in the metacognitive accuracy of retrospective reports. As Fleming and Dolan write:

A role for rlPFC in metacognition is consistent with its anatomical position at the top of the cognitive hierarchy, receiving information from other prefrontal cortical regions, cingulate and anterior temporal cortex. Further, compared with non-human primates, rlPFC has a sparser spatial organization that may support greater interconnectivity. The contribution of rlPFC to metacognitive commentary may be to represent task uncertainty in a format suitable for communication to others, consistent with activation here being associated with evaluating self-generated information, and attention to internal representations. Such a conclusion is supported by recent evidence from structural brain imaging that ‘reality monitoring’ and metacognitive accuracy share a common neural substrate in anterior PFC.  Italics added, “The neural basis of metacognitive ability,” Phil. Trans. R. Soc. B (2012) 367, 1343. doi:10.1098/rstb.2011.0417

As far as I can tell, the rlPFC is perhaps the best candidate we presently have for something like a ‘philosopher module’ [see Badre et al., “Frontal cortex and the discovery of abstract action rules,” Neuron (2010) 66:315–326], though the functional organization of the PFC more generally remains a mystery. [Kalina Christoff’s site and Steve Fleming’s site are great places to track research developments in this area of cognitive neuroscience.] It primarily seems to be engaged by abstract relational and semantic tasks, and plays some kind of role mediating verbal and spatial information. Mapping evidence also shows that its patterns of communication with other brain regions vary as tasks vary; in particular, it seems to engage regions thought to involve visuospatial and semantic processes. [Wendelken et al., “Rostrolateral Prefrontal Cortex: Domain-General or Domain-Sensitive?” Human Brain Mapping (2011), 1–12.]

Cognitive neuroscience is nowhere close to any decisive picture of abstract metacognition, but hopefully the philosophical moral of the research should be clear: whatever theoretical metacognition is, it is neurobiological. And this is just to say that the nature of philosophical reflection—in the form of, say, ‘making things explicit,’ or what have you—is not something that philosophical reflection on ‘conscious experience’ can solve! Dehaene’s law applies as much to metacognition as to any other cognitive process—as we should expect, given the cortical bottleneck and what we know of the rlPFC. Information is promoted for stabilization and broadcast from nonconscious parallel swarms to be consumed by nonconscious parallel swarms, which include the rlPFC, which in turn somehow informs further stabilizations and broadcasts. What we presently ‘experience,’ the well from which our intentional claims are drawn, somehow comprises the serial ‘stabilization and broadcast’ portion of this process—and nothing else.

The rlPFC is an evolutionary artifact, something our ancestors developed over generations of practical problem-solving. It is part and parcel of the most complicated (not to mention expensive) organ known. Assume, for the moment, that the rlPFC is the place where the magic happens, the part of the ruminating philosopher’s brain where ‘accurate intuitions’ of the ‘nature of mind and thought’ arise allowing for verbal report. (The situation is without a doubt far more complicated, but since complication is precisely the problem the philosopher faces, this example actually does them a favour). There’s no way the rlPFC could assist in accurately cognizing its own function—another rlPFC would be required to do that, requiring a third rlPFC, and so on and so on. In fact, there’s no way the brain could directly cognize its own activities in any high-dimensionally accurate way. What the rlPFC does instead—obviously one would think—is process information for behaviour. It has to earn its keep after all! Given this, one should expect that it is adapted to process information that is itself adapted to solve the kinds of behaviourally related problems faced by our ancestors, that it consists of ad hoc structures processing ad hoc information.

Philosophy is quite obviously an exaptation of the capacities possessed by the rlPFC (and the systems of which it is part), the learned application of metacognitive capacities originally adapted to solve practical behavioural problems to theoretical problems possessing radically different requirements—such as accuracy, the ability to not simply use a cognitive tool, but to be able to reliably determine what that cognitive tool is.

Even granting the intentionalist their spooky functional order, are we to suppose, given everything considered, that we just happened to have evolved the capacity to accurately intuit this elusive functional order? Seems a stretch. The far more plausible answer is that this exaptation, relying as it does on scarce and specialized information, was doomed from the outset to get far more things wrong than right (as the ancient skeptics insisted!). The far more plausible answer is that our metacognitive capacity is as radically heuristic as cognitive science suggests. Think of the scholastic jungle that is analytic and continental philosophy. Or think of the yawning legitimacy gap between mathematics (exaptation gone right) versus the philosophy of mathematics (exaptation gone wrong). The oh so familiar criticisms of philosophy, that it is impractical, disconnected from reality, incapable of arbitrating its controversies—in short, that it does not decisively solve—are precisely the kinds of problems we might expect, were philosophical reflection an artifact of an exaptation gone wrong.

On my account it is wildly implausible that any design paradigm like evolution could deliver the kind of cognition intentionalism requires. Evolution solves difficult problems heuristically: opportunistic fixes are gradually sculpted by various contingent frequencies in its environment, which in our case, were thoroughly social. Since the brain is the most difficult problem any brain could possibly face, we can assume the heuristics our brain relies on to cognize other brains will be specialized, and that the heuristics it uses to cognize itself will be even more specialized still. Part of this specialization will involve the ability to solve problems absent any causal information: there is simply no way the human brain can cognize itself the way it cognizes its natural environment. Is it really any surprise that causal information would scuttle problem-solving adapted to solve in its absence? And given our blindness to the heuristic nature of the systems involved, is it any surprise that we would be confounded by this incompatibility for as long as we have?

The problem, of course, is that it so doesn’t seem that way. I was a Heideggerean once. I was also a Wittgensteinian. I’ve spent months parsing Husserl’s torturous attempts to discipline philosophical reflection. That version of myself would have scoffed at these kinds of criticisms. ‘Scientism!’ would have been my first cry; ‘Performative contradiction!’ my second. I was so certain of the intrinsic intentionality of human things that the kind of argument I’m making here would have struck me as self-evident nonsense. ‘Not only are these intentional oranges real,’ I would have argued, ‘they are the only thing that makes scientific apples possible.’

It’s not enough to show the intentionalist philosopher that, by the light of cognitive science, it’s more than likely their oranges do not exist. Dialectically, at least, one needs to explain how, intuitively, it could seem so obvious that they do exist. Why do the philosopher’s ‘feelings of knowing,’ as murky and inexplicable as they are, have the capacity to convince them of anything, let alone monumental speculative systems?

As it turns out, cognitive psychology has already begun interrogating the general mechanism that is likely responsible, and the curious ways it impacts our retrospective assessments: neglect. In Thinking, Fast and Slow, Daniel Kahneman cites the difficulty we have distinguishing experience from memory as the reason why we retrospectively underrate our suffering in a variety of contexts. Given the same painful medical procedure, one would expect an individual suffering for twenty minutes to report a far greater amount than an individual suffering for half that time or less. Such is not the case. As it turns out, duration has “no effect whatsoever on the ratings of total pain” (380). Retrospective assessments, rather, seem determined by the average of the pain’s peak and its coda. Absent intellectual effort, you could say the default is to remove the band-aid slowly.
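
The finding is simple enough to state as a formula. Here is a minimal sketch with invented numbers (not Kahneman’s data): remembered pain tracks the average of the worst moment and the final moment, so a longer procedure with a gentler tail is remembered as less painful even though it delivers strictly more total pain.

def remembered_pain(trace):
    # Peak-end rule: retrospective rating ~ mean of peak and final moment.
    return (max(trace) + trace[-1]) / 2

def total_pain(trace):
    # What the 'experiencing self' actually endured.
    return sum(trace)

short = [2, 7, 8]            # ends at its worst
longer = short + [5, 3, 1]   # same start, plus a gentler tail

print(total_pain(short), total_pain(longer))            # 17 vs 26
print(remembered_pain(short), remembered_pain(longer))  # 8.0 vs 4.5

The longer trace is objectively worse and retrospectively better, which is the band-aid moral in miniature.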

Far from being academic, this ‘duration neglect,’ as Kahneman calls it, places the therapist in something of a bind. What should the physician’s goal be? The reduction of the pain actually experienced, or the reduction of the pain remembered? Kahneman provocatively frames the problem as a question of choosing between selves, the ‘experiencing self’ that actually suffers the pain and the ‘remembering self’ that walks out of the clinic. Which ‘self’ should the therapist serve? Kahneman sides with the latter. “Memories,” he writes, “are all we get to keep from our experience of living, and the only perspective that we can adopt as we think about our lives is therefore that of the remembering self” (381). If the drunk has no recollection of the parking lot, then as far as his decision making is concerned, the parking lot simply does not exist. Kahneman writes:

Confusing experience with the memory of it is a compelling cognitive illusion—and it is the substitution that makes us believe a past experience can be ruined. The experiencing self does not have a voice. The remembering self is sometimes wrong, but it is the one that keeps score and governs what we learn from living, and it is the one that makes decisions. What we learn from the past is to maximize the qualities of our future memories, not necessarily of our future experience. This is the tyranny of the remembering self. 381

Could it be that this is what philosophers are doing? Could they, in the course of defining and arranging their oranges, simply be confusing their memory of experience with experience itself? So in the case of duration neglect, information regarding the duration of suffering makes no difference in the subject’s decision making because that information is nowhere to be found. Given the ubiquity of similar effects, Kahneman generalizes the insight into what he calls WYSIATI, or What-You-See-Is-All-There-Is:

An essential design feature of the associative machine is that it represents only activated ideas. Information that is not retrieved (even unconsciously) from memory might as well not exist. [Our nonconscious cognitive system] excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have. 85

Kahneman’s WYSIATI, you could say, provides a way to explain Dehaene’s Law regarding the chronic overestimation of awareness. The cortical bottleneck renders conscious access captive to the facts as they are given. If information regarding things like the duration of suffering in an experimental context isn’t available, then that information simply makes no difference for subsequent behaviour. Likewise, if information regarding the reliability of an intuition or ‘feeling of knowing’ (aptly abbreviated as ‘FOK’ in the literature!) isn’t available, then that information simply makes no difference—at all.

Thus the illusion of what I’ve been calling cognitive sufficiency these past few years. Kahneman lavishes the reader in Thinking, Fast and Slow with example after example of how subjects perennially confuse the information they do have with all the information they need:

You cannot help dealing with the limited information you have as if it were all there is to know. You build the best possible story from the information available to you, and if it is a good story, you believe it. Paradoxically, it is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle. Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance. 201

You could say his research has isolated the cognitive conceit that lies at the heart of Plato’s cave: absent information regarding the low-dimensionality of the information they have available, shadows become everything. Like the parking lot, the cave, the chains, the fire, even the possibility of looking from side to side simply do not exist for the captives.

As the WYSIATI rule implies, neither the quantity nor the quality of the evidence counts for much in subjective confidence. The confidence that individuals have in their beliefs depends mostly on the quality of the story they can tell about what they see, even if they see little. We often fail to allow for the possibility that evidence that should be critical to our judgment is missing—what we see is all there is. Furthermore, our associative system tends to settle on a coherent pattern of activation and suppresses doubt and ambiguity. 87-88

Could the whole of intentional philosophy amount to varieties of story-telling, ‘theory-narratives’ that are compelling to their authors precisely to the degree they are underdetermined? The problem as Kahneman outlines it is twofold. For one, “[t]he human mind does not deal well with nonevents” (200) simply because unavailable information is information that makes no difference. This is why deception, or any instance of controlling information availability, allows us to manipulate our fellow drunks so easily. For another, “[c]onfidence is a feeling, which reflects the coherence of the information and the cognitive ease of processing it,” and “not a reasoned evaluation of the probability that this judgment is correct” (212). So all that time I was reading Heidegger nodding, certain that I was getting close to finding the key, I was simply confirming parochial assumptions. Once I had bought in, coherence was automatic, and the inferences came easy. Heidegger had to be right—the key had to be beneath his lamppost—simply because it all made so much remembered sense ‘upon reflection.’

Could it really be as simple as this? Now given philosophers’ continued insistence on making claims despite their manifest institutional incapacity to decisively arbitrate any of them, neglect is certainly a plausible possibility. But the fact is this is precisely the kind of problem we should expect given that philosophical reflection is an exaptation of pre-existing cognitive capacities.

Why? Because what researchers term ‘error awareness,’ like every other human cognitive capacity, does not come cheap. To be sure, the evolutionary premium on error-detection is high to the extent that adaptive behaviour is impossible otherwise. It is part and parcel of cognition. But philosophical reflection is, once again, an exaptation of pre-existing metacognitive capacities, a form of problem-solving that has no evolutionary precedent. Research has shown that metacognitive error-awareness is often problematic even when applied to problems, such as assessing memory accuracy or behavioural competence in retrospect, that it has likely evolved to solve. [See Wessel, “Error awareness and the error-related negativity: evaluating the first decade of evidence,” Front Hum Neurosci. 2012; 6: 88. doi:10.3389/fnhum.2012.00088, for a GNW-related review] So if conscious error-awareness is hit or miss regarding adaptive activities, we should expect that, barring some cosmic stroke of evolutionary good fortune, it pretty much eludes philosophical reflection altogether. Is it really surprising that the only erroneous intuitions philosophers seem to detect with any regularity are those belonging to their peers?

We’re used to thinking of deficits in self-awareness in pathological terms, as something pertaining to brain trauma. But the picture emerging from cognitive science is positively filled with instances of non-pathological neglect, metacognitive deficits that exist by virtue of our constitution. The same way researchers can game the heuristic components of vision to generate any number of different visual illusions, experimentalists are learning how to game the heuristic components of cognition to isolate any number of cognitive illusions, ways in which our problem-solving goes awry without the least conscious awareness. In each of these cases, neglect plays a central role in explaining the behaviour of the subjects under scrutiny, the same way clinicians use neglect to explain the behaviour of their impaired patients.

Pathological neglect strikes us as so catastrophically consequential in clinical settings simply because of the behavioural aberrations of those suffering it. Not only does it make a profoundly visible difference, it makes a difference that we can only understand mechanistically. It quite literally knocks individuals from the problem-ecology belonging to socio-cognition into the problem-ecologies belonging to natural cognition. Socio-cognition, as radically heuristic, leans heavily on access to certain environmental information to function properly. Pathological neglect denies us that information.

Non-pathological neglect, on the other hand, completely eludes us because, insofar as we share the same neurophysiology, we share the same ‘neglect structure.’ The neglect suffered is both collective and adaptive. As a result, we only glimpse it here and there, and are more cued to resolve the problems it generates than ponder the deficits in self-awareness responsible. We require elaborate experimental contexts to draw it into sharp focus.

All Blind Brain Theory does is provide a general theoretical framework for these disparate findings, one that can be extended to a great number of traditional philosophical problems—including the holy grail, the naturalization of intentionality. As of yet, the possibility of such a framework remains at most an inkling to those at the forefront of the field (something that only speculative fiction authors dare consider!), but it is a growing one. Non-pathological neglect is not only a fact, it is ubiquitous. Conceptualized the proper way, it possesses a very parsimonious means of dispatching a great number of ancient and new conundrums…

At some point, I think all these mad ramblings will seem painfully obvious, and the thought of going back to tackling issues of cognition neglecting neglect will seem all but unimaginable. But for the nonce, it remains very difficult to see—it is neglect we’re talking about, after all!—and the various researchers struggling with its implications lie so far apart in terms of expertise and idiom that none can see the larger landscape.

And what is this larger landscape? If you swivel human cognitive capacity across the continuum of human interrogation, you find a drastic plunge in the dimensionality, and an according spike in the specialization, of the information we can access for the purposes of theorization as soon as brains are involved. Metacognitive neglect means that things like ‘person’ or ‘rule’ or what have you seem as real as anything else in the world when you ponder them, but in point of fact, we have only our intuitions to go on, the most meagre deliverances lacking provenance or criteria. And this is precisely what we should expect given the rank inability of the human brain to cognize itself or others in the high-dimensional manner it cognizes its environments.

This is the picture that traditional, intentional philosophy, if it is to maintain any shred of cognitive legitimacy moving forward, must somehow accommodate. Since I see traditional philosophy as largely an unwitting artifact of this landscape, I think such an accommodation will result in dissolution, the realization that philosophy has largely been a painting class for the blind. Some useful works have been produced here and there to be sure, but not for any reason the artists responsible suppose. So I would like to leave you with a suggestive parallel, a way to compare the philosopher with the sufferer of Anton’s Syndrome, the notorious form of anosognosia that leaves blind patients completely convinced they can see. So consider:

First, the patient is completely blind secondary to cortical damage in the occipital regions of the brain. Second, these lesions are bilateral. Third, the patient is not only unaware of her blindness; she rejects any objective evidence of her blindness. Fourth, the patient offers plausible, but at times confabulatory responses to explain away any possible evidence of her failure to see (e.g., “The room is dark,” or “I don’t have my glasses, therefore how can I see?”). Fifth, the patient has an apparent lack of concern (or anosodiaphoria) over her neurological condition. Prigatano and Wolf, “Anton’s Syndrome and Unawareness of Partial or Complete Blindness,” The Study of Anosognosia, 456.

And compare to:

First, the philosopher is metacognitively blind secondary to various developmental and structural constraints. Second, the philosopher is not aware of his metacognitive blindness, and is prone to reject objective evidence of it. Third, the philosopher offers plausible, but at times confabulatory responses to explain away evidence of his metacognitive incapacity. And fourth, the philosopher often exhibits an apparent lack of concern for his less than ideal neurological constitution.

The Missing Half of the Global Neuronal Workspace: A Commentary on Stanislas Dehaene’s Consciousness and the Brain

by rsbakker

Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts


Introduction

Stanislas Dehaene, to my mind at least, is the premier consciousness researcher on the planet, one of those rare scientists who seems equally at home in the theoretical aether (like we are here) and in the laboratory (where he is there). His latest book, Consciousness and the Brain, provides an excellent, and at times brilliant, overview of the state of contemporary consciousness research. Consciousness has come a long way in the past two decades, and Dehaene deserves credit for much of the yardage gained.

I’ve been anticipating Consciousness and the Brain for quite some time, especially since I bumped across “The Eternal Silence of the Neuronal Spaces,” Dehaene’s review of Christof Koch’s Consciousness: Confessions of a Romantic Reductionist, where he concludes with a confession of his own: “Can neuroscience be reconciled with living a happy, meaningful, moral, and yet nondelusional life? I will confess that this question also occasionally keeps me lying awake at night.” Since the implications of the neuroscientific revolution, the prospects of having a technically actionable blueprint of the human soul, often keep my mind churning into the wee hours, I was hoping that I might see a more measured, less sanguine Dehaene in this book, one less inclined to soft-sell the troubling implications of neuroscientific research.

And in that one regard, I was disappointed. Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts is written for a broad audience, so in a certain sense one can understand the authorial instinct to make things easy for the reader, but rendering a subject matter more amenable to lay understanding is quite a different thing than rendering it more amenable to lay sensibilities. Dehaene, I think, caters far too much to the very preconceptions his science is in the process of dismantling. As a result, the book, for all its organizational finesse, all its elegant formulations, and economical summaries of various angles of research, finds itself haunted by a jagged shadow, the intimation that things simply are not as they seem. A contradiction—of expressive modes if not factual claims.

Perhaps the most stark example of this contradiction comes at the very conclusion of the book, where Dehaene finally turns to consider some of the philosophical problems raised by his project. Adopting a quasi-Dennettian argument (from Freedom Evolves) that the only ‘free will’ that matters is the free will we actually happen to have (namely, one compatible with physics and biology), he writes:

“Our belief in free will expresses the idea that, under the right circumstances, we have the ability to guide our decisions by our higher-level thoughts, beliefs, values, and past experiences, and to exert control over our undesired lower-level impulses. Whenever we make an autonomous decision, we exercise our free will by considering all the available options, pondering them, and choosing the one that we favor. Some degree of chance may enter in a voluntary choice, but this is not an essential feature. Most of the time our willful acts are anything but random: they consist in a careful review of our options, followed by the deliberate selection of the one we favor.” 264

And yet for his penultimate line, no less, he writes, “[a]s you close this book to ponder your own existence, ignited assemblies of neurons literally make up your mind” (266). At this point, the perceptive reader might be forgiven for asking, ‘What happened to me pondering, me choosing the interpretation I favour, me making up my mind?’ The easy answer, of course, is that ‘ignited assemblies of neurons’ are the reader, such that whatever they ‘make,’ the reader ‘makes’ as well. The problem, however, is that the reader has just spent hours reading hundreds of pages detailing all the ways neurons act entirely outside his knowledge. If ignited assemblies of neurons are somehow what he is, then he has no inkling what he is—or what it is he is supposedly doing.

As we shall see, this pattern of alternating expressive modes, swapping between the personal and the impersonal registers to describe various brain activities, occurs throughout Consciousness and the Brain. As I mentioned above, I’m sure this has much to do with Dehaene’s resolution to write a reader-friendly book, and so to market the Global Neuronal Workspace Theory (GNWT) to the broader public. I’ve read enough of Dehaene’s articles to recognize the nondescript, clinical tone that animates the impersonally expressed passages, and so to see those passages expressed in more personal idioms as self-conscious attempts on his part to make the material more accessible. But as the free will quote above makes plain, there’s a sense in which Dehaene, despite his odd sleepless night, remains committed to the fundamental compatibility of the personal and the impersonal idioms. He thinks neuroscience can be reconciled with a meaningful and nondelusional life. In what follows I intend to show why, on the basis of his own theory, he’s mistaken. He’s mistaken because, when all is said and done, Dehaene possesses only half of what could count as a complete theory of consciousness—the most important half to be sure, but half all the same. Despite all the detailed explanations of consciousness he gives in the book, he actually has no account whatsoever of what we seem to take consciousness to be: namely, ourselves.

For that account, Stanislas Dehaene needs to look closely at the implicature of his Global Neuronal Workspace Theory—its long theoretical shadow, if you will—because there, I think, he will find my own Blind Brain Theory (BBT), and with it the theoretical resources to show how the consciousness revealed in his laboratory can be reconciled with the consciousness revealed in us. This, then, will be my primary contention: that Dehaene’s Global Neuronal Workspace Theory directly implies the Blind Brain Theory, and that the two theories, taken together, offer a truly comprehensive account of consciousness…

The one that keeps me lying awake at night.


Function Dysfunction

Let’s look at a second example. After drawing up an inventory of various, often intuition-defying, unconscious feats, Dehaene cautions the reader against drawing too pessimistic a conclusion regarding consciousness—what he calls the ‘zombie theory’ of consciousness. If unconscious processes, he asks, can plan, attend, sum, mean, read, recognize, value and so on, just what is consciousness good for? The threat of these findings, as he sees it, is that they seem to suggest that consciousness is merely epiphenomenal, a kind of kaleidoscopic side-effect to the more important, unconscious business of calculating brute possibilities. As he writes:

“The popular Danish science writer Tor Norretranders coined the term ‘user illusion’ to refer to our feeling of being in control, which may well be fallacious; every one of our decisions, he believes, stems from unconscious sources. Many other psychologists agree: consciousness is the proverbial backseat driver, a useless observer of actions that lie forever beyond its control.” 91

Dehaene disagrees, claiming that his account belongs to “what philosophers call the ‘functionalist’ view of consciousness” (91). He uses this passing criticism as a segue for his subsequent, fascinating account of the numerous functions discharged by consciousness—what makes consciousness a key evolutionary adaptation. The problem with this criticism is that it simply does not apply. Norretranders, for instance, nowhere espouses epiphenomenalism—at least not in The User Illusion. The same might be said of Daniel Wegner, one of the ‘many psychologists’ Dehaene references in the accompanying footnote. Far from epiphenomenalism, the position that consciousness has no function whatsoever (as, say, Susan Pockett (2004) has argued), both of these authors contend that it’s ‘our feeling of being in control’ that is illusory. So in The Illusion of Conscious Will, for instance, Wegner proposes that the feeling of willing allows us to socially own our actions. For him, our consciousness of ‘control’ has a very determinate function, just one that contradicts our metacognitive intuition of that functionality.

Dehaene is simply in error here. He is confusing the denial of intuitions of conscious efficacy with a denial of conscious efficacy. He has simply run afoul of the distinction between consciousness as it is and consciousness as it appears to us—the distinction between consciousness as impersonally and personally construed. Note the way he actually slips between idioms in the passage quoted above, at first referencing ‘our feeling of being in control’ and then referencing ‘its control.’ Now one might think this distinction between these two very different perspectives on consciousness would be easy to police, but such is not the case (see Bennett and Hacker, 2003). Unfortunately, Dehaene is far from alone when it comes to running afoul of this dichotomy.

For some time now, I’ve been arguing for what I’ve been calling a Dual Theory approach to the problem of consciousness. On the one hand, we need a theoretical apparatus that will allow us to discover what consciousness is as another natural phenomenon in the natural world. On the other hand, we need a theoretical apparatus that will allow us to explain (in a manner that makes empirically testable predictions) why consciousness appears the way that it does, namely, as something that simply cannot be another natural phenomenon in the natural world. Dehaene is in the business of providing the first kind of theory: a theory of what consciousness actually is. I’ve made a hobby of providing the second kind of theory: a theory of why consciousness appears to possess the baffling form that it does.

Few terms in the conceptual lexicon are quite so overdetermined as ‘consciousness.’ This is precisely what makes Dehaene’s operationalization of ‘conscious access’ invaluable. But salient among those traditional overdeterminations is the peculiarly tenacious assumption that consciousness ‘just is’ what it appears to be. Since what it appears to be is drastically at odds with anything else in the natural world, this assumption sets the explanatory bar rather high indeed. You could say consciousness needs a Dual Theory approach for the same reason that Dualism constitutes an intuitive default (Emmons 2014). Our dualistic intuitions arguably determine the structure of the entire debate. Either consciousness really is some wild, metaphysical exception to the natural order, or consciousness represents some novel, emergent twist that has hitherto eluded science, or something about our metacognitive access to consciousness simply makes it seem that way. Since the first leg of this trilemma belongs to theology, all the interesting action has fallen into orbit around the latter two options. The reason we need an ‘Appearance Theory’ when it comes to consciousness, as opposed to other natural phenomena, has to do with our inability to pin down the explananda of consciousness, an inability that almost certainly turns on the idiosyncrasy of our access to the phenomena of consciousness compared to the phenomena of the natural world more generally. This, for instance, is the moral of Michael Graziano’s (otherwise flawed) Consciousness and the Social Brain: that the primary job of the neuroscientist is to explain consciousness, not our metacognitive perspective on consciousness.

The Blind Brain Theory is just such an Appearance Theory: it provides a systematic explanation of the kinds of cognitive confounds and access bottlenecks that make consciousness appear to be ‘supra-natural.’ It holds, with Dehaene, that consciousness is functional through and through, just not in any way we can readily intuit outside empirical work like Dehaene’s. As such, it takes findings such as Wegner’s, where the function we presume on the basis of intuition (free willing) is belied by some counter-to-intuition function (behaviour ownership), as paradigmatic. Far from epiphenomenalism, BBT constitutes a kind of ‘ulterior functionalism’: it acknowledges that consciousness discharges a myriad of functions, but it denies that metacognition is in any position to cognize those functions (see “THE Something about Mary“) short of sustained empirical investigation.

Dehaene is certainly sensitive to the general outline of this problem: he devotes an entire chapter (“Consciousness Enters the Lab”) to discussing the ways he and others have overcome the notorious difficulties involved in experimentally ‘pinning consciousness down.’ And the masking and attention paradigms he has helped develop have done much to transform consciousness research into a legitimate field of scientific inquiry. He even provides a splendid account of just how deep unconscious processing reaches into what we intuitively assume are wholly conscious exercises—an account that thoroughly identifies him as a fellow ulterior functionalist. He actually agrees with me and Norretranders and Wegner—he just doesn’t realize it quite yet.

.

The Global Neuronal Workspace

As I said, Dehaene is primarily interested in theorizing consciousness apart from how it appears. In order to show how the Blind Brain Theory actually follows from his findings, we need to consider both these findings and the theoretical apparatus that Dehaene and his colleagues use to make sense of them. We need to consider his Global Neuronal Workspace Theory of consciousness.

According to GNWT, the primary function of consciousness is to select, stabilize, solve, and broadcast information throughout the brain. As Dehaene writes:

“According to this theory, consciousness is just brain-wide information sharing. Whatever we become conscious of, we can hold it in our mind long after the corresponding stimulation has disappeared from the outside world. That’s because the brain has brought it into the workspace, which maintains it independently of the time and place at which we first perceived it. As a result, we may use it in whatever way we please. In particular, we can dispatch it to our language processors and name it; this is why the capacity to report is a key feature of a conscious state. But we can also store it in long-term memory or use it for our future plans, whatever they are. The flexible dissemination of information, I argue, is a characteristic property of a conscious state.” 165

A signature virtue of Consciousness and the Brain lies in Dehaene’s ability to blend complexity and nuance with expressive economy. But again one needs to be wary of his tendency to resort to the personal idiom, as he does in this passage, where the functional versatility provided by consciousness is explicitly conflated with agency, the freedom to dispose of information ‘in whatever way we please.’ Elsewhere he writes:

“The brain must contain a ‘router’ that allows it to flexibly broadcast information to and from its internal routines. This seems to be a major function of consciousness: to collect the information from various processors, synthesize it, and then broadcast the result–a conscious symbol–to other, arbitrarily selected processors. These processors, in turn, apply their unconscious skills to this symbol, and the entire process may repeat a number of times. The outcome is a hybrid serial-parallel machine, in which stages of massively parallel computation are interleaved with a serial stage of conscious decision making and information routing.” 105

Here we find him making essentially the same claims in less anthropomorphic or ‘reader-friendly’ terms. Despite the folksy allure of the ‘workspace’ metaphor, this image of the brain as a ‘hybrid serial-parallel machine’ is what lies at the root of GNWT. For years now, Dehaene and others have been using masking and attention experiments in concert with fMRI, EEG, and MEG to track the comparative neural history of conscious and unconscious stimuli through the brain. This has allowed them to isolate what Dehaene calls the ‘signatures of consciousness,’ the events that distinguish percepts that cross the conscious threshold from percepts that do not. A theme that Dehaene repeatedly evokes is the information-asymmetric nature of conscious versus unconscious processing. Since conscious access is the only access we possess to our brain’s operations, we tend to run afoul of a version of what Daniel Kahneman (2012) calls WYSIATI, or the ‘what-you-see-is-all-there-is’ effect. Dehaene even goes so far as to state this peculiar tendency as a law: “We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (79). The fact is the nonconscious brain performs the vast, vast majority of the brain’s calculations.

The reason for this has to do with the Inverse Problem, the challenge of inferring the mechanics of some distal system, a predator or a flood, say, from the mechanics of some proximal system such as ambient light or sound. The crux of the problem lies in the ambiguity inherent to the proximal mechanism: a wild variety of distal events could explain any given retinal stimulus, for instance, and yet somehow we reliably perceive predators or floods or what have you. Dehaene writes:

“We never see the world as our retina sees it. In fact, it would be a pretty horrible sight: a highly distorted set of light and dark pixels, blown up toward the center of the retina, masked by blood vessels, with a massive hole at the location of the ‘blind spot’ where cables leave for the brain; the image would constantly blur and change as our gaze moved around. What we see, instead, is a three-dimensional scene, corrected for retinal defects, mended at the blind spot, and massively reinterpreted based on our previous experience of similar visual scenes.” 60

The brain can do this because it acts as a massively parallel Bayesian inference engine, analytically breaking down various elements of our retinal images, feeding them to specialized heuristic circuits, and cobbling together hypothesis after hypothesis.

“Below the conscious stage, myriad unconscious processors, operating in parallel, constantly strive to extract the most detailed and complete interpretation of our environment. They operate as nearly optimal statisticians who exploit the slightest perceptual hint—a faint movement, a shadow, a splotch of light—to calculate the probability that a given property holds true in the outside world.” 92
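
The ‘nearly optimal statistician’ image is easy enough to cartoon in code. The following sketch is entirely my own (the hypotheses and probabilities are invented for illustration; nothing like it appears in Dehaene’s book), but it shows the basic move: given a single ambiguous proximal cue, Bayes’ rule scores competing distal hypotheses by weighting how well each predicts the cue against how probable each was beforehand.

    # Toy Bayesian competition over distal causes of one ambiguous proximal cue.
    # All hypotheses and numbers are illustrative assumptions, not Dehaene's.
    priors = {"predator": 0.01, "branch in wind": 0.60, "harmless animal": 0.39}
    likelihoods = {"predator": 0.90, "branch in wind": 0.10, "harmless animal": 0.30}  # P(cue | h)

    evidence = sum(priors[h] * likelihoods[h] for h in priors)              # P(cue)
    posterior = {h: priors[h] * likelihoods[h] / evidence for h in priors}  # P(h | cue)

    # The 'winning interpretation' is simply the most probable hypothesis.
    print(max(posterior, key=posterior.get), posterior)

The cartoon’s only point: the ambiguity of the proximal cue is resolved statistically, in parallel, and prior to anything reaching awareness.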

But hypotheses are not enough. All this machinery belongs to what is called the ‘sensorimotor loop.’ The whole evolutionary point of all this processing is to produce ‘actionable intelligence,’ which is to say, to help generate and drive effective behaviour. In many cases, when bottom-up interpretations match top-down expectations and behaviour is routine, this winnowing of hypotheses need not result in consciousness of the stimuli at issue. In other cases, however, the interpretations are relayed to the nonconscious attentional systems of the brain, where they are ranked according to their relevance to ongoing behaviour and selected accordingly for conscious processing. Dehaene summarizes what happens next:

“Conscious perception results from a wave of neuronal activity that tips the cortex over its ignition threshold. A conscious stimulus triggers a self-amplifying avalanche of neural activity that ultimately ignites many regions into a tangled state. During that conscious state, which starts approximately 300 milliseconds after stimulus onset, the frontal regions of the brain are being informed of sensory inputs in a bottom-up manner, but these regions also send massive projections in the converse direction, top-down, and to many distributed areas. The end result is a brain web of synchronized areas whose various facets provide us with many signatures of consciousness: distributed activation, particularly in the frontal and parietal lobes, a P3 wave, gamma-band amplification, and massive long-distance synchrony.” 140

As Dehaene is at pains to point out, the machinery of consciousness is simply too extensive to not be functional somehow. The neurophysiological differences observed between the multiple interpretations that hover in nonconscious attention and the interpretation that tips the ‘ignition threshold’ of consciousness are nothing if not dramatic. Information that was localized suddenly becomes globally accessible. Information that was transitory suddenly becomes stable. Information that was hypothetical suddenly becomes canonical. Information that was dedicated suddenly becomes fungible. Consciousness makes information spatially, temporally, and structurally available. And this, as Dehaene rightly argues, makes all the difference in the world, including the fact that “[t]he global availability of information is precisely what we subjectively experience as a conscious state” (168).

.

A Mile Wide and an Inch Thin

Consciousness is the Medieval Latin of neural processing. It makes information structurally available, both across time and across the brain. As Dehaene writes, “The capacity to synthesize information over time, space, and modalities of knowledge, and to rethink it at any time in the future, is a fundamental component of the conscious mind, one that seems likely to have been positively selected for during evolution” (101). But this evolutionary advantage comes with a number of crucial caveats, qualifications that, as we shall see, make some kind of Dual Theory approach unavoidable.

Once an interpretation commands the global workspace, it becomes available for processing by any number of nonconscious processors. Thus the metaphor of the workspace. The information can be ‘worked over,’ mined for novel opportunities, refined into something more useful, but only, as Dehaene points out numerous times, synoptically and sequentially.

Consciousness is synoptic insofar as it samples mere fractions of the information available: “An unconscious army of neurons evaluates all the possibilities,” Dehaene writes, “but consciousness receives only a stripped down report” (96). By selecting, in other words, the workspace is at once neglecting, not only all the alternate interpretations, but all the neural machinations responsible: “Paradoxically, the sampling that goes on in our conscious vision makes us forever blind to its inner complexity” (98).

And consciousness is sequential in that it can only sample one fraction at a time: “our conscious brain cannot experience two ignitions at once and lets us perceive only a single conscious ‘chunk’ at a given time,” he explains. “Whenever the prefrontal and parietal lobes are jointly engaged in processing a first stimulus, they cannot simultaneously reengage toward a second one” (125).

All this is to say that consciousness pertains to the serial portion of the ‘hybrid serial-parallel machine’ that is the human brain. Dehaene even goes so far as to analogize consciousness to a “biological Turing machine” (106), a kind of production system possessing the “capacity to implement any effective procedure” (105). He writes:

“A production system comprises a database, also called ‘working memory,’ and a vast array of if-then production rules… At each step, the system examines whether a rule matches the current state of its working memory. If multiple rules match, then they compete under the aegis of a stochastic prioritizing system. Finally, the winning rule ‘ignites’ and is allowed to change the contents of working memory before the entire process resumes. Thus this sequence of steps amounts to serial cycles of unconscious competition, conscious ignition, and broadcasting.” 105

The point of this analogy, Dehaene is quick to point out, isn’t to “revive the cliché of the brain as a classical computer” (106) so much as it is to understand the relationship between the conscious and nonconscious brain. Indeed, in subsequent experiments, Dehaene and his colleagues discovered that the nonconscious, for all its computational power, is generally incapable of making sequential inferences: “The mighty unconscious generates sophisticated hunches, but only a conscious mind can follow a rational strategy, step after step” (109). It seems something of a platitude to claim that rational deliberation requires consciousness, but to be able to provide an experimentally tested neurobiological account of why this is so is nothing short of astounding. Make no mistake: these are the kind of answers philosophy, rooting through the mire of intuition, has sought for millennia.
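
For readers who like their analogies executable, the cycle described in the passage above is easy to caricature (a minimal sketch under my own assumptions; the rules and contents are invented, and Dehaene offers no such code): conditions are matched against working memory in parallel, a single stochastic winner ‘ignites,’ and its result is broadcast back into working memory before the cycle repeats.

    import random

    # Toy production cycle: parallel matching, stochastic selection, serial ignition.
    working_memory = {"goal": "add", "a": 3, "b": 4}

    rules = [
        # (condition on working memory, update the rule would broadcast)
        (lambda wm: wm.get("goal") == "add",
         lambda wm: {"result": wm["a"] + wm["b"], "goal": "report"}),
        (lambda wm: wm.get("goal") == "report",
         lambda wm: {"answer": wm["result"], "goal": "done"}),
    ]

    while working_memory.get("goal") != "done":
        matching = [act for cond, act in rules if cond(working_memory)]  # unconscious competition
        winner = random.choice(matching)                                 # stochastic prioritizing
        working_memory.update(winner(working_memory))                    # ignition and broadcast

    print(working_memory["answer"])  # 7

Note the serial bottleneck: however many rules match, only one fires per cycle.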

Dehaene, as I mentioned, is primarily interested in providing a positive account of what consciousness is apart from what we take it to be. “Putting together all the evidence inescapably leads us to a reductionist conclusion,” Dehaene writes. “All our conscious experiences, from the sound of an orchestra to the smell of burnt toast, result from a similar source: the activity of massive cerebral circuits that have reproducible neuronal signatures” (158). Though he does consider several philosophical implications of his ‘reductionist conclusions,’ he does so only in passing. He by no means dwells on them.

Given that consciousness research is a science attempting to bootstrap its way out of the miasma of philosophical speculation regarding the human soul, this reluctance is quite understandable—perhaps even laudable. The problem, however, is that philosophy and science both traffic in theory, general claims about basic things. As a result, the boundaries are constitutively muddled, typically to the detriment of the science, but sometimes to its advantage. A reluctance to speculate may keep the scientist safe, but to the extent that ‘data without theory is blind,’ it may also mean missed opportunities.

So consider Dehaene’s misplaced charge of epiphenomenalism, the way he seemed to be confusing the denial of our intuitions of conscious efficacy with the denial of conscious efficacy. The former, which I called ‘ulterior functionalism,’ entirely agrees that consciousness possesses functions; it denies only that we have reliable metacognitive access to those functions. Our only recourse, the ulterior functionalist holds, is to engage in empirical investigation. And this, I suggested, is clearly Dehaene’s own position. Consider:

“The discovery that a word or a digit can travel throughout the brain, bias our decisions, and affect our language networks, all the while remaining unseen, was an eye-opener for many cognitive scientists. We had underestimated the power of the unconscious. Our intuitions, it turned out, could not be trusted: we had no way of knowing what cognitive processes could or could not proceed without awareness. The matter was entirely empirical. We had to submit, one by one, each mental faculty to a thorough inspection of its component processes, and decide which of those faculties did or did not appeal to the conscious mind. Only careful experimentation could decide the matter…” 74

This could serve as a mission statement for ulterior functionalism. We cannot, as a matter of fact, trust any of our prescientific intuitions regarding what we are, any more than we could trust our prescientific intuitions regarding the natural world. This much seems conclusive. Then why does Dehaene find the kinds of claims advanced by Norretranders and Wegner problematic? What I want to say is that Dehaene, despite the occasional sleepless night, still believes that the account of consciousness as it is will somehow redeem the most essential aspects of consciousness as it appears, that something like a program of ‘Dennettian redefinition’ will be enough. Thus the attitude he takes toward free will. But then I encounter passages like this:

“Yet we never truly know ourselves. We remain largely ignorant of the actual unconscious determinants of our behaviour, and therefore cannot accurately predict what our behaviour will be in circumstances beyond the safety zone of our past experiences. The Greek motto ‘Know thyself,’ when applied to the minute details of our behaviour, remains an inaccessible ideal. Our ‘self’ is just a database that gets filled in through our social experiences, in the same format with which we attempt to understand other minds, and therefore it is just as likely to include glaring gaps, misunderstandings, and delusions.” 113

Claims like this, which radically contravene our intuitive, prescientific understanding of self, suggest that Dehaene simply does not know where he stands, that he alternately believes and does not believe that his work can be reconciled with our traditional understanding of ‘meaningful life.’ Perhaps this explains the pendulum swing between the personal and the impersonal idiom that characterizes this book—down to the final line, no less!

Even though this is an eminently honest frame of mind to take to this subject matter, I personally think his research cuts against even this conflicted optimism. Not surprisingly, the Global Neuronal Workspace Theory of Consciousness casts an almost preposterously long theoretical shadow; it possesses an implicature that reaches to the furthest corners of the great human endeavour to understand itself. As I hope to show, the Blind Brain Theory of the Appearance of Consciousness provides a parsimonious and powerful way to make this downstream implicature explicit.

.

From Geocentrism to ‘Noocentrism’

“Most mental operations,” Dehaene writes, “are opaque to the mind’s eye; we have no insight into the operations that allow us to recognize a face, plan a step, add two digits, or name a word” (104-5). If one pauses to consider the hundreds of experiments that he directly references, not to mention the thousands of others that indirectly inform his work, this goes without saying. We require a science of consciousness simply because we have no other way of knowing what consciousness is. The science of consciousness is literally predicated on the fact of our metacognitive incapacity (See “The Introspective Peepshow“).

Demanding that science provide a positive explanation of consciousness as we intuit it is no different than demanding that science provide a positive explanation of geocentrism—which is to say, the celestial mechanics of the earth as we once intuited it. Any fool knows that the ground does not move. If anything, the fixity of the ground is what allows us to judge movement. Certainly the possibility that the earth moved was an ancient posit, but lacking evidence to the contrary, it could be little more than philosophical fancy. Only the slow accumulation of information allowed us to reconceive the ‘motionless earth’ as an artifact of ignorance, as something that only the absence of information could render obvious. Geocentrism is the product of a perspectival illusion, plain and simple, the fact that we literally stood too close to the earth to comprehend what the earth in fact was.

We stand even closer to consciousness—so close as to be coextensive! Nonetheless, a good number of very intelligent people insist on taking (some version of) consciousness as we intuit it to be the primary explanandum of consciousness research. Given his ‘law’ (“We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (79)), Dehaene is duly skeptical. He is a scientific reductionist, after all. So with reference to David Chalmers’ ‘hard problem’ of consciousness, we find him writing:

“My opinion is that Chalmers swapped the labels: it is the ‘easy’ problem that is hard, while the hard problem just seems hard because it engages ill-defined intuitions. Once our intuition is educated by cognitive neuroscience and computer simulations, Chalmers’s hard problem will evaporate.” 262

Referencing the way modern molecular biology has overthrown vitalism, he continues:

“Likewise, the science of consciousness will keep eating away at the hard problem until it vanishes. For instance, current models of visual perception already explain not only why the human brain suffers from a variety of visual illusions but also why such illusions would appear in any rational machine confronted with the same computational problem. The science of consciousness already explains significant chunks of our subjective experience, and I see no obvious limits to this approach.” 262

I agree entirely. The intuitions underwriting the so-called ‘hard problem’ are perspectival artifacts. As in the case of geocentrism, our cognitive systems stand entirely too close to consciousness to not run afoul of a number of profound illusions. And I think Dehaene, not unlike Galileo, is using the ‘Dutch Spyglass’ afforded by masking and attention paradigms to accumulate the information required to overcome those illusions. I just think he remains, despite his intellectual scruples, a residual hostage of the selfsame intuitions he is bent on helping us overcome.

Dehaene only needs to think through the consequences of GNWT as it stands. So when he continues to discuss other ‘hail Mary’ attempts (those of Eccles and Penrose) to find some positive account of consciousness as it appears, writing that “the intuition that our mind chooses its actions ‘at will’ begs for an explanation” (263), I’m inclined to think he already possesses the resources to advance such an explanation. He just needs to look at his own findings in a different way.

Consider the synoptic and sequential nature of what Dehaene calls ‘ignition,’ the becoming conscious of some nonconscious interpretation. The synoptic nature of ignition, the fact that consciousness merely samples interpretations, means that consciousness is radically privative, that every instance of selection involves massive neglect. The sequential nature of ignition, on the other hand, the fact that the becoming conscious of any interpretation precludes the becoming conscious of another interpretation, means that each moment of consciousness is an all or nothing affair. As I hope to show, these two characteristics possess profound implications when applied to the question of human metacognitive capacity—which is to say, our capacity to intuit our own makeup.

Dehaene actually has very little to say regarding self-consciousness and metacognition in Consciousness and the Brain, aside from speculating on the enabling role played by language. Where other mammalian species clearly seem to possess metacognitive capacity, that capacity seems restricted to the second-order estimation of the reliability of their first-order estimations. They lack “the potential infinity of concepts that a recursive language affords” (252). He provides an inventory of the anatomical differences between primates and other mammals, such as specialized ‘broadcast neurons,’ and between humans and their closest primate kin, such as the size of the dendritic trees possessed by human prefrontal neurons. As he writes:

“All these adaptations point to the same evolutionary trend. During hominization, the networks of our prefrontal cortex grew denser and denser, to a larger extent than would be predicted by brain size alone. Our workspace circuits expanded way beyond proportion, but this increase is probably just the tip of the iceberg. We are more than just primates with larger brains. I would not be surprised if, in the coming years, cognitive neuroscientists find that the human brain possesses unique microcircuits that give it access to a new level of recursive, language-like operations.” 253

Presuming the remainder of the ‘iceberg’ does not overthrow Dehaene’s workspace paradigm, however, it seems safe to assume that our metacognitive machinery feeds from the same informational trough, that it is simply one among the many consumers of the information broadcast in conscious ignition. The ‘information horizon’ of the Workspace, in other words, is the information horizon of conscious metacognition. This would be why our capacity to report seems to be coextensive with our capacity to consciously metacognize: the information we can report constitutes the sum of information available for reflective problem-solving.

So consider the problem of a human brain attempting to consciously cognize the origins of its own activity—for the purposes of reporting to other brains, say. The first thing to note is that the actual, neurobiological origins of that activity are entirely unavailable. Since only information that ignites is broadcast, only information that ignites is available. The synoptic nature of the information ignited renders the astronomical complexities of ignition inaccessible to consciousness. Even more profoundly, the serial nature of ignition suggests that consciousness, in a strange sense, is always too late. Information pertaining to ignition can never be processed for ignition. This is why so much careful experimentation is required, why our intuitions are ‘ill-defined,’ why ‘most mental operations are opaque.’ The neurofunctional context of the workspace is something that lies outside the capacity of the workspace to access.

This explains the out-and-out inevitability of what I called ‘ulterior functionalism’ above: the information ignited constitutes the sum of the information available for conscious metacognition. Whenever we interrogate the origins of our conscious episodes, reflection only has our working memory of prior conscious episodes to go on. This suggests something as obvious as it is counterintuitive: that conscious metacognition should suffer a profound form of source blindness. Whenever conscious metacognition searches for the origins of its own activity, it finds only itself.
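
The structural claim can be caricatured in a few lines (my own sketch, with invented names; an illustration of the logic, not a model drawn from Dehaene): give a system a store that downstream consumers can read, let selection write only its winners to that store, and any ‘metacognition’ built from those consumers will find contents without origins.

    # Toy illustration of metacognitive source blindness.
    class Workspace:
        def __init__(self):
            self.broadcast = []          # the only store reflection can read

        def ignite(self, interpretations):
            # Selection happens here, but no record of this step is broadcast.
            winner = max(interpretations, key=lambda i: i["salience"])
            self.broadcast.append(winner["content"])

        def metacognize(self):
            # Reflection finds only prior conscious contents, never their sources.
            return list(self.broadcast)

    ws = Workspace()
    ws.ignite([{"content": "face", "salience": 0.9},
               {"content": "shadow", "salience": 0.4}])
    print(ws.metacognize())  # ['face']: the competition leaves no trace

Ask such a system where its contents came from, and the only answer it can assemble from what it can access is: from prior contents.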

Free will, in other words, is a metacognitive illusion arising out of the structure of the global neuronal workspace, one that, while perhaps not appearing “in any rational machine confronted with the same computational problem” (262), would appear in any conscious system possessing the same structural features as the global neuronal workspace. The situation is almost directly analogous to the situation faced by our ancestors before Galileo. Absent any information regarding the actual celestial mechanics of the earth, the default assumption is that the earth has no such mechanics. Likewise, absent any information regarding the actual neural mechanics of consciousness, the default assumption is that consciousness also has no such mechanics.

But free will is simply one of many problems pertaining to our metacognitive intuitions. According to the Blind Brain Theory of the Appearance of Consciousness, a great number of the ancient and modern perplexities can be likewise explained in terms of metacognitive neglect, attributed to the fact that the structure and dynamics of the workspace render the workspace effectively blind to its own structure and dynamics. Taken together with Dehaene’s Global Neuronal Workspace Theory of Consciousness, it can explain away the ‘ill-defined intuitions’ that underwrite attributions of some extraordinary irreducibility to conscious phenomena.

On BBT, the myriad structural peculiarities that theologians and philosophers have historically attributed to the first person are perspectival illusions, artifacts of neglect—things that seem obvious only so long as we remain ignorant of the actual mechanics involved (See “Cognition Obscura”). Our prescientific conception of ourselves is radically delusional, and the kind of counterintuitive findings Dehaene uses to patiently develop and explain GNWT are simply what we should expect. Noocentrism is as doomed as was geocentrism. Our prescientific image of ourselves is as blinkered as our prescientific image of the world, a possibility which should, perhaps, come as no surprise. We are simply another pocket of the natural world, after all.

But the overthrow of noocentrism is bound to generate even more controversy than the overthrow of geocentrism or biocentrism, given that so much of our self and social understanding relies upon this prescientific image. Perhaps we should all lie awake at night, pondering our pondering…

Just Plain Crazy Enactive Cognition: A Review and Critical Discussion of Radicalizing Enactivism: Basic Minds without Content, by Dan Hutto and Erik Myin

by rsbakker

Mechanically, the picture of how we are related to our environment is ontologically straightforward and astronomically complicated. Intentionally, the picture of how we are related to our environment is ontologically occult and surprisingly simple. Since the former is simply an extension of the scientific project into what was historically the black-box domain of the human, it is the latter that has been thrown into question. Pretty much all philosophical theories of consciousness and cognition break over how to conceive the relation between these two pictures. Very few embrace all apparent intentional phenomena,[1] but the vast majority of theorists embrace at least some—typically those they believe most indispensable for cognition. Given the incompatibility of these with the mechanical picture, they need some way to motivate their application.

But why bother? If the intentional resists explanation in natural terms, and if the natural explanation of cognition is our primary desideratum, then why not simply abandon the intentional? The answer to this question is complex, but the fact remains that any explanation of knowing, whether it involves ‘knowing how’ or ‘knowing that,’ has to explain the manifest intentionality of knowledge. No matter what one thinks of intentionality, any scientific account of cognition is going to have to explain it—at least to be convincing.

Why? Because explanation requires an explanandum, and the explanandum in this instance is, intuitively at least, intentional through and through. To naturally explain cognition, one must naturally explain correct versus incorrect cognition, because, for better or worse, this is how cognition is implicitly conceived. The capacity to be right or wrong, true or false, is a glaring feature of all cognition, so much so that any explanation that fails to explain it pretty clearly fails to explain cognition.[2]

So despite the naturalistic inscrutability of intentionality, it nonetheless remains an ineliminable feature of cognition. We find ourselves in the apparent bind of having to naturalistically explain something that cannot be naturalistically explained to explain cognition. Thus what might be called the great Scandal of Cognitive Science: the lack of any consensus-commanding definition, let alone explanation, of what cognition is. The naturalistic inscrutability versus the explanatory ineliminability of intentionality is the perennial impasse, the ‘Master Hard Problem,’ one might say, underwriting the aforementioned Scandal.

Radicalizing Enactivism: Basic Minds without Content, by Dan Hutto and Erik Myin, constitutes another attempt to finesse this decidedly uncomfortable situation. Both Hutto and Myin are proponents of the ‘enactive,’ or ‘embodied,’ cognitive research programme, an outlook that emphasizes understanding cognition, and even phenomenal consciousness, in environmentally holistic terms—as ‘wide’ or ‘extended.’ The philosophical roots of enactivism are various and deep,[3] but they all share a common antagonism to the representationalism that characterizes mainstream cognitive science. Once one defines cognition in terms of computations performed on representations, one has effectively sealed cognition inside the head. Where enactivists are prone to explicitly emphasize the continuity of cognition and behaviour, representationalists are prone to implicitly assume their discontinuity. Even though animal life so obviously depends on solving environments via behaviour, both in its evolutionary genesis and in its daily maintenance, representationalists generally think this behavioural solving of the world is the product of a prior cognitive solving of representations of the world. The wide cognition championed by the enactivist, therefore, requires the critique of representationalism.

This is the task that Hutto and Myin set themselves. As they write, “We will have succeeded if, having reached the end of the book, the reader is convinced that the idea of basic contentless minds cannot be cursorily dismissed; that it is a live option that deserves to be taken much more seriously than it is currently” (xi).

As much as I enjoyed the book, I’m not so sure they succeed. But I’ve been meaning to discuss the relation between embodied cognitive accounts and the Blind Brain Theory for quite some time and Radicalizing Enactivism presents the perfect opportunity to finally do so. I know of a few souls following Three Pound Brain who maintain enactivist sympathies. If you happen to be one of them, I heartily encourage you to chip in your two cents.

Without any doubt, the strength of Radicalizing Enactivism, and the reason it seems to have garnered so many positive reviews, lies in the lucid way Hutto and Myin organize their critique around what they call the ‘Hard Problem of Content’:

“Defenders of CIC [Cognition necessarily Involves Content] must face up to the Hard Problem of Content: that positing informational content is incompatible with explanatory naturalism. The root trouble is that Covariance doesn’t Constitute Content. If covariance is the only scientifically respectable notion of information that can do the work required by explanatory naturalists, it follows that informational content does not exist in nature—or at least it doesn’t exist independently from and prior to the existence of certain social practices. If informational content doesn’t exist in nature, then cognitive systems don’t literally traffic in informational content…” xv

The information they are referring to here is semantic information, or as Floridi puts it in his seminal The Philosophy of Information, “the kind of information that we normally take to be essential for epistemic purposes” (82). To say that cognition necessarily involves content is to say that cognition amounts to the manipulation of information about. The idea is as intuitive as can be: the senses soak up information about the world, which the brain first cognizes then practically utilizes. For most theorists, the truth of this goes without saying: the primary issue is one of the role truth plays in semantic information. For these theorists, the problem that Hutto and Myin allude to, the Hard Problem of Content, is more of a ‘going concern’ than a genuine controversy. But if anything this speaks to its intractability as opposed to its relevance. For Floridi, who calls it the Symbol Grounding Problem (following Harnad (1990)), it remains “one of the most important open questions in the philosophy of information” (134). As it should, given that it is the question upon which the very possibility of semantic information depends.

The problem is one of explaining how information understood as covariance, which can be quantified and so rigorously operationalized, comes to possess the naturalistically mysterious property of ‘aboutness,’ and thus the equally mysterious property of ‘evaluability.’ As with the Hard Problem of Consciousness, many theoretical solutions have been proposed and all have been found wanting in some obvious respect.
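
The covariance notion at issue is worth pinning down. As a toy illustration (my example, not Hutto and Myin’s; the setup and numbers are invented): a ‘sensor’ that merely tracks a world state carries a perfectly computable quantity of Shannon information about it, yet nothing in the computation mentions meaning, reference, or the possibility of being wrong.

    import math, random

    # A sensor s that covaries with a binary world state w, tracking it 90% of the time.
    random.seed(0)
    pairs = [(w, w if random.random() < 0.9 else 1 - w)
             for w in (random.randint(0, 1) for _ in range(10000))]

    def prob(event):
        # Empirical probability of an event over the sampled pairs.
        return sum(1 for w, s in pairs if event(w, s)) / len(pairs)

    # Mutual information I(W;S) = sum over (w,s) of p(w,s) * log2(p(w,s) / (p(w)p(s))).
    mi = 0.0
    for w0 in (0, 1):
        for s0 in (0, 1):
            pj = prob(lambda w, s: w == w0 and s == s0)
            pw = prob(lambda w, s: w == w0)
            ps = prob(lambda w, s: s == s0)
            if pj > 0:
                mi += pj * math.log2(pj / (pw * ps))

    print(round(mi, 3), "bits of covariance-information")

The evaluability, the fact that s could be ‘about’ w and so wrong about w, appears nowhere in the arithmetic. That is the Hard Problem of Content in miniature.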

Calling the issue ‘the Hard Problem of Content’ is both justified and rhetorically inspired, given the way it imports the obvious miasma of Consciousness Research into the very heart of Cognitive Science. Hutto and Myin wield it the way the hero wields a wooden stake in a vampire movie. They patiently map out the implicatures of various content-dependent approaches, show how each of them copes with various challenges, and then finally hammer the Hard Problem of Content through their conceptual heart.

And yet, since this problem has always been a problem, there’s a sense in which Hutto and Myin are demanding that intentionalists bite a bullet (or stake) they bit long ago. This has the effect of rendering much of their argument rhetorical—at least it did for me. The problem isn’t that the intentionalists haven’t been able to naturalize intentionality in any remotely convincing way; the problem is that no one has—including Hutto and Myin!

And this, despite all the virtues of this impeccably written and fascinating book, has to be its signature weakness: the fact that Hutto and Myin never manage to engage, let alone surmount, the apparent ineliminability of the intentional. All they really do is exorcise content from what they call ‘basic’ cognition and perception, all the while conceding the ineliminability of content to language and ‘social scaffolding.’ The more general concession they make to explanatory ineliminability is actually explicit in their thesis “that there can be intentionally directed cognition and, even, perceptual experience without content” (x).

So if you read this book hoping to be illuminated as to the nature of the intentional, you will be disappointed. As much as Hutto and Myin would like to offer illumination regarding intentionality, all they really have is another strategic alternative in the end, a way to be less worried about the naturalistic inscrutability of content in particular rather than intentionality more generally. At turns, they come just short of characterizing Radical Enactive Cognition the way Churchill famously characterized democracy: as the least worst way to conceptualize cognition.

So in terms of the Master Hard Problem of naturalistic inscrutability versus explanatory ineliminability, they also find it necessary to bite the inscrutability bullet, only as softly as possible lest anyone hear. They are not interested in any thoroughgoing content skepticism, or what they call ‘Really Radical Enactive or Embodied Cognition’: “Some cognitive activity—plausibly, that associated with and dependent upon the mastery of language—surely involves content” (xviii). Given that their Hard Problem of Content partitions the Master Problem along such narrow, and ultimately arbitrary, lines, it becomes difficult to understand why anyone should think their position ‘radical’ in any sense.

If they’re not interested in any thoroughgoing content skepticism, they’re even less interested in any thoroughgoing meaning skepticism. Thus the sense of conceptual opportunism that haunted my reading of the book: the failure to tackle the problem of intentionality as a whole lets them play fast and loose with the reader’s intuitions of explanatory ineliminability. Representational content, after all, is the traditional and still (despite the restlessness of graduate students around the world) canonical way of understanding ‘intentional directedness.’ Claiming that representational content runs afoul of inscrutability amounts to pointing out the obvious. This means the problem lies in its apparent ineliminability. Pointing out that the representational mountain cannot be climbed simply begs the question of how one gets around it. Systematically avoiding this question lets Hutto and Myin have it both ways, to raise the problem of inscrutability where it serves their theoretical interests, all the while implicitly assuming the very ineliminability that justifies it.

One need only compare the way they hold Tyler Burge (2010) accountable to the Hard Problem of Content in Chapter 6 with their attempt to circumvent the Hard Problem of Consciousness in Chapter 8. Burge accepts both inscrutability, the apparent inability to naturalize intentionality, and ineliminability, the apparent inability to explain cognition without intentionality. Like Bechtel, he thinks representational inscrutability is irrelevant insofar as cognitive science has successfully operationalized representations. Rather than offer a ‘straight solution’ to the Hard Problem of Content, Burge argues that we should set it aside, and allow science—and the philosophy concerned with it—to continue pursuing achievable goals.

Hutto and Myin complain:

“Without further argumentation, Burge’s proposal is profoundly philosophically unsatisfying. Even if we assume that contentful states of mind must exist because they are required by perceptual science, this does nothing to address deeply puzzling questions about how this could be so. It is, in effect, to argue from the authority of science. We are asked to believe in representational content even though none of the mysteries surrounding it are dealt with—and perhaps none of them may ever be dealt with. For example, how do the special kinds of natural norms of which Burge speaks come into being? What is their source, and what is their basis? How can representational contents qua representational contents cause, or bring about, other mental or physical events?” 116-117

When it comes to the Hard Problem of Consciousness, however, Hutto and Myin find themselves whistling an argumentative tune that sounds eerily similar to Burge’s. Like Burge, they refuse to offer any ‘straight solutions,’ arguing that “[r]ather than presenting science and philosophy with an agenda of solving impossible problems, [their] approach liberates both science and philosophy to pursue goals they are able to achieve” (178). And since this is the last page of the book, no corresponding problem of ‘profound philosophical dissatisfaction’ ever arises.

The problem of Radicalizing Enactivism—and the reason why I think it will ultimately harden opinions against the enactivist programme—lies in its failure to assay the shape of what I’ve been calling the Master Problem of naturalistic inscrutability and explanatory ineliminability. The inscrutability of content is simply a small part of this larger problem, which involves, not only the inscrutability of intentionality more generally, but the all-important issue of ineliminability as well, the fact that various ‘intentional properties’ such as evaluability so clearly seem to belong to cognition. By focussing on the inscrutability of content to the exclusion of the Master Problem, they are able to play on specific anxieties due to inscrutability without running afoul of more general scruples regarding ineliminability. They can eat their intentional cake and have it too.[4]

Personally, I’m inclined to agree with the more acerbic critics of so-called ‘radical,’ or anti-representationalist, enactivism: it simply is not a workable position.[5] But I think I do understand its appeal, why, despite forcing its advocates to fudge and dodge the way they seem to do on what otherwise seem to be relatively straightforward issues, it nevertheless continues to grow in popularity. First and foremost, the problem of inscrutability has grown quite long in the tooth: after decades of pondering this problem, our greatest philosophical minds have only managed to deepen the mire. Add to this the successes of DST and situated AI, plus the simple observation that we humans are causally embedded in—‘coupled to’—our causal environments, and it becomes easy to see how mere paradigm fatigue can lapse into outright paradigm skepticism.

I think Hutto and Myin are right in insisting that representationalism has been played out, that it’s time to move on. The question is really only one of how far we have to move. I actually think this, the presentiment of needing to get away, to start anew, is why ‘radical’ has become such a popular modifier in embodied cognition circles. But I’m not sure it’s a modifier that any of these positions necessarily deserve. I say this because I’m convinced that answering the Master Problem of inscrutability versus ineliminability forces us to move far, far further than any philosopher (that I know of at least) has hitherto dared to go. The fact is Hutto and Myin remain intentionalists, plain and simple. To put it bluntly: if they count as ‘radical,’ then they better lock me up, because I’m just plain crazy![6]

If I’m right, the only way to drain the inscrutability swamp is to tackle the problem of inscrutability whole, which is to say, to tackle the Master Problem. So long as inscrutability remains a problem, the strategy of partitioning intentionality into ‘good’ and ‘bad,’ eliminable and ineliminable—the strategy that Hutto and Myin share with representationalists more generally—can only lead to a reorganization of the controversy. Perhaps one of these reorganizations will turn out to be the lucky winner—who can say?—but it’s important to see that Radical Enactive Cognition, despite its claims to the contrary, amounts to ‘more of the same’ in this crucial respect. All things being equal, it’s doomed to complicate as opposed to solve, insofar as it merely resituates (in this case, literally!) the problem of inscrutability.

Now I’m an institutional outsider, which is rarely a good thing if you have a dramatic reconceptualization to sell. When matters become this complicated, professionalization allows us to sort the wheat from the chaff before investing time and effort in either. The problem, however, is that chaff seems to be all anyone has. What I’m calling the Scandal of Cognitive Science represents as clear an example of institutional failure as you will find in the sciences. Given that the problem of inscrutability turns on explicit judgments and implicit assumptions that have been institutionalized, there’s a sense in which hobbyists such as myself, individuals who haven’t been stamped by the conceptual prejudices of their supervisors, or shamed out of pursuing an unconventional line of reasoning by the embarrassed smiles of their peers, may actually have a kind of advantage.

Regardless, there are novel ways to genuinely radicalize this problem, and if they initially strike you as ‘crazy,’ it might just be because they are sane. The Scandal of Cognitive Science, after all, is the fact that its members have no decisive means to judge one way or another! So, with this in mind, I want to introduce what might be called ‘Just Plain Crazy Enactive Cognition’ (JPCEC), an attempt to apply Hutto and Myin’s ultimately tendentious dialectical use of inscrutability across the board—to solve the Master Problem of naturalistic inscrutability and explanatory ineliminability, in effect. It can be done—I actually think cognitive scientists of the future will smirk and shake their heads, reviewing the twist we presently find ourselves in, but only because they will have internalized something similar to the decidedly alien view I’m about to introduce here.

For reasons that should become apparent, the best way to introduce Just Plain Crazy Enactive Cognition is to pick up where Hutto and Myin end their argument for Radical Enactive Cognition: the proposed solution to the Hard Problem of Consciousness they offer in Chapter 8. The Hard Problem of Consciousness, of course, is the problem of explaining phenomenal properties in naturalistic terms of physical structures and dynamics. In accordance with their enactivism, Hutto and Myin hold that phenomenality is environmentally determined in certain important respects. Since ‘wide phenomenality’ is incompatible with qualia as normally understood, this entails qualia eliminativism, which warrants rejecting the explanatory gap—the Hard Problem of Consciousness. They adopt the Dennettian argument that the Hard Problem is impossible to solve given the definition of qualia as “intrinsically qualitative, logically private, introspectable, incomparable, ineffable, incorrigible entities of our mental acquaintance” (156). And since impossible questions warrant no answers, they refuse to listen:

“What course do we recommend? Stick with [Radical Enactive Cognition] and take phenomenality to be nothing but forms of activities—perhaps only neural—that are associated with environment-involving interactions. If that is so, there are not two distinct relata—the phenomenal and the physical—standing in a relation other than identity. Lastly, come to see that such identities cannot, and need not be explained. If so, the Hard Problem totally disappears.” 169

When I first read this, I wrote ‘Wish It Away Strategy?’ in the margin. On my second reading, I wrote, ‘Whew! I’m glad consciousness isn’t a baffling mystery anymore!’

The first note was a product of ignorance; I simply didn’t know what was coming next. Hutto and Myin adopt a variant of the Type B Materialist response to the Hard Problem, admitting that there is an explanatory gap, while denying any ontological gap. Conscious experiences and brain-states are considered identical, though the phenomenal and physical concepts we use to communicate them are systematically incompatible. It is the difference between the latter that fools us into imputing some kind of ontological difference between the former, giving license to innumerable, ultimately unanswerable questions. Ontological identity means there is no Hard Problem to be solved. Conceptual difference means that phenomenal vocabularies cannot be translated into physical vocabularies, that the phenomenal is ‘irreducible.’ As a result, the phenomenal character of experience cannot be physically explained—it is entirely natural, but utterly inexplicable in natural terms.

But Hutto and Myin share the standard objection against Type B Materialisms: their inability to justify their foundational identity claim.

“Standard Type B offerings therefore fail to face up to the root challenge of the Hard Problem—they fail to address worries about the intelligibility of making certain identity claims head on. They do nothing to make the making of such claims plausible. The punch line is that to make a credible case for phenomeno-physical identity claims it is necessary to deal with—to explain away—appearances of difference in a more satisfactory way than by offering mere stipulations.” 174

Short of some explanation of the apparent difference between conscious experiences and brain states, in other words, Type B approaches can only be ‘wish it away strategies.’ The question accordingly becomes one of motivating the identity of the phenomenal and the physical. Since Hutto and Myin think the naturalistic inscrutability of phenomenality renders standard scientific identification impossible, they argue that the practical, everyday identity between the phenomenal and the physical we implicitly assume amply warrants the required identification. And as it turns out, this implicit everyday identity is extensive or wide:

“Enactivists foreground the ways in which environment-involving activities are required for understanding and conceiving of phenomenality. They abandon attempts to explain phenomeno-physical identities in deductive terms for attempts to motivate belief in such identities by reminding us of our common ways of thinking and talking about phenomenal experience. Continued hesitance to believe in such identities stems largely from the fact that experiences—even if understood as activities—are differently encountered by us: sometimes we live them through embodied activity and sometimes we get at them only descriptively.” 177

Thus the second comment I wrote while reading the above passage!

What ‘motivates’ the enactive Type B materialist’s identity claim, in other words, is simply the identity we implicitly assume in our worldly engagements, an identity that dissolves because of differences intrinsic to the activity of theoretically engaging phenomenality.

I’m assuming that Hutto and Myin use ‘motivate,’ rather than ‘justify,’ simply because it remains entirely unclear why the purported assumption of identity implicit in embodied activity should trump the distinctions made by philosophical reflection. As a result, the force of this characterization is not so much inferential as it is redemptive. It provides an elegant enough way to rationalize giving up on the Hard Problem via assumptive identity, but little more. Otherwise it redeems the priority of lived life, and, one must assume, all the now irreducible intentional phenomena that go with it.

The picture they paint has curb appeal, no doubt about that. In terms of our Master Hard Problem, you could say that Radical Enactivism uses ‘narrow inscrutability’ to ultimately counsel (as opposed to argue) wide ineliminability. All we have to be is eliminativists about qualia and non-linguistic content, and the rest of the many-coloured first-person comes for free.

The problem—and it is a decisive one—is that redemption just ain’t a goal of naturalistic inquiry, no matter how speculative. Since our cherished, prescientific assumptions are overthrown more often than not, a theory’s ability to conserve those assumptions (as opposed to explain them) should warn us away, if anything. The rational warrant of Hutto and Myin’s recommendation lies entirely in assuming the epistemic priority of our implicit assumptions, and this, unfortunately, is slender warrant indeed, presuming, as it does, that when it comes to this one particular yet monumental issue—the identity of the physical and the phenomenal—we’re better philosophers when we don’t philosophize than when we do!

Not surprisingly, questions abound:

1) What, specifically, is the difference between ‘embodied encounters’ and ‘descriptive’ ones?

2) Why are the latter so prone to distort?

3) And if the latter are so prone to distort, to what extent is this description of ‘embodied activity’ potentially distorted?

4) What is the nature of the confounds involved?

5) Is there any way to puzzle through parts of this problem given what the sciences of the brain already know?

6) Is it possible to hypothesize what might be going on in the brain, such that we find ourselves in such straits?

As it turns out, these questions are not only where Radical Enactive Cognition ends, but also where Just Plain Crazy Enactive Cognition begins. Hutto and Myin can’t pose these questions because their ‘motivation’ consists in assuming we already implicitly know all that we need to know to skirt (rather than shirk) the Hard Problem of Consciousness. Besides, their recommendation is to abandon the attempt to naturalistically answer the question of the phenomeno-physical relation. Any naturalistic inquiry into the question of how theoretical reflection distorts the presumed ‘whole’ (‘integral,’ or ‘authentic’) nature of our implicit assumption would seem to require some advance, naturalistic understanding of just what is being distorted—and we have been told that no such understanding is possible.

This is where JPCEC begins, on the other hand, because it assumes that the question of inscrutability and ineliminability is itself an empirical one. Speculative recommendations such as Hutto and Myin’s only possess the intuitive force they do because we find it impossible to imagine how the intentional and the phenomenal could be rendered compatible with the natural. Given the conservative role that failures of imagination have played in science historically, JPCEC assumes the solution lies in the same kind of dogged reimagination that has proven so successful in the past. Given that the intentional and the phenomenal are simply ‘more nature,’ the claim that they represent something so extraordinary, either ontologically or epistemologically, as to be somehow exempt from naturalistic cognition has to be thought extravagant in the extreme. Certainly it would be far more circumspect to presume that we simply don’t know.

And here is where Just Plain Crazy Enactive Cognition sets its first, big conceptual wedge: not only does it assume that we don’t know—that the hitherto baffling question of the first person is an open question—it asks the crucial question of why we don’t know. How is it that the very thing we once implicitly and explicitly assumed was the most certain, conscious experience, has become such a dialectical swamp?

The JPCEC approach is simple: Noting the role the scarcity of information plays in the underdetermination of scientific theory more generally, it approaches this question in these very terms. It asks, 1) What kind of information is available for deliberative, theoretical metacognition? 2) What kind of cognitive resources can be brought to bear on this information? And 3) Are either of these adequate to the kinds of questions theoreticians have been asking?

And this has the remarkable effect of turning contemporary Philosophy of Mind on its head. Historically, the problem has been one of explaining how physical structure and dynamics could engender the first-person in either its phenomenal or intentional guises. The problem, in other words, is traditionally cast in terms of accomplishment. How could neural structure and dynamics generate ‘what is it likeness’? How could causal systems generate normativity? The problem of inscrutability is simply a product of our perennial inability to answer these questions in any systematically plausible fashion.[7]

Just Plain Crazy Enactive Cognition inverts this approach. Rather than asking how the brain could possibly generate this or that apparent feature of the first-person, it asks how the brain could possibly cognize any such features in the first place. After all, it takes a tremendous amount of machinery to accurately, noninferentially cognize our environments in the brute terms we do: How much machinery would be required to accurately, noninferentially cognize the most complicated mechanism in the known universe?[8]

JPCEC, in other words, begins by asking what the brain likely can and cannot metacognize. And as it turns out, we can make a number of safe bets given what we already know. Taken together, these bets constitute what I call the Blind Brain Theory, or BBT, the systematic explanation of phenomenality and intentionality via human cognitive and metacognitive—this is the important part—incapacity.

Or in other words, neglect. The best way to explain the peculiarity of our phenomenal and intentional inklings is via a systematic account of the information (construed as systematic differences making systematic differences) that our brain cannot access or process.

So consider the unity of consciousness, the feature that most convinced Descartes to adopt dualism. Where the tradition wonders how the brain could accomplish such a thing, BBT asks how the brain could accomplish anything else. Distinctions require information. Flickering lights fuse in experience once their frequency surpasses our detection threshold. What looks like paint spilled on the sidewalk from a distance turns out to be streaming ants. Given that the astronomical complexity of the brain far and away outruns its ability to cognize complexity, the miracle, from the metacognitive standpoint, would be the high-dimensional intuition of the brain as an externally related multiplicity.
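The point is concrete enough to simulate. Here is a minimal sketch (in Python; the integration window and stimulus rates are invented stand-ins for our actual detection thresholds) of a ‘perceiver’ that can only average over time: fast flicker fuses into steady light not because fusion is achieved, but because the distinguishing information never survives integration.

```python
import numpy as np

def perceived(signal, window):
    """'Perception' as temporal integration: average the stimulus over a
    sliding window, neglecting every finer-grained difference."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="valid")

t = np.arange(10_000)                           # time steps, arbitrary units
slow_flicker = ((t // 500) % 2).astype(float)   # half-period far above the window
fast_flicker = ((t // 5) % 2).astype(float)     # half-period far below the window
steady_light = np.full(t.shape, 0.5)

window = 100                                    # stand-in 'detection threshold'
for name, stim in [("slow", slow_flicker), ("fast", fast_flicker), ("steady", steady_light)]:
    # Peak-to-peak range of the 'experienced' signal: distinctions
    # require information the integrator retains.
    print(name, round(float(np.ptp(perceived(stim, window))), 2))
# slow 1.0    -> flicker survives integration
# fast 0.0    -> fused: indistinguishable from the steady light
# steady 0.0
```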

As it turns out, many of the perplexing features of the first-person can be understood in terms of information privation. Neglect provides a way to causally characterize the narrative granularity of the ‘mind,’ to naturalize intentionality and phenomenality, in effect. And in doing so it provides a parsimonious and comprehensive way to understand both naturalistic inscrutability and explanatory ineliminability. What I’ve been calling JPCEC, in other words, allows us to solve the Master Hard Problem.[9]

It turns on two core claims. First, it agrees with the consensus opinion that cognition and perception are heuristic, and second, it asserts that social cognition and metacognition in particular are radically heuristic.

To say that cognition and perception are heuristic is to say they exploit the structure of a given problem ecology to effect solutions in the absence of other relevant information. This much is widely accepted, though few have considered its consequences in any detail. If all cognition is heuristic, then all cognition possesses 1) a ‘problem ecology,’ as Todd and Gigerenzer term it (2012), some specific domain of reliability, and 2) a blind spot, an insensitivity, structural or otherwise, to information pertinent to the problem.
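Gigerenzer’s recognition heuristic offers a handy illustration of both features. The following toy sketch (the cities and the ‘recognized’ sets are invented for illustration) shows a rule that succeeds precisely by neglecting every cue but one—so long as it stays inside the ecology it was shaped for:

```python
def recognition_heuristic(a, b, recognized):
    """Infer that the recognized city is larger; neglect every other cue."""
    if a in recognized and b not in recognized:
        return a
    if b in recognized and a not in recognized:
        return b
    return None  # silent outside its problem ecology (both or neither known)

# Ecology 1: recognition covaries with the criterion (big cities get
# talked about more), so blindness to other cues is harmless.
print(recognition_heuristic("Berlin", "Bielefeld", {"Berlin", "Munich"}))
# -> 'Berlin' (correct: the bigger city)

# Ecology 2: recognition tracks something else entirely (notoriety, say),
# and the very same blind spot now yields systematic error.
print(recognition_heuristic("Chernobyl", "Kyiv", {"Chernobyl"}))
# -> 'Chernobyl' (wrong: Kyiv is far larger)
```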

To understand the second core claim—the idea that social cognition and metacognition are radically heuristic—one has to appreciate that wider heuristic blind spots generally mean narrower problem ecologies (though this need not always be the case). Given the astronomical complexity of the human brain—or any brain for that matter—we must presume that our heuristic repertoire for solving brains, whether belonging to others or belonging to ourselves, involves extremely wide neglect, which in turn implies very narrow problem ecologies. So if it turns out that metacognition is primarily adapted to things like refining practical skills, consuming the activities of the default mode, and regulating social performance, then it becomes a real question whether it possesses the cognitive and/or informational resources required to solve the kinds of problems philosophers are prone to ponder. Philosophical reflection on the ‘nature of knowledge’ could be akin to using a screwdriver to tighten bolts! The fact that we generally have no metacognitive inkling of swapping between different cognitive tools whatsoever pretty clearly suggests it very well might be—at least when it comes to theorizing things such as ‘knowledge’![10]

At this point it’s worth noting how this way of conceiving cognition and perception amounts to a kind of ‘subpersonal enactivism.’ To say cognition is heuristic and fractionate is to say that cognition cannot be understood independent of environments, no more than a screwdriver can be understood independent of screws. It’s also worth noting how this simply follows from the mechanistic paradigm of the natural sciences. Humans are just another organic component of their natural environments: emphasizing the heuristic, fractionate nature of cognition and perception allows us to investigate our ‘dynamic componency’ in a more detailed way, in terms of specific environments cuing specific heuristic systems cuing specific behaviours and so on.[11]

But if this subpersonal enactivism is so obvious—if ‘cognitive componency’ simply follows from the explanatory paradigm of the natural sciences—then why all the controversy? Why should ‘enactive’ or ‘embodied’ cognition even be a matter of debate? What motivates the opportunistic eliminativism of Radical Enactive Cognition, remember, is the way content has the tendency to ‘internalize’ cognition, to narrow it to the head. Once the environment is rolled up into the representational brain, trouble-shooting the environment becomes intracranial. So, if one can find some way around the apparent explanatory ineliminability of content, one can simply assert the cognitive componency implied by the mechanistic paradigm of natural science. And this, remember, was what made Hutto and Myin’s argument more deceptive than illuminating. Rather than focus on ineliminability, they turned to inscrutability, the bullet everyone—including themselves!—has already implicitly or explicitly bitten.

Just Plain Crazy Enactive Cognition, however, diagnoses the problem in terms of metacognitive neglect. Content, as it turns out, isn’t the only way to short-circuit the apparent obviousness of cognitive componency. One might ask, for instance, why it took us so damn long to realize the fractionate, heuristic nature of our own cognitive capacities. Metacognitive neglect provides an obvious answer: Absent any way of making the requisite distinctions, we simply assumed cognition was monolithic and universal. Absent the ability to discriminate environmentally dependent cognitive functions, it was difficult to see cognition as a biological component of a far larger, ‘extensive’ mechanism. A gear that can turn every wheel is no gear at all.

‘Simples’ are cheaper to manage than ‘complexes’ and evolution is a miser. We cognize/metacognize persons rather than subpersonal assemblages because this was all the information our ancestors required. Not only is metacognition blind to the subpersonal, it is blind to the fact that it is blind: as far as it’s concerned, the ‘person’ is all there is. Evolution had no clue we would begin reverse-engineering her creation, begin unearthing the very causal information that our social and metacognitive heuristic systems are adapted to neglect. Small wonder we find ourselves so perplexed! Every time we ask how this machinery could generate ‘persons’—rational, rule-following, and autonomous ‘agents’—we’re attempting to understand the cognitive artifact of a heuristic system designed to problem solve in the absence of causal information in terms of causal information. Not surprisingly, we find ourselves grinding our heuristic gears.

The person, naturalistically understood, can be seen as a kind of strategic simplification. Given the abject impossibility of accurately intuiting itself, the brain only cognizes itself so far as it once paid evolutionary dividends and no further. The person, which remains naturalistically inscrutable as an accomplishment (How could physical structure and dynamics generate ‘rational agency’?) becomes naturalistically obvious, even inevitable, when viewed as an artifact of neglect.[12] Since intuiting the radically procrustean nature of the person requires more information, more metabolic expense, evolution left us blessedly ignorant of the possibility. What little we can theoretically metacognize becomes an astounding ‘plenum,’ the sum of everything to be metacognized—a discrete and naturalistically inexplicable entity, rather than a shadowy glimpse serving obscure ancestral needs. We seem to be a ‘rational agent’ before all else…

Until, that is, disease or brain injury astounds us.[13]

This explanatory pattern holds for all intentional phenomena. Intentionality isn’t so much a ‘stance’ we take to systems, as Dennett argues, as it is a particular family of heuristic mechanisms adapted to solve certain problem ecologies. Intentionality, in other words, is mechanical—which is to say, not intentional. Resorting to these radically heuristic mechanisms may be the only way to solve a great number of problems, but it doesn’t change the fact that what we are actually doing, what is actually going on in our brain, is natural like anything else, mechanical. The fact that you, me, or anyone exploits the heuristic efficiency of terms like ‘exploit’ no more presupposes any implicit commitment to the priority, let alone the ineliminability, of intentionality than reliance on naive physics implies the falsehood of quantum mechanics.

This has to be far and away the most difficult confound to surmount: the compulsion to impute efficacy to our metacognitive inklings. So it seems that what we call ‘rationality,’ even though it so obviously bears all the hallmarks of informatic underdetermination, must in some way drive ‘action.’ As the sum of what our brain can cognize of its activity, our brain assumes that it exhausts that activity. It mistakes what little it cognizes for the breathtaking complexity of what it actually is. The granular shadows—‘reasons,’ ‘rules,’ ‘goals,’ and so on—seem to cast the physical structure and dynamics of the brain, rather than vice versa. The hard-won biological efficacy of the brain is attributed to some mysterious, reason-imbibing, judgment-making ‘mind.’

Metacognitive incapacity simply is not on the metacognitive menu. Thus the reflexive, question-begging assumption that any use of normative terms presupposes normativity rather than the spare mechanistic sketch provided above.

Here we can clearly see both the form of the Master Hard Problem and the way to circumvent it. Intentionality seems inscrutable to naturalistic explanation because intentional heuristics are adapted to solve problems in the absence of pertinent causal information—the very information naturalistic explanation requires. Metacognitive blindness to the fractionate, heuristic nature of cognition also means metacognitive blindness to the various problem ecologies those heuristics are adapted to solve. In the absence of information (difference making differences), we historically assumed simplicity, a single problem ecology with a single problem solving capacity. Only the repeated misapplication of various heuristics over time provided the information needed to distinguish brute subcapacities and subecologies. Eventually we came to distinguish causal and intentional problem-solving, and to recognize their peculiar, mutual antipathy as well. But so long as metacognition remained blind to metacognitive blindness, we persisted in committing the Accomplishment Fallacy, cognizing intentional phenomena as they appeared to metacognition as accomplishments, rather than side-effects of our brain’s murky sense of itself.

So instead of seeing cognition wholly in enactive terms of componency—which is to say, in terms of mechanistic covariance—we found ourselves confronted by what seemed to be obvious, existent ‘intentional properties.’ Thus explanatory ineliminability, the conviction that any adequate naturalistic account of cognition would have to naturalistically account for intentional phenomena such as evaluability—the very properties, it so happens, that underwrite the attribution of representational content to the brain.

So, where Radical Enactive Cognition is forced to ignore the Master Problem in order to opportunistically game the problem of naturalistic inscrutability (in its restricted representationalist form) to its own advantage, Just Plain Crazy Enactive Cognition is able to tackle the problem whole by simply turning the traditional accomplishment paradigm upside down. The theoretical disarray of cognitive science, it claims, is an obvious artifact of informatic underdetermination. What distinguishes this instance of underdetermination is the degree to which it turns on the invisibility of metacognitive incapacity, the way cognizing the insufficiency of the information and resources available to metacognition requires more information and resources. This generates the illusion of metacognitive sufficiency, the implicit conviction that what we intuit is what there is…

That we actually possess something called a ‘mind.’

Thus the ‘Just Plain Crazy’—the Blind Brain Theory offers nothing by way of redemption, only what could be the first naturalistically plausible way out of the traditional maze. On BBT, ‘consciousness’ or ‘mind’ is just the brain seen darkly.

In Hutto and Myin’s account of Radical Enactive Cognition, considerations of the kinds of conceptual resources various positions possess to tackle various problems figure large. The more problem solving resources a position possesses the better. In this respect, the superiority of JPCEC to REC should be clear already: insofar as REC, espousing both inscrutability and ineliminability, actually turns on the Master Hard Problem, it clearly lacks the conceptual resources to solve it.

But surely more is required. Any position that throws out the baby of explanatory ineliminability with the bathwater of naturalistic inscrutability has a tremendous amount of ‘splainin’ to do. In his Radical Embodied Cognitive Science, Anthony Chemero does an excellent job illustrating the ‘guide to discovery’ objection to antirepresentationalist approaches to cognition such as his own. He relates the famous debate between Ernst Mach and Ludwig Boltzmann regarding the role of ‘atoms’ in physics. For Mach, atoms amounted to an unnecessary fairy-tale posit, something that serious physicists did not need to carry out their experimental work. In his 1900 “The Recent Development of Method in Theoretical Physics,” however, Boltzmann turned the tide of the debate by showing how positing atoms had played an instrumental role in generating a number of further discoveries.

The power of this argumentative tactic was brought home to me in a recent talk by Bill Bechtel,[14] who presented his own guide to discovery argument for representationalism by showing the way representational thinking facilitated the discovery of place and grid cells and the role they play in spatial memory and navigation. Chemero, given his pluralism, is more interested in showing that radical embodied approaches possess their own pedigree of discoveries. In Radicalizing Enactivism, Hutto and Myin seem more interested in simply blunting the edge of these arguments and moving on. In their version, they stress the fact that scientists actually don’t talk about content and representation all that much. Bechtel, however, was at pains to show that they do! And why shouldn’t they, he would ask, given that we find ‘maps’ scattered throughout the brain?

The big thing to note here is the inevitability of argumentative stalemate. Neither side possesses the ‘conceptual resources’ to do much more than argue about what actual researchers actually mean or think and how this bears on their subsequent discoveries. Insofar as it possesses the ‘he-said-she-said’ form of a domestic spat, you could say this debate is tailor-made to be intractable. Who the hell knows what anyone is ‘really thinking’? And it seems we make discoveries both positing representations and positing their absence!

Just Plain Crazy Enactive Cognition, however, possesses the resources to provide a far more comprehensive, albeit entirely nonredemptive, view. It begins by reminding us that any attempt to understand the brain necessarily involves the brain. It reminds us, in other words, of the subpersonally enactive nature of all research, that it involves physical systems engaging other physical systems. Insofar as researchers have brains, this has to be the case. The question then becomes one of how representational cognition could possibly fit into this thoroughly mechanical picture.

Pointing out our subpersonal relation to our subject matter is all well and fine. The problem is one of connecting this picture to our intuitive, intentional understanding of our relation. Given the appropriate resources, we could specify all the mechanical details of the former relation—we could cobble together an exhaustive account of all the systematic covariances involved—and still find ourselves unable to account for out-and-out crucial intentional properties such as ‘evaluability.’ Call this the ‘cognitive zombie hunch.’

Now the fact that ‘hard problems’ and ‘zombie hunches’ seem to plague all the varying forms of intentionality and phenomenality is certainly no coincidence. But if other approaches touch on this striking parallelism at all, they typically advert—the way Hutto and Myin do—to some vague notion of ‘conceptual incompatibility,’ one definitive enough to rationalize some kind of redemptive form of ‘irreducibility,’ and nothing more. On Just Plain Crazy Enactive Cognition, however, these are precisely the kinds of problems we should expect given the heuristic character of the cognitive systems involved.

To say that cognition is heuristic, recall, is to say, 1) that it possesses a given problem-ecology, and 2) that it neglects otherwise relevant information. As we have seen, (1) warrants what I’ve been calling ‘subpersonal enactivism.’ The key to unravelling the knot of representationalism, of finding some way to square the purely mechanical nature of cognition with apparently self-evident intentional properties such as evaluability lies in (2). The problem, remember, is that any exhaustive mechanical account of cognition leaves us unable to account for the intentional properties of cognition. One might ask, ‘Where do these properties come from? What makes ‘evaluability,’ say, tick?’ But the problem, of course, is that we don’t know. What is more, we can’t even fathom what it would take to find out. Thus all the second-order attempts to reinterpret obvious ignorance into arcane forms of ‘irreducibility.’ But if we can’t naturalistically explain where these extraordinary properties come from, perhaps we can naturalistically explain where our idea of these extraordinary properties comes from…

Where else, if not metacognition?

And as we saw above, metacognition involves neglect at every turn. Any human brain attempting to cognize its own cognitive capacities simply cannot—for reasons of structural complicity (the fact that it is the very thing it is attempting to cognize) and target complexity (the fact that its complexity vastly outruns its ability to cognize complexity)—cognize those capacities the same way it cognizes its natural environments, which is to say, causally. The human brain necessarily suffers what might be called proximal or medial neglect. It constitutes its own blind spot, insofar as it cannot cognize its own functions in the same manner that it cognizes environmental functions.

One minimal phenomenological claim one could make is that the neurofunctionality that enables conscious cognition and experience is in no way evident in conscious cognition and experience. On BBT, this is a clear-cut artifact of medial neglect, the fact that the brain simply cannot engage the proximate mechanical complexities it requires to engage its distal environments. Solving itself, therefore, requires a special kind of heuristic, one cued to providing solutions in the abject absence of causal information pertaining to its actual neurofunctionality.

Think about it. You see trees, not trees causing you to see trees. Even though you are an environmentally engaged ‘tree cognizing’ system, phenomenologically you simply see… trees. All the mechanical details of your engagement, the empirical facts of your coupled systematicity, are walled off by neglect—occluded. Because they are occluded, ‘seeing trees’ not only becomes all that you can intuit, it becomes all that you need to intuit, apparently.
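A crude computational analogy (its details entirely hypothetical) makes the structural point: nothing in the output of a process need carry any information about the processing itself.

```python
def see(world_luminance):
    """A toy perceptual pipeline: only the distal verdict escapes."""
    transduced = world_luminance * 0.8      # 'photons' into signal
    contrast = transduced - 0.1             # early processing
    return "tree" if contrast > 0.5 else "no tree"

percept = see(1.0)
print(percept)  # 'tree' -- and nothing else
# Downstream consumers of `percept` get the tree, never the transduction,
# the contrast step, or the function's own machinery: those medial
# details are structurally unavailable, not merely overlooked.
```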

Thus ‘aboutness,’ or intentionality in Brentano’s restricted sense: given the structural occlusion of our componency, the fact that we’re simply another biomechanically embedded biomechanical system, problems involving our cognitive relation to our environments have to be solved in some other way, in terms not requiring this vast pool of otherwise relevant information. Aboutness is this alternative, the primary way our brains troubleshoot their cognitive engagements.

It’s important to note here that the ‘aboutness heuristic’ lies outside the brain’s executive purview, that its deployment is mandatory. No matter how profoundly we internalize our intellectual understanding of our componency, we see trees nevertheless. This is what makes aboutness so compelling: it constitutes our intuitive baseline.

So, when our brains are cued to troubleshoot their cognitive engagements they’re attempting to finesse an astronomically complex causal symphony via a heuristic that is insensitive to causality. This means that aboutness, even though it captures the brute cognitive relation involved, has no means of solving the constraints involved. Thus normativity, the hanging constraints (or ‘skyhooks’ as Dennett so vividly analogizes them) we somehow intuit when troubleshooting the accuracy of various aboutnesses. As a result, we cognize cognition as a veridical aboutness—in terms commensurate with subjectivity rather than componency.

Nor do we seem to have much choice. Our intuitive understanding of understanding as evaluable, intentional directedness seems to be reflexive, a kind of metacognitive version of a visual illusion. This is why thought experiments like Leibniz’s Mill or arguments like Searle’s Chinese Room rattle our intuitions so: because, for one, veridical aboutness heuristics have adapted to solve problems without causal information, and because deliberative metacognition, at least, cannot identify the heuristics as such and so assumes the universality of their application. Our intuitive understanding of understanding intuitively strikes us as the only game in town.

This is why the frame of veridical aboutness anchors countless philosophical chassis, why you find it alternately encrusted in the human condition, boiled down to its formal bones, pitched as the ground of mere experience, or painted as the whole of reality. For millennia, human philosophical thought has buzzed within it like a fly in an invisible Klein bottle, caught in the self-same dichotomies of subject and object, ideal and real.

Philosophy’s inability to clarify any of its particularities attests to its metacognitive informatic penury. Intentionality is a haiku—we simply lack the information and resources to pin any one interpretation to its back. And yet, as obviously scant as this picture is, we’ve presumed the diametric opposite historically, endlessly insisting, as if afflicted with a kind of theoretical anosognosia, that it provides the very frame of intelligibility rather than a radically heuristic way to solve for cognition.

Thus the theoretical compulsion that is representationalism. Given the occlusion of componency, or medial neglect, any instance of mistaken cognition necessarily becomes binary, a relation between. To hallucinate is to be directed at something not of the world, which is to say, at something other than the world. The intuitions underwriting veridical directedness, in other words, lend themselves to further intuitions regarding the binary structure of mistaken cognition. Because veridical aboutness constitutes our mandatory default problem solving mode, any account of mistaken cognition in terms of componency—in terms of mere covariance—seems not only counter-intuitive, but hopelessly procrustean as well, to be missing something impossible to explain and yet ‘obviously essential.’ Since the mechanical functions of cognition are themselves mandatory to scientific understanding, theorists feel compelled to map veridical aboutness onto those functions.

Thus the occult notion of mental and perceptual content, the ontological attribution of veridical aboutness to various components in the brain (typically via some semantic account of information).

Given that the function of veridical aboutness is to solve in the absence of mechanical information, it is perhaps surprising that it is relatively easy to attribute to various mechanisms. Mechanistic inscrutability, it turns out, is apparently no barrier to mechanistic applicability. But this actually makes a good deal of sense. Given that any component of a mechanism is a component by virtue of its dynamic, systematic interrelations with the rest of the mechanism, it can always be argued that any downstream component possesses implicit ‘information about’ other parts of the mechanism. When that component is dedicated, however, when it simply discharges the same function come what may, the ‘veridical’ aspect becomes hard to understand, and the attribution seems arbitrary. Like our intuitive sense of agency, veridicality requires ‘wiggle room.’ This is why the attribution possesses real teeth only when the component at issue plays a variable, regulatory function like, say, a Watt governor on a steam engine. As mechanically brute as a Watt governor is, it somehow still makes ‘sense’ to say that it is ‘right or wrong,’ performing as it ‘should.’ (Make no mistake: veridical aboutness heuristics do real cognitive work, just in a way that resists mechanical analysis—short of Just Plain Crazy Enactive Cognition, that is).
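The intuition is easy to exercise on a toy model. In the sketch below (the dynamics and constants are invented), the governor is nothing but coupled arithmetic, yet because its state covaries with, and corrects, engine speed—because it has ‘wiggle room’—talk of its performing as it ‘should’ comes naturally:

```python
TARGET_SPEED = 100.0   # revolutions per unit time, invented

def governor(speed):
    """Flyballs rise with speed; the valve closes as they rise."""
    return max(0.0, min(1.0, 1.0 - 0.5 * (speed - TARGET_SPEED)))

speed, load = 80.0, 0.5
for _ in range(50):
    valve = governor(speed)           # state covaries with speed
    torque = 2.0 * valve - load       # steam in minus load out
    speed += torque                   # crude engine dynamics
print(round(speed, 1))                # ~101.5: regulation, no 'reference' required
```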

The debate thus devolves into the blind (because we have no metacognitive inkling that heuristics are involved) application of competing heuristics. The representationalist generally emphasizes the component at issue, drawing attention away from the systematic nature of the whole to better leverage the sense of variability or ‘wiggle room’ required to cue our veridical intuitions. The anti-representationalist, on the other hand, will emphasize the mechanism as a whole, drawing attention to the temporally deterministic nature of the processes at work to block any intuition of variability, to deny the representationalist their wiggle room.

This was why Bechtel, in his presentation on the role representations played in the discovery of place and grid cells, remained fixated on the notion of ‘neural maps’: these are the components that, when conceived apart from the monstrously complicated neural mechanisms they function within, are most likely to trigger the intuition of veridical aboutness, and so seem like bits of nature possessing the extraordinary property of being true or false of the world—obvious representations.

Those bits, of course, possessed no such extraordinary properties. Certainly they recapitulate environmental information, but any aboutness they seem to possess is simply an artifact of our hardwired penchant to problem solve (or communicate our solutions) around our own pesky mechanical details.
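The mechanics at issue are easy to caricature. A standard idealization of place-cell tuning (the numbers below are invented) exhibits real, exploitable covariance with position—and nothing ‘true or false of the world’ anywhere inside it:

```python
import numpy as np

def place_cell_rate(position, centre=5.0, width=1.0, peak=20.0):
    """Gaussian tuning: firing rate covaries with the animal's position."""
    return peak * np.exp(-((position - centre) ** 2) / (2 * width ** 2))

positions = np.linspace(0.0, 10.0, 11)
print(np.round(place_cell_rate(positions), 1))
# [ 0.   0.   0.2  2.7 12.1 20.  12.1  2.7  0.2  0.   0. ]
# Systematic covariance, exploitable by downstream circuits (and by
# experimenters), without any intrinsic 'aboutness' doing the work.
```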

But if anything speaks to the difficulty we have overcoming our intuitions of veridical aboutness, it is the degree to which so-called anti-representationalists like Hutto and Myin so readily concede it elsewhere. Apparently, even radicals have a hard time denying its reality. Even Dennett, whose position often verges on Just Plain Crazy Enactive Cognition, insists that intentionality can be considered ‘real’ to the extent that intentional attributions pick out real patterns.[15] But do they? For instance, how could positing a fictive relationship, veridical aboutness, solve anything, let alone the cognitive operations of the most complicated machine known? There’s no doubt that solutions follow upon such posits regularly enough. But the posit only needs to be systematically related to the actual mechanical work of problem-solving for that to be the case. Perhaps the posit solves an altogether different problem, such as the need to communicate cognitive issues.

The problem, in other words, lies with metacognition. In addition to asking what informs our intentional attributions, we need to ask what informs our attributions of ‘intentional attribution’? Does adopting the ‘intentional stance’ serve to efficiently solve certain problems, or does it serve to efficiently communicate certain problems solved by other means—even if only to ourselves? Could it be a kind of orthogonal ‘meta-heuristic,’ a way to solve the problem of communicating solutions? Dennett’s ‘intentional stance’ possesses nowhere near the conceptual resources required to probe the problem of intentionality from angles such as these. In fact, it lacks the resources to tackle the problem in anything but the most superficial naturalistic terms. As often as Dennett claims that the intentional arises from the natural, he never actually provides any account of how.[16]

As intuitively appealing as the narrative granularity of Dennett’s ‘intentional stance’ might be, it leaves the problem of intentionality stranded at all the old philosophical border stations.[17] The approach advocated here, however, where we speak of the deployment of various subpersonal heuristics, is less intuitive, hewing to componency as it does, but to the extent that it poses the problem of intentionality in mechanical as opposed to intentional terms, it stamps the passport, and finally welcomes intentionality to the realm of natural science. The mechanical idiom, which allows us to scale up and down various ‘levels of description,’ to speak of proteins and organelles and cells and organisms and ecologies in ontologically continuous terms, is tailor-made for dealing with the complexities raised above.

Just Plain Crazy Enactive Cognition follows through on the problem of the intentional in a ruthlessly consistent manner. The story is mechanical all the way down—as we should expect, given the successes of the natural sciences. The ‘craziness,’ by its lights, is the assumption that one can pick and choose between intentional phenomena, eliminate this, yet pin the very possibility of intelligibility on that.

Consider Andy Clark’s now famous attempt (1994, 1997) to split the difference between embodied and intellectual approaches to cognition: the notion that some systems are, as he terms it, ‘representation hungry.’[18] One of the glaring difficulties faced by ‘radical enactive’ approaches turns on the commitment to direct realism. The representationalist has no problem explaining the constructed nature of perception, the fact that we regularly ‘see more than there is’: once the brain has accumulated enough onboard environmental ‘information about,’ direct sensory information is relegated to a ‘supervisory’ role. Since this also allows them to intuitively solve the ‘hard’ problem of illusion, biting the Hard Problem of Content seems more than a fair trade.

Those enactivists who eschew perceptual content not only reject ‘information about’ but all the explanatory work it seems to do. This puts them in the unenviable theoretical position of arguing that perception is direct, and that the environment, accordingly, possesses all the information required for perceptually guided behaviour. All sophisticated detection systems, neural or electronic, need to solve the Inverse Problem, the challenge of determining properties belonging to distal systems via the properties of some sensory medium. Since sensory properties are ambiguous between any number of target properties, added information is required to detect the actual property responsible. Short of the system accumulating environmental information, it becomes difficult to understand how such disambiguation could be accomplished. The dilemma becomes progressively more difficult the higher you climb the cognitive ladder. So with language, for instance, you simply see/hear simple patterns of shape/sound from which you derive everything from murderous intent to theories of cognition!
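The Inverse Problem itself can be exhibited in a few lines (the sizes, distances, and prior below are all invented): a single proximal magnitude is consistent with indefinitely many distal scenes, and only added information, accumulated somewhere in the system, can break the tie:

```python
def visual_angle(size, distance):
    """Proximal signal, small-angle approximation."""
    return size / distance

measured = 0.02   # the ambiguous sensory datum

# Indefinitely many distal (size, distance) scenes yield the same signal:
candidates = [(0.2, 10.0), (2.0, 100.0), (20.0, 1000.0)]
print([abs(visual_angle(s, d) - measured) < 1e-12 for s, d in candidates])
# [True, True, True]

# Disambiguation requires information beyond the stimulus -- here, a
# stored prior that the object is probably person-sized:
prior = {0.2: 0.05, 2.0: 0.9, 20.0: 0.05}
best = max(candidates, key=lambda sd: prior[sd[0]])
print(best)   # (2.0, 100.0): a 2-metre object at 100 metres
```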

Some forms of cognition, in other words, seem to be more representation hungry than others, with human communication appearing to be the most representation hungry of all. In all likelihood this is the primary reason Hutto and Myin opt to game naturalistic inscrutability and explanatory ineliminability the way they do, rather than argue anything truly radical.

But if this is where the theoretical opportunism of Radical Enactive Cognition stands most revealed, it is also where the theoretical resources of Just Plain Crazy Enactive Cognition—or the Blind Brain Theory—promise to totally redefine the debate as traditionally conceived. No matter how high we climb Clark’s Chain of Representational Hunger, veridical aboutness remains just as much a heuristic—and therefore just as mechanical—as before. On BBT, Clark’s Chain of Representational Hunger is actually a Chain of Mechanical Complexity: the more sophisticated the perceptually guided behaviour, the more removed from bare stimulus-response, the more sophisticated the machinery required—full stop. It’s componency all the way down. On a thoroughgoing natural enactive view—which is to say, a mechanical view—brains can be seen as devices that transform environmental risk into onboard mechanical complexity, a complexity that, given medial neglect, metacognition flattens into heuristics such as aboutness. Certainly part of that sophistication involves various recapitulations of environmental structure, numerous ‘maps,’ but only as components of larger biomechanical systems, which are themselves components of the environments they are adapted to solve. This is as much the case with ‘pinnacle cognition,’ human theoretical practice, as it is with brute stimulus and response. There’s no content to be found anywhere because, as inscrutability has shouted for so very long, there simply is no such thing outside of our metacognitively duped imaginations.

The degree that language seems to require content is simply the degree to which the mechanical complexities involved elude metacognition—which is to say, the degree to which language has to be heuristically cognized in noncausal terms. In the absence of cognizable causal constraints, the fact that language is a biomechanical phenomenon, we cognize ‘hanging constraints,’ the ghost-systematicity of normativity. In the absence of cognizable causal componency, the fact that we are mechanically embedded in our environments, we cognize aboutness, a direct and naturalistically occult relation that somehow binds words to world. In the absence of any way to cognize these radical heuristics as such, we assume their universality and sufficiency—convince ourselves that these things are real.

On the Blind Brain Theory, or as I’ve been calling it here, Just Plain Crazy Enactive Cognition, we are natural all the way down. On this account, intentionality is simply what mechanism looks like from a particular, radically blinkered angle. There is no original intentionality, and neither is there any derived intentionality. If our brains do not ‘take as meaningful,’ then neither do we. If environmental speech cues the application of various, radically heuristic cognitive systems in our brain, then this is what we are actually doing whenever we understand any speaker.

Intentionality is a theoretical construct, the way it looks whenever we ‘descriptively encounter’ or theoretically metacognize our linguistic activity—when we take a particular, information-starved perspective on ourselves. As intentionally understood, norms, reasons, symbols, and so on are the descriptions of blind anosognosiacs, individuals convinced they can see for the simple lack of any intuition otherwise. The intuition, almost universal in philosophy, that ‘rule following’ or ‘playing the game of giving and asking for reasons’ is what we implicitly do is simply a cognitive conceit. On the contrary, what we implicitly do is mechanically participate in our environments as a component of our environments.

Now because it’s neglect that we are talking about here, which is to say, a cognitive incapacity that we cannot cognize, I appreciate how counter-intuitive—even crazy—this must all sound. What I’m basically saying is that the ancient skeptics were right: we simply don’t know what we are talking about when we turn to theoretical metacognition for answers. But where the skeptics were primarily limited to second-order observations of interpretative underdetermination, I have an empirical tale to tell, a natural explanation for that interpretative underdetermination (and a great deal besides), one close to what I think cognitive science will come to embrace in the course of time. Even if you disagree, I would wager that you do concede the skeptical challenge is a legitimate one, that there is a reason why so much philosophy can be read as a response to it. If so, then I would entreat you to regard this as a naturalized skepticism. The fact is, we have more than enough reason to grant the skeptic the legitimacy of their worry. In this respect, Just Plain Crazy Enactive Cognition provides a possible naturalistic explanation for what is already a legitimate worry.

Just consider how remarkably frail the intuitive position is despite seeming so obvious. Given that I used the term ‘legitimate’ in the preceding paragraph, the dissenter’s reflex will be to accuse me of obvious ‘incoherence,’ to claim that I am implicitly presupposing the very normativity I claim to be explaining away.

But am I? Is ‘presupposing normativity’ really what I am implicitly doing when I use terms such as ‘legitimate’? Well, how do you know? What informs this extraordinary claim to know what I ‘necessarily mean’ better than I do? Why should I trust your particular interpretation, given that everyone seems to have their own version? Why should I trust any theoretical metacognitive interpretation, for that matter, given their manifest unreliability?

I’ll wait for your answer. In the meantime, I’m sure you’ll understand if I continue assuming that whatever I happen to be implicitly doing is straightforwardly compatible with the mechanical paradigm of natural science.

For all its craziness, Just Plain Crazy Enactive Cognition is a very tough nut to crack. The picture it paints is a troubling one, to be sure. If empirically confirmed, it will amount to an overthrow of ‘noocentrism’ comparable to the overthrow of geocentrism and biocentrism in centuries previous.[19] Given our traditional understanding of ourselves, it is without a doubt an unmitigated disaster, a worst-case scenario come true. Given the quest to genuinely understand ourselves, however, it provides a means to dissolve the Master Problem, to naturalistically understand intentionality, and so a way to finally—finally!—cognize our profound continuity with nature.

In fact, the more you ponder it, the more inevitable it seems. Evolution gave us the cognition we needed, nothing more. To the degree we relied on metacognition and casual observation to inform our self-conception, the opportunistic nature of our cognitive capacities remained all but invisible, and we could think ourselves the very rule, stamped not just in the physical image of God, but in His cognitive image as well. Like God, we had no back side, nothing to render us naturally contingent. We were the motionless centre of the universe: the earth, in a very real sense, was simply enjoying our ride. The fact of our natural, evolutionarily adventitious componency escaped us because the intuition of componency requires causal information, and metacognition offered us none.

Science, in other words, was set against our bottomless metacognitive intuitions from the beginning, bound to show that our traditional understanding of our cognition, like our traditional understanding of our planet and our biology, was little more than a trick of our informatic perspective.

.

Notes

[1] I mean this in the umbrella sense of the term, which includes normative, teleological, and semantic phenomena.

[2] Of course, there are other apparent intentional properties of cognition that seem to require explanation as well, including aboutness, so-called ‘opacity,’ productivity, and systematicity.

[3] For those interested in a more detailed overview, I highly recommend Chapter 2 of Anthony Chemero’s Radical Embodied Cognitive Science.

[4] This is one reason why I far prefer Anthony Chemero’s Radical Embodied Cognitive Science (2009), which, even though it is argued in a far more desultory fashion, seems to be far more honest to the strengths and weaknesses of the recent ‘enactive turn.’

[5] One need only consider the perpetual inability of its advocates to account for illusion. In their consideration of the Müller-Lyer Illusion, for instance, Hutto and Myin argue that perceptual illusions “depend for their very existence on high-level interpretative capacities being in play” (125), that illusion is quite literally something only humans suffer because only humans possess the linguistic capacity to interpret them as such. Without the capacity to conceptualize the disjunction between what we perceive and the way the world is there are no ‘perceptual illusions.’ In other words, even though it remains a fact that you perceive two lines of equal length as possessing different lengths in the Müller-Lyer Illusion, the ‘illusion’ is just a product of your ability to judge it so. Since the representationalist is interested in the abductive warrant provided by the fact of the mistaken perception, it becomes difficult to see the relevance of the judgment. If the only way the enactivist can deal with the problem of illusion is by arguing illusions are linguistic constructs, then they have a hard row to hoe indeed!

[6] Which given the subject matter, perhaps isn’t so ‘crazy’ after all, if Eric Schwitzgebel is to be believed!

[7] Hutto and Myin have identified the proper locus of the problem, but since they ultimately want to redeem intentionality and phenomenality, their diagnosis turns on the way the ‘theoretical attitude’—or the ‘descriptive encounter’ favoured by the ‘Intellectualist’—frames the problem in terms of two distinct relata. Thus their theoretical recommendation that we resist this one particular theoretical move and focus instead on the implicit identity belonging to their theoretical account of embodied activity.

[8] See “THE Something about Mary” for a detailed consideration of this specific problem.

[9] Without, it is important to note, solving the empirical question of what consciousness is. What BBT offers, rather, is a naturalistic account of why phenomenality and intentionality baffle us so.

[10] See “The Introspective Peepshow: Consciousness and the Dreaded Unknown Unknowns” for a more thorough account.

[11] Note also the way this clears away the ontological fog of Gibson’s ‘affordances’: our dynamic componency, the ways we are caught up in the stochastic machinery of nature, is as much an ‘objective’ feature of the world as anything else.

[12] See “Cognition Obscura” for a comprehensive overview.

[13] We understand ourselves via heuristics that simply do not admit the kind of information provided by a great number of neuropathologies. Dissociations such as pain asymbolia, for example, provide dramatic evidence of how profound our neglect-driven intuition of phenomenal simplicity runs.

[14] “Investigating Neural Representations: The Tale of Place Cells,” presented at the Rotman Institute of Philosophy, Sept. 19th, 2013.

[15] See “Real Patterns.”

[16] This is perhaps nowhere more apparent than in Dennett’s critical discussion of Brandom’s Making it Explicit, “The Evolution of [a] Why.”

[17] ‘Nibbling’ is what he calls his strategy in his latest book, where we “simply postpone the worrisome question of what really has a mind, about what the proper domain of the intentional stance is” and simply explore the power of this ‘good trick’ (Intuition Pumps, 79). Since he can’t definitively answer either question, the suspicion is that he’s simply attempting to recast a theoretical failure as a methodological success.

[18] See “Doing Without Representing?”

[19] In fact, it provides the resources to answer the puzzling question of why these ‘centrisms’ should constitute our default understanding in the first place.