Three Pound Brain

No bells, just whistling in the dark…


The Ontology of Ghosts

by rsbakker

In the courtyard a shadowy giant elm

Spreads ancient boughs, her ancient arms where dreams,

False dreams, the old tale goes, beneath each leaf

Cling and are numberless.

–Virgil, The Aeneid, Book VI

.

I’m always amazed, looking back, at how fucking clear things had seemed at this or that juncture of my philosophical life—how lucid. The two early conversions, stumbling into nihilism as a teenager, then climbing into Heidegger in my early twenties, seem the most ‘religious’ in retrospect. I think this is why I never failed to piss people off even back then. You have this self-promoting skin you wear when you communicate, this tactical gloss that compels you to impress. This is what non-intellectuals hear when you speak, tactics and self-promotion. This is why it’s so easy to tar intellectualism in the communal eye: insecurity and insincerity are of its essence. All value judgements are transitive in human psychology: Laugh up your sleeve at what I say, and you are laughing at me. I was an insecure, hypercritical know-it-all. You add the interpersonal trespasses of religion—intolerance, intensity, and aggressiveness—and I think it’s safe to assume I came across as an obnoxious prick.

But if I was evangelical, it was that I could feel those transformations. Each position possessed its own, distinct metacognitive attitude toward experience, a form of that I attributed to this, whatever it might be. With my adolescent nihilism, I remember obsessively pondering the way my thoughts bubbled up out of oblivion—and being stupefied. I was some kind of inexplicable kink in the real. I was so convinced I was an illusion that I would ache for being alone, grip furniture for fear of flying.

But with Heidegger, it was like stepping into a more resonant clime, into a world rebarred with meaning, with projects and cares and rules and hopes. A world of towardness, where what you are now is a manifold of happenings, a gazing into an illuminated screen, a sitting in a world bound to you via your projects, a grasping of these very words. The intentional things, the phenomena of lived life, these were the foundation, I believed, the sine qua non of empirical inquiry. Before we can ask the question of freedom and meaning we need to ask the question of what comes first.

What could be more real than lived life?

It took a long time for me to realize just how esoteric, just how parochial, my definition of ‘lived life’ was. No matter how high you scratch your charcoal cloud, the cave wall always has the final say. It’s the doctors that keep you alive; philosophers just help you fall to sleep. Everywhere I looked across Continental philosophy, I saw all these crazy-ass interpretations, variants spanning variants, revivals and exhaustions, all trying to get the handle on the intentional ontology of a ‘lived life’ that took years of specialized training to appreciate. This is how I began asking the question of the cognitive difference. And this is how I found myself back at the beginning, my inaugural, adolescent departure from the naive.

The difference being, I am no longer stupefied.

I have a new religion, one that straightens out all the kinks, and so dispels rather than saves the soul. I am no exception. I have been chosen by nobody for nothing. I am continuous with the x-dimensional totality that we call nature—continuous in every respect. I watch images from Hubble, the most distant galactic swirls, and I tell myself, I am this, and I feel grand and empty. I am the environment that chokes, the climate that reels. I am the body that the doctor attends…

And you are too.

Thus the most trivial prophecy, the prediction that you will waver, crumble, that the fluorescent light will wobble to the sound of loved ones weeping… breathing. That someone, maybe, will clutch your hand.

Such hubris, when you think about it, to assume that lived life lay at your intellectual fingertips—the thing most easily grasped! For someone who has spent their life reading philosophy this stands tall among the greater insults: the knowledge that we have been duped all along, that all those profundities, that resonant world I found such joy and rancour pondering, were little more than the artifact of machines taking their shadows for reflections, the cave wall for a looking glass.

I am the residue of survival—living life. I am an astronomically complicated system, a multifarious component of superordinate systems that cannot cognize itself as such for being such. I am a serial gloss, a transmission from nowhere into nowhere, a pattern plucked from subpersonal pandemonium and broadcast to the neural horde. I am a message that I cannot conceive. As. Are. You.

I can show you pictures of dead people to prove it. Lives lived out.

The first-person is a selective precis of this totality, one that poses as the totality. And this is the trick, the way to unravel the kink and see how it is that Heidegger could confuse his semantic vision with seeing. The oblivion behind my thoughts is the oblivion of neglect. Because oblivion has no time, I have no time, and so watch amazed as my shining hands turn to leather. I breathe deep and think, Now. Because oblivion constrains nothing, I follow rules of my own will, pursue goals of my own desire. I stretch forth my hand and remake what lies before me. Because oblivion distinguishes nothing, I am one. I raise my voice and declare, Me. Because oblivion reveals nothing, I stand opposite the world, always only aimed, never connected. I squint and I squint and I ask, How do I know?

I am bottomless because my foundation was never mine to see. I am a perspective, an agent, a person, just another dude-with-a-bad-attitude—I am all these things because of the way I am not any of these things. I am not what I am because of what I am—again, the same as you.

A ghost can be defined as a fragment cognized as a whole. In some cultures ghosts have no backs, no faces, no feet. In almost all cultures they have no substance, no consistency, temporal or otherwise. The dimensions of lived life have been stripped from them; they are shades, animate shadows. As Virgil says of Aeneas attempting to embrace his father, Anchises, in the Underworld:

 Then thrice around his neck his arms he threw;

And thrice the flitting shadow slipp’d away,

Like winds, or empty dreams that fly the day.

Ghosts are the incorporeal remainder, the something shorn of substance and consistency. This is the lived life of Heidegger, an empty dream that flew the day. Insofar as Dasein lacks meat, Dasein dwells with the dead, another shade in the underworld, another passing fancy. We are not ghosts. If lived life lies in the meat, then the truth of lived life lies in the meat. The truth of what we are runs orthogonal to the being that we all swear that we must be. Consciousness is an anosognosiac broker, and we are the serial sum of deals struck between parties utterly unknown. Who are the orthogonal parties? What are the deals? These are the questions that aim us at our most essential selves, at what we are in fact. These are the answers being pursued by industry.

And yet we insist on the reality of ghosts, so profound is the glamour spun by neglect. There are no orthogonal parties, we cry, and therefore no orthogonal deals. There is no orthogonal regime. Oblivion hides only oblivion. What bubbles up from oblivion, begins with me and ends with me. Thus the enduring attempt to make sense of things sideways, to rummage through the ruin of heaven and erect parallel regimes, ones too impersonal to reek of superstition. We use ghosts of reference to bind our inklings to the world, ghosts of inference to bind our inklings to one another, ghosts of quality to give ethereal substance to experience. Ghosts and more ghosts, all to save the mad, inescapable intuition that our intuitions must be real somehow. We raise them as architecture, and demur whenever anyone poses the mundane question of building material.

‘Thought’… No word short of ‘God’ has shut down more thinking.

Content is a wraith. Freedom is a vapour. Experience is a dream. The analogy is no coincidence.

The ontology of meaning is the ontology of ghosts.



Incomplete Cognition: An Eliminativist Reading of Terrence Deacon’s Incomplete Nature

by rsbakker

Incomplete Nature: How Mind Emerged from Matter

Goal seeking, willing, rule-following, knowing, desiring—these are just some of the things we do that we cannot make sense of in causal terms. We cite intentional phenomena all the time, attributing them the kind of causal efficacy we attribute to the more mundane elements of nature. The problem, as Terrence Deacon frames it, is that whenever we attempt to explain these explainers, we find nothing, only absence and perplexity.

“The inability to integrate these many species of absence-based causality into our scientific methodologies has not just seriously handicapped us, it has effectively left a vast fraction of the world orphaned from theories that are presumed to apply to everything. The very care that has been necessary to systematically exclude these sorts of explanations from undermining our causal analyses of physical, chemical, and biological phenomena has also stymied our efforts to penetrate beyond the descriptive surface of the phenomena of life and mind. Indeed, what might be described as the two most challenging scientific mysteries of the age—both are held hostage by this presumed incompatibility.” Incomplete Nature, 12

The question, of course, is whether this incompatibility is the product of our cognitive constitution or the product of some as yet undiscovered twist in nature. Deacon argues the latter. Incomplete Nature is a magisterial attempt to complete nature, to literally rewrite physics in a way that seems to make room for goal seeking, willing, rule-following, knowing, desiring, and so on—in other words, to provide a naturalistic way to make sense of absences that cause. He wants to show how all these things are real.

My own project argues the former, that the notion of ‘absences that cause’ is actually an artifact of neglect. ‘We’ are an astronomically complicated subsystem embedded in the astronomically complicated supersystem that we call ‘nature,’ in such a way that we cannot intuitively cognize ourselves as natural.

The Blind Brain Theory claims to provide the world’s first genuine naturalization of intentionality—a parsimonious, comprehensive way to explain centuries of confusion away. What Intentionalists like Deacon think they are describing are actually twists on a family of metacognitive illusions. Crudely put, since no cognitive capacity could pluck ‘accuracy’ of any kind from the supercomplicated muck of the brain, our metacognitive system confabulates. It’s not that some (yet to be empirically determined) systematicity isn’t there: it’s that the functions discharged via our conscious access to that systematicity are compressed, formatted, and truncated. Metacognition neglects these confounds, and we begin making theoretical inferences assuming the sufficiency of compressed, formatted, and truncated information. Among other things, BBT actually predicts a discursive field clustered about families of metacognitive intuitions, but otherwise chronically incapable of resolving among their claims. When an Intentionalist gives you an account of the ‘game of giving and asking for reasons,’ say, you need only ask them why anyone should subscribe to an ontologization (whether virtual, quasi-transcendental, transcendental, or otherwise) on the basis of almost certainly unreliable metacognitive hunches.
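The compression claim can be made concrete with a deliberately toy sketch (everything here is my own illustrative construction, not anything from the cognitive science literature): a consumer process receives only a lossy, truncated summary of a high-dimensional state, and because nothing in the summary flags what was discarded, the consumer treats the summary as the whole.

```python
# Hypothetical toy illustration: a "metacognitive" consumer receives only a
# compressed, truncated summary of a high-dimensional "neural" state. No
# marker indicates what was discarded.

state = list(range(100))        # stand-in for a high-dimensional process

def compress(state, k=3):
    """Keep only the first k values: lossy and truncated, with no loss marker."""
    return state[:k]

summary = compress(state)

# Nothing in `summary` signals its own partiality, so a consumer assuming
# sufficiency infers the extent of the process from what it can see.
assumed_total = len(summary)
print(assumed_total, len(state))    # 3 vs 100: the discarded remainder is neglected
```

The point of the sketch is only structural: the summary carries no information about its own insufficiency, which is the analogue of metacognition neglecting its confounds.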

The key conceptual distinction in BBT is that between what I’ve been calling ‘lateral sensitivity’ and ‘medial neglect.’ Lateral sensitivity refers to the brain’s capacity to be ‘imprinted’ by other systems, to be ‘pushed’ in ways that allow it to push back. Since behavioural intervention, or ‘pushing back,’ requires some kind of systematic relation to the system or systems to be pushed, lateral sensitivity requires being pushed by the right things in the right way. Thus the Inverse Problem and the Bayesian nature of the human brain. The Inverse Problem pertains to the difficulty of inferring the structure/dynamics of some distal system (an avalanche or a wolf, say) via the structure/dynamics of some proximal system (ambient sound or light, say) that reliably co-varies with that distal system. The difficulty is typically described in terms of ambiguity: since any number of distal systems could cause the structure/dynamics of the proximal system, the brain needs some way of allowing the actual distal system to push through the proximal system, if it is to have any hope of pushing back. Unless it becomes a reliable component of its environment, it cannot reliably make components of its environments. This is an important image to keep in mind: that of the larger brain-environment system, the way the brain is adapted to be pushed, or transformed into a component of larger environmental mechanisms, so as to push back, to ‘componentialize’ environmental mechanisms. Quite simply, we have evolved to be tyrannized by our environment in a manner that enables us to tyrannize our environment.
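The ambiguity at the heart of the Inverse Problem, and its Bayesian resolution, can be sketched in a few lines (a toy model with invented numbers, not anything specified above): two distal causes produce the very same proximal signal, so an observer can only disambiguate by weighing likelihoods against prior base rates.

```python
# Hypothetical toy model (numbers invented for illustration): two distal
# causes ("wolf", "wind") can each produce the same proximal signal
# ("rustle"), so the signal alone is ambiguous.

priors = {"wolf": 0.1, "wind": 0.9}        # assumed base rates of each cause
likelihood = {"wolf": 0.8, "wind": 0.3}    # assumed P(rustle | cause)

def posterior(likelihood, priors):
    """Bayes' rule: P(cause | signal) is proportional to P(signal | cause) * P(cause)."""
    unnorm = {c: likelihood[c] * p for c, p in priors.items()}
    z = sum(unnorm.values())               # normalizing constant
    return {c: v / z for c, v in unnorm.items()}

post = posterior(likelihood, priors)
print(post)    # 'wind' dominates despite the identical proximal signal
```

The distal system ‘pushes through’ the proximal signal only statistically: change the priors and the same rustle resolves to a different world.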

Lateral sensitivity refers to this ‘tyranny enabling tyranny,’ the brain’s ability to systematically covary with its environment in behaviourally advantageous ways. A system that solves the Inverse Problem possesses a high degree of reliable covariational complexity. As it turns out, the mechanical complexity required to do this is nothing short of mind-boggling. And as we shall see, this fact possesses some rather enormous consequences. Up to this point, I’ve really only provided an alternate description of the sensorimotor loop; the theoretical dividends begin piling up once we consider lateral sensitivity in concert with medial neglect.

The machinery of lateral sensitivity is so complicated that it handily transcends its own ‘sensitivity threshold.’ This means the brain possesses a profound insensitivity to itself. This might sound daffy, given that the brain simply is a supercomplicated network of mutual sensitivities, but this is actually where the nub of cognition as a distinct biological process is laid bare. Unlike the dedicated sensitivity that underwrites mechanism generally, the sensitivity at issue here involves what might be called the systematic covariation for behaviour. Any process that systematically covaries for behaviour is a properly cognitive process. So the above could be amended to, ‘the brain possesses a profound cognitive insensitivity to itself.’ Medial neglect is this profound cognitive insensitivity.

The advantage of cognition is behaviour, the push-back. The efficacy of this behavioural push-back depends on the sensory push, which is to say, lateral sensitivity. Innumerable behavioural problems, it turns out, require that we be pushed by our pushing back: that our future behaviour (push-back) be informed (pushed) by our ongoing behaviour (pushing-back). Behavioural efficacy is a function of behavioural versatility, which is a function of lateral sensitivity, which is to say, the capacity to systematically covary with the environment. Medial neglect, therefore, constitutes a critical limit on behavioural efficacy: those ‘problem ecologies’ requiring sensitivity to the neurobiological apparatus of cognition lie outside the capacity of the system to solve effectively. We are, quite literally, the ‘elephant in the room,’ a supercomplicated mechanism sensitive to most everything relevant to problem-solving in its environment except itself.

Mechanical allo-sensitivity entails mechanical auto-insensitivity, or auto-neglect. A crucial consequence of this is that efficacious systematic covariation requires unidirectional interaction, or that sensing be ‘passive.’ The degree to which the mechanical activity of tracking actually impacts the system to be tracked is the degree to which that system cannot be reliably tracked. Anticipation via systematic covariation is impossible if the mechanics of the anticipatory system impinge on the mechanics of the system to be anticipated. The insensitivity of the anticipatory system to its own activity, or medial neglect, perforce means insensitivity to systems directly mechanically entangled in that activity. Only ‘passive entanglement’ will do. This explains why so-called ‘observer effects’ confound our ability to predict the behaviour of other systems.
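The passivity requirement can be illustrated with a toy simulation (my own construction, with invented numbers): a tracker predicts a simple incrementing process, but in the ‘entangled’ condition each act of reading the state also disturbs it, and, being insensitive to its own activity, the tracker’s model never accounts for the disturbance.

```python
# Hypothetical toy simulation: predicting a process that increments by 1
# per step. A "passive" tracker only reads the state; an "entangled"
# tracker perturbs the state each time it reads it. The tracker's model is
# blind to its own perturbation (the analogue of medial neglect).

def run(perturb):
    x, error = 0.0, 0.0
    for _ in range(100):
        predicted = x + 1.0    # model: the process increments by 1
        x += 1.0               # the process itself
        if perturb:
            x += 0.5           # reading the state disturbs it
        error += abs(predicted - x)
    return error

passive_error = run(perturb=False)
entangled_error = run(perturb=True)
print(passive_error, entangled_error)    # 0.0 versus an ever-accumulating error
```

Under passive entanglement prediction is exact; once tracking impinges on the tracked system, the covariation the tracker relies on is corrupted by precisely the activity it cannot see.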

So the stage is set. The brain quite simply cannot cognize itself (or other brains) in the same high-dimensional way it cognizes its environments. (It would be hard to imagine any evolved metacognitive capacity that could achieve such a thing, in fact). It is simply too complex and too entangled. As a result, low-dimensional, special purpose heuristics—fast and frugal kluges—are its only recourse.

The big question I keep asking is, How could it be any other way? Given the problems of complexity and complicity, given the radical nature of the cognitive bottleneck—just how little information is available for conscious, serial processing—how could any evolved metacognitive capacity whatsoever come close to apprehending the functional truth of anything ‘inner’? If you are an Intentionalist, say, you need to explain how the phenomena you’re convinced you intuit are free of perspectival illusions, or conversely, how your metacognitive faculties have overcome the problems posed by complexity and complicity.

On BBT, the brain possesses at least two profoundly different covariational regimes, one integrated, problem-general, and high-dimensional, mediating our engagement in the natural world, the other fractious, problem-specific and low-dimensional, mediating our engagements with ourselves and others (who are also complex and complicit), and thereby our engagement in the natural world. The twist lies in medial neglect, the fact that the latter fractious, problem-specific, and low-dimensional covariational regime is utterly insensitive to its fractious, problem-specific, and low-dimensional nature. Human metacognition is almost entirely blind to the structure of human cognition. This is why we require cognitive science: reflection on our cognitive capacities tells us little or nothing about those capacities, reflection included. Since we have no way of intuiting the insufficiency of these intuitions, we assume they’re sufficient.

We are now in a position to clearly delineate Deacon’s ‘fraction,’ what makes it vast, and why it has been perennially orphaned. Historically, natural science has been concerned with the ‘lateral problem-ecologies,’ with explicating the structure and dynamics of relatively simple systems possessing functional independence. Any problem ecology requiring the mechanistic solution of brains lay outside its purview. Only recently has it developed the capacity to tackle ‘medial problem-ecologies,’ the structure and dynamics of astronomically complex systems possessing no real functional independence. For the first time humanity finds itself confronted with integrated, high-dimensional explications of what it is. The ruckus, of course, is all about how to square these explications with our medial traditions and intuitions. All the so-called ‘hard problems’ turn on our apparent inability to naturalistically find, let alone explain, the phenomena corresponding to our intuitive, metacognitive understanding of the medial.

Why do our integrated, high-dimensional, explications of the medial congenitally ‘leave out’ the phenomena belonging to the medial-as-metacognized? Because metacognitive phenomena like goal seeking, willing, rule-following, knowing, desiring only ‘exist,’ insofar as they exist at all, in specialized problem-solving contexts. ‘Goal seeking’ is something we all do all the time. A friend has an untoward reaction to a comment of ours, so we ask ourselves, in good conscience, ‘What was I after?’ and the process of trying to determine our goal given whatever information we happen to have begins. Despite complexity and complicity, this problem is entirely soluble because we have evolved the heuristic machinery required: we can come to realize that our overture was actually meant to belittle. Likewise, the philosopher asks, ‘What is goal-seeking?’ and the process of trying to determine the nature of goal-seeking given whatever information he happens to have begins. But the problem proves insoluble, not surprisingly, given that the philosopher almost certainly lacks the requisite heuristic machinery. The capacity to solve for goal-seeking qua goal-seeking is just not something our ancestors evolved.

Deacon’s entire problematic turns on the equivocation of the first-order and second-order uses of intentional terms, on the presumption that the ‘goal-seeking’ we metacognize simply has to be the ‘goal-seeking’ referenced in first-order contexts—on the presumption, in other words, of metacognitive adequacy, which is to say something we now know to be false as a matter of empirical fact. For all its grand sweep, for all its lucid recapitulation and provocative conjecture, Incomplete Nature is itself shockingly incomplete. Nowhere does he consider the possibility that the only ‘goal-seeking phenomenon’ missing, the only absence to be explained, is this latter, philosophical goal-seeking.

At no point in the work does he reference, let alone account for, the role metacognition or introspection plays in our attempt to grapple with the incompatibility of natural and intentional phenomena. He simply declares “the obvious inversion of causal logic that distinguishes them” (139), without genuinely considering where that ‘inversion’ occurs. Because this just is the nub of the issue between the emergentist and the eliminativist: whether his ‘obvious inversion’ belongs to the systems observed or to the systems observing. As Deacon writes:

“There is no use denying there is a fundamental causal difference between these domains that must be bridged in any comprehensive theory of causality. The challenge of explaining why such a seeming reversal takes place, and exactly how it does so, must ultimately be faced. At some point in this hierarchy, the causal dynamics of teleological processes do indeed emerge from simpler blind mechanistic dynamics, but we are merely restating this bald fact unless we can identify exactly how this causal about-face is accomplished. We need to stop trying to eliminate homunculi, and to face up to the challenge of constructing teleological properties—information, function, aboutness, end-directedness, self, even conscious experience—from unambiguously non-teleological starting points.” 140

But why do we need to stop ‘trying to eliminate’ homunculi? We know that philosophical reflection on the nature of cognition is woefully unreliable. We know that intentional concepts and phenomena are the stock-in-trade of philosophical reflection. We know that scientific inquiry generally delegitimizes our prescientific discourses. So why shouldn’t we assume that the matter of intentionality amounts to more of the same?

Deacon never says. He acknowledges “there cannot be a literal ends-causing-the-means process involved” (109) when it comes to intentional phenomena. As he writes:

“Of course, time is neither stopped nor running backwards in any of these processes. Thermodynamic processes are proceeding uninterrupted. Future possible states are not directly causing present events to occur.” 109-110

He acknowledges, in other words, that this ‘inversion of causality’ is apparent only. He acknowledges, in other words, that metacognition is getting things wrong, just not entirely. So what recommends his project of ontologically meeting this appearance halfway over the project of doing away with it altogether? The project of rewriting nature, after all, is far more extravagant than the project of theorizing metacognitive shortcomings.

Deacon’s failure to account for observation-dependent interpretations of intentionality is more than suspiciously convenient, it actually renders the whole of Incomplete Nature an exercise in begging the question. He spends a tremendous amount of time and no little ingenuity in describing the way ‘teleodynamic systems,’ as the result of increasingly recursive complexity, emerge from ‘morphodynamic systems’ which in turn emerge from standard thermodynamic systems. Where thermodynamic systems exhibit straightforward entropy, morphodynamic systems, such as crystal formation, exhibit the tendency to become more ordered. Building on morphodynamics, teleodynamic systems then exhibit the kinds of properties we take to be intentional. A point of pride for Deacon is the way his elaborations turn, as he mentions in the extended passage quoted above, on ‘unambiguously non-teleological starting points.’

He sums up this patient process of layering causal complexities with the postulation of what he calls an autogen, “a form of self-generating, self-repairing, self-replicating system that is constituted by reciprocal morphodynamic processes” (547-8), and arguably his most ingenious innovation. He then moves to conclude:

“So even these simple molecular systems have crossed a threshold in which we can say that a very basic form of value has emerged, because we can describe each of the component autogenic processes as there for the sake of autogen integrity, or for the maintenance of that particular form of autogenicity. Likewise, we can describe different features of the surrounding molecular environment as ‘beneficial’ or ‘harmful’ in the same sense that we would apply these assessments to microorganisms. More important, these are not merely glosses provided by a human observer, but intrinsic and functionally relevant features of the consequence-organized nature of the autogen itself.” 322

And the reader is once again left with the question of why. We know that the brain possesses suites of heuristic problem solvers geared to economize by exploiting various features of the environment. The obvious question becomes: How is it that any of the processes he describes do anything more than schematize the kinds of features that trigger the brain to swap out its causal cognitive systems for its intentional cognitive systems?

Time and again, one finds Deacon explicitly acknowledging the importance of the observer, and time and again one finds him dismissing that importance without a lick of argumentation—the argumentation his entire account hangs on. One can even grant him his morphodynamic and teleodynamic ‘phase transitions’ and still plausibly insist that all he’s managed to provide is a detailed description of the kinds of complex mechanical processes prone to trigger our intentional heuristics. After all, if it is the case that the future does not cause the past, then ‘end directedness,’ the ‘obvious inversion of causality,’ actually isn’t an inversion at all. The fact is Deacon’s own account of constraints and the role they play in morphodynamics and teleodynamics is entirely amenable to mechanical understanding. He continually relies on disposition talk. Even his metaphors, like the ‘negentropic ratchet’ (317), tend to be mechanical. The autogen is quite clearly a machine, one that automatically expresses the constraints that make it possible. The fact that these component constraints result in a system that behaves in ways far different than mundane thermodynamic systems speaks to nothing more extraordinary than mechanical emergence, the fact that whole mechanisms do things that their components could not (See Craver, 2007, pp. 211-17 for a consideration of the distinction between mechanical and spooky emergence). Likewise, for all the ink he spills regarding the holistic nature of teleodynamic systems, he does an excellent job explaining them in terms of their contributing components!

In the end, all Deacon really has is an analogy between the ‘intentional absence,’ our empirical inability to find intentional phenomena, and the kind of absence he attributes to constraints. Since systematicity of any kind requires constraints, defining constraints, as Deacon does, in terms of what cannot happen—in terms of what is absent—provides him the rhetorical license he needs to speak of ‘absential causes’ at pretty much any juncture. Since he has already defined intentional phenomena as ‘absential causes,’ it becomes a very easy thing indeed to lead the reader over the ‘epistemic cut’ and claim that he has discovered the basis of the intentional as it exists in nature, as opposed to an interpretation of those systems inclined to trigger intentional cognition in the human brain. Constraints can be understood in absential terms. Intentional phenomena can only be understood in absential terms. Since the reader, thanks to medial neglect, has no inkling whatsoever of the fractionate and specialized nature of intentional cognition, all Deacon needs to do is comb their existing intuitions in his direction. Constraints are objective, therefore intentionality is objective.

Not surprisingly, Deacon falls far short of ‘naturalizing intentionality.’ Ultimately, he provides something very similar to what Evan Thompson delivers in his equally impressive (and unconvincing) Mind in Life: a more complicated, attenuated picture of nature that seems marginally less antithetical to intentionality. Where Thompson’s “aim is not to close the explanatory gap in a reductive sense, but rather to enlarge and enrich the philosophical and scientific resources we have for addressing the gap” (x), Deacon’s is to “demonstrate how a form of causality dependent on specifically absent features and unrealized potentials can be compatible with our best science” (16), the idea being that such an absential understanding will pave the way for some kind of thoroughgoing naturalization of intentionality—as metacognized—in the future.

But such a naturalization can only happen if our theoretical metacognitive intuitions regarding intentionality get intentionality right in general, as opposed to right enough for this or that. And our metacognitive intuitions regarding intentionality can only get intentionality right in general if our brain has somehow evolved the capacity to overcome medial neglect. And the possibility of this, given the problems of complexity and complicity, seems very hard to fathom.

The fact is BBT provides a very plausible and parsimonious observer-dependent explanation for why metacognition attributes so many peculiar properties to the medial processes. The human brain, as the frame of cognition, simply cannot cognize itself the way it does other systems. It is, as a matter of empirical necessity, not simply blind to its own mechanics, but blind to this blindness. It suffers medial neglect. Unable to access and cognize its origins, and unable to cognize this inability, it assumes that it accesses all there is to access—it confuses itself for something bottomless, an impossible exception to physics.

So when Deacon writes:

“These phenomena not only appear to arise without antecedents, they appear to be defined with respect to something nonexistent. It seems that we must explain the uncaused appearance of phenomena whose causal powers derive from something nonexistent! It should be no surprise that this most familiar and commonplace feature of our existence poses a conundrum for science.” 39

we need to take the truly holistic view that Deacon himself consistently fails to take. We need to see this very real problem in terms of one set of natural systems—namely, us—engaging the set of all natural systems, as a kind of linkage between being pushed and pushing back.

On BBT, Deacon’s ‘obvious inversion of causality’ is merely an illusory artifact of constraints pertaining to the human brain’s ability to cognize itself the way it cognizes its environments. They appear causally inverted simply because no information pertaining to their causal provenance is available to deliberative metacognition. Rules constrain us in some mysterious, orthogonal way. Goals somehow constrain us from the future. Will somehow constrains itself! Desires, like knowledge, are somehow constrained by their objects, even when they are nowhere to be seen. These apparently causally inverted phenomena vanish whenever we search for their origins because they quite simply do not exist in the high-dimensional way things in our environments exist. They baffle scientific reason because the actual neuromechanical heuristics employed are adapted to solve problems in the absence of detailed causal information, and because conscious metacognition, blind to the rank insufficiency of the information available for deliberative problem-solving, assumes that it possesses all the information it needs. Philosophical reflection is a cultural achievement, after all, an exaption of existing, more specialized cognitive resources; it seems quite implausible to assume the brain would possess the capacity to vet the relative sufficiency of information utilized in ways possessing no evolutionary provenance.

We are causally embedded in our environments in such a way that we cannot intuit ourselves as so embedded, and so intuit ourselves otherwise, as goal seeking, willing, rule-following, knowing, desiring, and so on—in ways that systematically neglect the actual, causal relations involved. Is it really just a coincidence that all these phenomena just happen to belong to the ‘medial,’ which is to say, the machinery responsible for cognition? Is it really just a coincidence that all these phenomena exhibit a profound incompatibility with causal explanation? Is it really just a coincidence that all our second-order interpretations of these terms are chronically underdetermined (a common indicator of insufficient information), even though they function quite well when used in everyday, first-order, interpersonal contexts?

Not at all. As I’ve attempted to show in a variety of ways over the past couple of years, a great number of traditional conundrums can be resolved via BBT. All the old problems fall away once we realize that the medial—or ‘first person’—is simply what the third person looks like absent the capacity to laterally solve the third person. The time has come to leave them behind and begin the hard work of discovering what new conundrums await.

Interstellar Dualists and X-phi Alien Superfreaks

by rsbakker

I came up with this little alien thought experiment to illustrate a cornerstone of the Blind Brain Theory: the way systems can mistake information deficits for positive ontological properties, using a species I call the Walleyes (pronounced ‘Wally’s’):

Walleyes possess two very different visual systems, the one high-dimensional, adapted to tracking motion and resolving innumerable details, the other myopic in the extreme, adapted to resolving blurry gestalts at best, blobs of shape and colour. Both are exquisitely adapted to solve their respective problem-ecologies, however; those ecologies just happen to be radically divergent. The Walleyes, it turns out, inhabit the twilight line of a world that forever keeps one face turned to its sun. They grow in a linear row that tracks the same longitude around the entire planet, at least wherever there’s land. The high-capacity eye is the eye possessing dayvision, adapted to take down mobile predators using poisonous darts. The low-capacity eye is the eye possessing nightvision, adapted to send tendrils out to feed on organic debris. The Walleyes, in fact, have nearly a 360-degree view of their environment: only the margin of each defeats them.

The problem, however, is that Walleyes, like anemones, are a kind of animal that is rooted in place. Save for the odd storm, which blows the ‘head’ about from time to time, there is very little overlap in their respective visual fields, even though each engages (two very different halves of) the same environment. What’s more, the nightvision eye, despite its manifest myopia, continually signals that it possesses a greater degree of fidelity than the first.

Now imagine an advanced alien species introduces a virus that rewires Walleyes for discursive, conscious experience. Since their low-dimensional nightvision system insists (by default) that it sees everything there is to be seen, and their high-dimensional system, always suspicious of camouflaged predators, regularly signals estimates of reliability, the Walleyes have no reason to think heuristic neglect is a problem. Nothing signals the possibility that the problem might be perspectival (related to issues of information access and problem solving capacity), so the metacognitive default of the Walleyes is to construe themselves as special beings that dwell on the interstice of two very different worlds. They become natural dualists…

The same way we seem to be.

Perhaps some X-phi super-aliens are snickering as they read this!

The Missing Half of the Global Neuronal Workspace: A Commentary on Stanislas Dehaene’s Consciousness and the Brain

by rsbakker

Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts

.

Introduction

Stanislas Dehaene, to my mind at least, is the premier consciousness researcher on the planet, one of those rare scientists who seems equally at home in the theoretical aether (like we are here) and in the laboratory (where he is there). His latest book, Consciousness and the Brain provides an excellent, and at times brilliant, overview of the state of contemporary consciousness research. Consciousness research has come a long way in the past two decades, and Dehaene deserves credit for much of the yardage gained.

I’ve been anticipating Consciousness and the Brain for quite some time, especially since I bumped across “The Eternal Silence of the Neuronal Spaces,” Dehaene’s review of Christof Koch’s Consciousness: Confessions of a Romantic Reductionist, where he concludes with a confession of his own: “Can neuroscience be reconciled with living a happy, meaningful, moral, and yet nondelusional life? I will confess that this question also occasionally keeps me lying awake at night.” Since the implications of the neuroscientific revolution, the prospects of having a technically actionable blueprint of the human soul, often keep my mind churning into the wee hours, I was hoping that I might see a more measured, less sanguine Dehaene in this book, one less inclined to soft-sell the troubling implications of neuroscientific research.

And in that one regard, I was disappointed. Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts is written for a broad audience, so in a certain sense one can understand the authorial instinct to make things easy for the reader, but rendering a subject matter more amenable to lay understanding is quite a different thing than rendering it more amenable to lay sensibilities. Dehaene, I think, caters far too much to the very preconceptions his science is in the process of dismantling. As a result, the book, for all its organizational finesse, all its elegant formulations, and economical summaries of various angles of research, finds itself haunted by a jagged shadow, the intimation that things simply are not as they seem. A contradiction—of expressive modes if not factual claims.

Perhaps the most stark example of this contradiction comes at the very conclusion of the book, where Dehaene finally turns to consider some of the philosophical problems raised by his project. Adopting a quasi-Dennettian argument (from Freedom Evolves) that the only ‘free will’ that matters is the free will we actually happen to have (namely, one compatible with physics and biology), he writes:

“Our belief in free will expresses the idea that, under the right circumstances, we have the ability to guide our decisions by our higher-level thoughts, beliefs, values, and past experiences, and to exert control over our undesired lower-level impulses. Whenever we make an autonomous decision, we exercise our free will by considering all the available options, pondering them, and choosing the one that we favor. Some degree of chance may enter in a voluntary choice, but this is not an essential feature. Most of the time our willful acts are anything but random: they consist in a careful review of our options, followed by the deliberate selection of the one we favor.” 264

And yet, in his penultimate line no less, he writes, “[a]s you close this book to ponder your own existence, ignited assemblies of neurons literally make up your mind” (266). At this point, the perceptive reader might be forgiven for asking, ‘What happened to me pondering, me choosing the interpretation I favour, me making up my mind?’ The easy answer, of course, is that ‘ignited assemblies of neurons’ are the reader, such that whatever they ‘make,’ the reader ‘makes’ as well. The problem, however, is that the reader has just spent hours reading hundreds of pages detailing all the ways neurons act entirely outside his knowledge. If ignited assemblies of neurons are somehow what he is, then he has no inkling what he is—or what it is he is supposedly doing.

As we shall see, this pattern of alternating expressive modes, swapping between the personal and the impersonal registers to describe various brain activities, occurs throughout Consciousness and the Brain. As I mentioned above, I’m sure this has much to do with Dehaene’s resolution to write a reader-friendly book, and so to market the Global Neuronal Workspace Theory (GNWT) to the broader public. I’ve read enough of Dehaene’s articles to recognize the nondescript, clinical tone that animates the impersonally expressed passages, and so to see those passages expressed in more personal idioms as self-conscious attempts on his part to make the material more accessible. But as the free will quote above makes plain, there’s a sense in which Dehaene, despite his odd sleepless night, remains committed to the fundamental compatibility of the personal and the impersonal idioms. He thinks neuroscience can be reconciled with a meaningful and nondelusional life. In what follows I intend to show why, on the basis of his own theory, he’s mistaken. He’s mistaken because, when all is said and done, Dehaene possesses only half of what could count as a complete theory of consciousness—the most important half to be sure, but half all the same. Despite all the detailed explanations of consciousness he gives in the book, he actually has no account whatsoever of what we seem to take consciousness to be–namely, ourselves.

For that account, Stanislas Dehaene needs to look closely at the implicature of his Global Neuronal Workspace Theory—its long theoretical shadow, if you will—because there, I think, he will find my own Blind Brain Theory (BBT), and with it the theoretical resources to show how the consciousness revealed in his laboratory can be reconciled with the consciousness revealed in us. This, then, will be my primary contention: that Dehaene’s Global Neuronal Workspace Theory directly implies the Blind Brain Theory, and that the two theories, taken together, offer a truly comprehensive account of consciousness…

The one that keeps me lying awake at night.

.

Function Dysfunction

Let’s look at a second example. After drawing up an inventory of various, often intuition-defying, unconscious feats, Dehaene cautions the reader against drawing too pessimistic a conclusion regarding consciousness—what he calls the ‘zombie theory’ of consciousness. If unconscious processes, he asks, can plan, attend, sum, mean, read, recognize, value and so on, just what is consciousness good for? The threat of these findings, as he sees it, is that they seem to suggest that consciousness is merely epiphenomenal, a kind of kaleidoscopic side-effect to the more important, unconscious business of calculating brute possibilities. As he writes:

“The popular Danish science writer Tor Norretranders coined the term ‘user illusion’ to refer to our feeling of being in control, which may well be fallacious; every one of our decisions, he believes, stems from unconscious sources. Many other psychologists agree: consciousness is the proverbial backseat driver, a useless observer of actions that lie forever beyond its control.” 91

Dehaene disagrees, claiming that his account belongs to “what philosophers call the ‘functionalist’ view of consciousness” (91). He uses this passing criticism as a segue for his subsequent, fascinating account of the numerous functions discharged by consciousness—what makes consciousness a key evolutionary adaptation. The problem with this criticism is that it simply does not apply. Norretranders, for instance, nowhere espouses epiphenomenalism—at least not in The User Illusion. The same might be said of Daniel Wegner, one of the ‘many psychologists’ Dehaene references in the accompanying footnote. Far from espousing epiphenomenalism, the claim that consciousness has no function whatsoever (as, say, Susan Pockett (2004) has argued), both of these authors contend that it’s ‘our feeling of being in control’ that is illusory. So in The Illusion of Conscious Will, for instance, Wegner proposes that the feeling of willing allows us to socially own our actions. For him, our consciousness of ‘control’ has a very determinate function, just one that contradicts our metacognitive intuition of that functionality.

Dehaene is simply in error here. He is confusing the denial of intuitions of conscious efficacy with a denial of conscious efficacy. He has simply run afoul of the distinction between consciousness as it is and consciousness as it appears to us—the distinction between consciousness as impersonally and personally construed. Note the way he actually slips between idioms in the passage quoted above, at first referencing ‘our feeling of being in control’ and then referencing ‘its control.’ Now one might think this distinction between these two very different perspectives on consciousness would be easy to police, but such is not the case (See Bennett and Hacker, 2003). Unfortunately, Dehaene is far from alone when it comes to running afoul of this dichotomy.

For some time now, I’ve been arguing for what I’ve been calling a Dual Theory approach to the problem of consciousness. On the one hand, we need a theoretical apparatus that will allow us to discover what consciousness is as another natural phenomenon in the natural world. On the other hand, we need a theoretical apparatus that will allow us to explain (in a manner that makes empirically testable predictions) why consciousness appears the way that it does, namely, as something that simply cannot be another natural phenomenon in the natural world. Dehaene is in the business of providing the first kind of theory: a theory of what consciousness actually is. I’ve made a hobby of providing the second kind of theory: a theory of why consciousness appears to possess the baffling form that it does.

Few terms in the conceptual lexicon are quite so overdetermined as ‘consciousness.’ This is precisely what makes Dehaene’s operationalization of ‘conscious access’ invaluable. But salient among those traditional overdeterminations is the peculiarly tenacious assumption that consciousness ‘just is’ what it appears to be. Since what it appears to be is drastically at odds with anything else in the natural world, this assumption sets the explanatory bar rather high indeed. You could say consciousness needs a Dual Theory approach for the same reason that Dualism constitutes an intuitive default (Emmons 2014). Our dualistic intuitions arguably determine the structure of the entire debate. Either consciousness really is some wild, metaphysical exception to the natural order, or consciousness represents some novel, emergent twist that has hitherto eluded science, or something about our metacognitive access to consciousness simply makes it seem that way. Since the first leg of this trilemma belongs to theology, all the interesting action has fallen into orbit around the latter two options. The reason we need an ‘Appearance Theory’ when it comes to consciousness as opposed to other natural phenomena, has to do with our inability to pin down the explananda of consciousness, an inability that almost certainly turns on the idiosyncrasy of our access to the phenomena of consciousness compared to the phenomena of the natural world more generally. This, for instance, is the moral of Michael Graziano’s (otherwise flawed) Consciousness and the Social Brain: that the primary job of the neuroscientist is to explain consciousness, not our metacognitive perspective on consciousness.

The Blind Brain Theory is just such an Appearance Theory: it provides a systematic explanation of the kinds of cognitive confounds and access bottlenecks that make consciousness appear to be ‘supra-natural.’ It holds, with Dehaene, that consciousness is functional through and through, just not in any way we can readily intuit outside empirical work like Dehaene’s. As such, it takes findings such as Wegner’s, where the function we presume on the basis of intuition (free willing) is belied by some counter-to-intuition function (behaviour ownership), as paradigmatic. Far from epiphenomenalism, BBT constitutes a kind of ‘ulterior functionalism’: it acknowledges that consciousness discharges a myriad of functions, but it denies that metacognition is in any position to cognize those functions (see “THE Something about Mary“) short of sustained empirical investigation.

Dehaene is certainly sensitive to the general outline of this problem: he devotes an entire chapter (“Consciousness Enters the Lab”) to discussing the ways he and others have overcome the notorious difficulties involved in experimentally ‘pinning consciousness down.’ And the masking and attention paradigms he has helped develop have done much to transform consciousness research into a legitimate field of scientific research. He even provides a splendid account of just how deep unconscious processing reaches into what we intuitively assume are wholly conscious exercises—an account that thoroughly identifies him as a fellow ulterior functionalist. He actually agrees with me and Norretranders and Wegner—he just doesn’t realize it quite yet.

.

The Global Neuronal Workspace

As I said, Dehaene is primarily interested in theorizing consciousness apart from how it appears. In order to show how the Blind Brain Theory actually follows from his findings, we need to consider both these findings and the theoretical apparatus that Dehaene and his colleagues use to make sense of them. We need to consider his Global Neuronal Workspace Theory of consciousness.

According to GNWT, the primary function of consciousness is to select, stabilize, solve, and broadcast information throughout the brain. As Dehaene writes:

“According to this theory, consciousness is just brain-wide information sharing. Whatever we become conscious of, we can hold it in our mind long after the corresponding stimulation has disappeared from the outside world. That’s because the brain has brought it into the workspace, which maintains it independently of the time and place at which we first perceived it. As a result, we may use it in whatever way we please. In particular, we can dispatch it to our language processors and name it; this is why the capacity to report is a key feature of a conscious state. But we can also store it in long-term memory or use it for our future plans, whatever they are. The flexible dissemination of information, I argue, is a characteristic property of a conscious state.” 165

A signature virtue of Consciousness and the Brain lies in Dehaene’s ability to blend complexity and nuance with expressive economy. But again one needs to be wary of his tendency to resort to the personal idiom, as he does in this passage, where the functional versatility provided by consciousness is explicitly conflated with agency, the freedom to dispose of information ‘in whatever way we please.’ Elsewhere he writes:

“The brain must contain a ‘router’ that allows it to flexibly broadcast information to and from its internal routines. This seems to be a major function of consciousness: to collect the information from various processors, synthesize it, and then broadcast the result–a conscious symbol–to other, arbitrarily selected processors. These processors, in turn, apply their unconscious skills to this symbol, and the entire process may repeat a number of times. The outcome is a hybrid serial-parallel machine, in which stages of massively parallel computation are interleaved with a serial stage of conscious decision making and information routing.” 105

Here we find him making essentially the same claims in less anthropomorphic or ‘reader-friendly’ terms. Despite the folksy allure of the ‘workspace’ metaphor, this image of the brain as a ‘hybrid serial-parallel machine’ is what lies at the root of GNWT. For years now, Dehaene and others have been using masking and attention experiments in concert with fMRI, EEG, and MEG to track the comparative neural history of conscious and unconscious stimuli through the brain. This has allowed them to isolate what Dehaene calls the ‘signatures of consciousness,’ the events that distinguish percepts that cross the conscious threshold from percepts that do not. A theme that Dehaene repeatedly evokes is the information asymmetric nature of conscious versus unconscious processing. Since conscious access is the only access we possess to our brain’s operations, we tend to run afoul of a version of what Daniel Kahneman (2012) calls WYSIATI, or the ‘what-you-see-is-all-there-is’ effect. Dehaene even goes so far as to state this peculiar tendency as a law: “We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (79). The fact is the nonconscious brain performs the vast, vast majority of the brain’s calculations.

The reason for this has to do with the Inverse Problem, the challenge of inferring the mechanics of some distal system, a predator or a flood, say, from the mechanics of some proximal system such as ambient light or sound. The crux of the problem lies in the ambiguity inherent to the proximal mechanism: a wild variety of distal events could explain any given retinal stimulus, for instance, and yet somehow we reliably perceive predators or floods or what have you. Dehaene writes:

“We never see the world as our retina sees it. In fact, it would be a pretty horrible sight: a highly distorted set of light and dark pixels, blown up toward the center of the retina, masked by blood vessels, with a massive hole at the location of the ‘blind spot’ where cables leave for the brain; the image would constantly blur and change as our gaze moved around. What we see, instead, is a three-dimensional scene, corrected for retinal defects, mended at the blind spot, and massively reinterpreted based on our previous experience of similar visual scenes.” 60

The brain can do this because it acts as a massively parallel Bayesian inference engine, analytically breaking down various elements of our retinal images, feeding them to specialized heuristic circuits, and cobbling together hypothesis after hypothesis.

“Below the conscious stage, myriad unconscious processors, operating in parallel, constantly strive to extract the most detailed and complete interpretation of our environment. They operate as nearly optimal statisticians who exploit the slightest perceptual hint—a faint movement, a shadow, a splotch of light—to calculate the probability that a given property holds true in the outside world.” 92
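The ‘nearly optimal statisticians’ Dehaene describes can be glossed in Bayesian terms: many distal hypotheses are compatible with the same ambiguous proximal cue, and each is weighed by prior plausibility times how well it explains the cue. The following toy sketch is my own illustration, not Dehaene’s model; the hypotheses and all the numbers are hypothetical, chosen only to show the shape of the inference.

```python
# Toy sketch of the Inverse Problem as Bayesian inference (illustrative only).
# A faint movement in the periphery could be caused by several distal events;
# the 'statistician' weighs each by prior plausibility and likelihood.

# Hypothetical priors over distal causes (how common each is in general).
priors = {"predator": 0.01, "windblown_branch": 0.30, "shadow": 0.69}

# Hypothetical likelihoods: P(faint movement observed | distal cause).
likelihoods = {"predator": 0.90, "windblown_branch": 0.40, "shadow": 0.05}

# Bayes' rule: posterior is proportional to prior * likelihood, then normalized.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

# The interpretation that would be relayed onward for possible 'ignition'.
best = max(posterior, key=posterior.get)
```

The point of the sketch is only that the winning interpretation is a compromise between what is common and what explains the data, exactly the kind of hypothesis-mongering the quoted passage attributes to the unconscious processors.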

But hypotheses are not enough. All this machinery belongs to what is called the ‘sensorimotor loop.’ The whole evolutionary point of all this processing is to produce ‘actionable intelligence,’ which is to say, to help generate and drive effective behaviour. In many cases, when the bottom-up interpretations match the top-down expectations and behaviour is routine, say, such selection need not result in consciousness of the stimuli at issue. In other cases, however, the interpretations are relayed to the nonconscious attentional systems of the brain where they are ranked according to their relevance to ongoing behaviour and selected accordingly for conscious processing. Dehaene summarizes what happens next:

“Conscious perception results from a wave of neuronal activity that tips the cortex over its ignition threshold. A conscious stimulus triggers a self-amplifying avalanche of neural activity that ultimately ignites many regions into a tangled state. During that conscious state, which starts approximately 300 milliseconds after stimulus onset, the frontal regions of the brain are being informed of sensory inputs in a bottom-up manner, but these regions also send massive projections in the converse direction, top-down, and to many distributed areas. The end result is a brain web of synchronized areas whose various facets provide us with many signatures of consciousness: distributed activation, particularly in the frontal and parietal lobes, a P3 wave, gamma-band amplification, and massive long-distance synchrony.” 140

As Dehaene is at pains to point out, the machinery of consciousness is simply too extensive to not be functional somehow. The neurophysiological differences observed between the multiple interpretations that hover in nonconscious attention and the interpretation that tips the ‘ignition threshold’ of consciousness are nothing if not dramatic. Information that was localized suddenly becomes globally accessible. Information that was transitory suddenly becomes stable. Information that was hypothetical suddenly becomes canonical. Information that was dedicated suddenly becomes fungible. Consciousness makes information spatially, temporally, and structurally available. And this, as Dehaene rightly argues, makes all the difference in the world, including the fact that “[t]he global availability of information is precisely what we subjectively experience as a conscious state” (168).

.

A Mile Wide and an Inch Thin

Consciousness is the Medieval Latin of neural processing. It makes information structurally available, both across time and across the brain. As Dehaene writes, “The capacity to synthesize information over time, space, and modalities of knowledge, and to rethink it at any time in the future, is a fundamental component of the conscious mind, one that seems likely to have been positively selected for during evolution” (101). But this evolutionary advantage comes with a number of crucial caveats, qualifications that, as we shall see, make some kind of Dual Theory approach unavoidable.

Once an interpretation commands the global workspace, it becomes available for processing via the nonconscious input of a number of different processors. Thus the metaphor of the workspace. The information can be ‘worked over,’ mined for novel opportunities, refined into something more useful, but only, as Dehaene points out numerous times, synoptically and sequentially.

Consciousness is synoptic insofar as it samples mere fractions of the information available: “An unconscious army of neurons evaluates all the possibilities,” Dehaene writes, “but consciousness receives only a stripped down report” (96). By selecting, in other words, the workspace is at once neglecting, not only all the alternate interpretations, but all the neural machinations responsible: “Paradoxically, the sampling that goes on in our conscious vision makes us forever blind to its inner complexity” (98).

And consciousness is sequential in that it can only sample one fraction at a time: “our conscious brain cannot experience two ignitions at once and lets us perceive only a single conscious ‘chunk’ at a given time,” he explains. “Whenever the prefrontal and parietal lobes are jointly engaged in processing a first stimulus, they cannot simultaneously reengage toward a second one” (125).

All this is to say that consciousness pertains to the serial portion of the ‘hybrid serial-parallel machine’ that is the human brain. Dehaene even goes so far as to analogize consciousness to a “biological Turing machine” (106), a kind of production system possessing the “capacity to implement any effective procedure” (105). He writes:

“A production system comprises a database, also called ‘working memory,’ and a vast array of if-then production rules… At each step, the system examines whether a rule matches the current state of its working memory. If multiple rules match, then they compete under the aegis of a stochastic prioritizing system. Finally, the winning rule ‘ignites’ and is allowed to change the contents of working memory before the entire process resumes. Thus this sequence of steps amounts to serial cycles of unconscious competition, conscious ignition, and broadcasting.” 105

The point of this analogy, Dehaene is quick to point out, isn’t to “revive the cliché of the brain as a classical computer” (106) so much as it is to understand the relationship between the conscious and nonconscious brain. Indeed, in subsequent experiments, Dehaene and his colleagues discovered that the nonconscious, for all its computational power, is generally incapable of making sequential inferences: “The mighty unconscious generates sophisticated hunches, but only a conscious mind can follow a rational strategy, step after step” (109). It seems something of a platitude to claim that rational deliberation requires consciousness, but to be able to provide an experimentally tested neurobiological account of why this is so is nothing short of astounding. Make no mistake: these are the kind of answers philosophy, rooting through the mire of intuition, has sought for millennia.
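The production-system cycle Dehaene describes, parallel rule-matching, stochastic prioritization, ignition of a single winner, broadcast back into working memory, can be made concrete in a few lines. This is my own minimal sketch, not code from the book; the rules, the two-digit ‘sum’ task, and all names are hypothetical, there purely to illustrate the serial cycle riding atop parallel competition.

```python
import random

# Minimal production system of the kind Dehaene analogizes to consciousness:
# rules compete in parallel, one 'ignites' under a stochastic prioritizing
# scheme, and the winner rewrites working memory before the cycle repeats.

working_memory = {"digits": [7, 5], "goal": "sum"}

# Each rule: (name, condition on working memory, action, priority weight).
rules = [
    ("add_digits",
     lambda wm: wm.get("goal") == "sum" and "digits" in wm,
     lambda wm: wm.update(result=sum(wm.pop("digits")), goal="report"),
     1.0),
    ("report",
     lambda wm: wm.get("goal") == "report",
     lambda wm: wm.update(goal="done"),
     1.0),
]

def cycle(wm, rng=random.Random(0)):
    """One serial step: parallel match -> stochastic ignition -> broadcast."""
    matching = [r for r in rules if r[1](wm)]        # unconscious competition
    if not matching:
        return False                                 # nothing left to do
    weights = [r[3] for r in matching]
    winner = rng.choices(matching, weights=weights)[0]  # stochastic ignition
    winner[2](wm)                                    # broadcast: rewrite WM
    return True

# Serial cycles of competition, ignition, and broadcasting, as in the quote.
while cycle(working_memory):
    pass
```

Note that all the real work (matching, weighing) happens ‘in parallel’ across the rule base, while only one rule at a time gets to change working memory, which is exactly the hybrid serial-parallel structure the analogy turns on.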

Dehaene, as I mentioned, is primarily interested in providing a positive account of what consciousness is apart from what we take it to be. “Putting together all the evidence inescapably leads us to a reductionist conclusion,” Dehaene writes. “All our conscious experiences, from the sound of an orchestra to the smell of burnt toast, result from a similar source: the activity of massive cerebral circuits that have reproducible neuronal signatures” (158). Though he does consider several philosophical implications of his ‘reductionist conclusions,’ he does so only in passing. He by no means dwells on them.

Given that consciousness research is a science attempting to bootstrap its way out of the miasma of philosophical speculation regarding the human soul, this reluctance is quite understandable—perhaps even laudable. The problem, however, is that philosophy and science both traffic in theory, general claims about basic things. As a result, the boundaries are constitutively muddled, typically to the detriment of the science, but sometimes to its advantage. A reluctance to speculate may keep the scientist safe, but to the extent that ‘data without theory is blind,’ it may also mean missed opportunities.

So consider Dehaene’s misplaced charge of epiphenomenalism, the way he seemed to be confusing the denial of our intuitions of conscious efficacy with the denial of conscious efficacy. The former, which I called ‘ulterior functionalism,’ entirely agrees that consciousness possesses functions; it denies only that we have reliable metacognitive access to those functions. Our only recourse, the ulterior functionalist holds, is to engage in empirical investigation. And this, I suggested, is clearly Dehaene’s own position. Consider:

“The discovery that a word or a digit can travel throughout the brain, bias our decisions, and affect our language networks, all the while remaining unseen, was an eye-opener for many cognitive scientists. We had underestimated the power of the unconscious. Our intuitions, it turned out, could not be trusted: we had no way of knowing what cognitive processes could or could not proceed without awareness. The matter was entirely empirical. We had to submit, one by one, each mental faculty to a thorough inspection of its component processes, and decide which of those faculties did or did not appeal to the conscious mind. Only careful experimentation could decide the matter…” 74

This could serve as a mission statement for ulterior functionalism. We cannot, as a matter of fact, trust any of our prescientific intuitions regarding what we are, no more than we could trust our prescientific intuitions regarding the natural world. This much seems conclusive. Then why does Dehaene find the kinds of claims advanced by Norretranders and Wegner problematic? What I want to say is that Dehaene, despite the occasional sleepless night, still believes that the account of consciousness as it is will somehow redeem the most essential aspects of consciousness as it appears, that something like a program of ‘Dennettian redefinition’ will be enough. Thus the attitude he takes toward free will. But then I encounter passages like this:

“Yet we never truly know ourselves. We remain largely ignorant of the actual unconscious determinants of our behaviour, and therefore cannot accurately predict what our behaviour will be in circumstances beyond the safety zone of our past experiences. The Greek motto ‘Know thyself,’ when applied to the minute details of our behaviour, remains an inaccessible ideal. Our ‘self’ is just a database that gets filled in through our social experiences, in the same format with which we attempt to understand other minds, and therefore it is just as likely to include glaring gaps, misunderstandings, and delusions.” 113

Claims like this, which radically contravene our intuitive, prescientific understanding of self, suggest that Dehaene simply does not know where he stands, that he alternately believes and does not believe that his work can be reconciled with our traditional understanding of ‘meaningful life.’ Perhaps this explains the pendulum swing between the personal and the impersonal idiom that characterizes this book—down to the final line, no less!

Even though this is an eminently honest frame of mind to take to this subject matter, I personally think his research cuts against even this conflicted optimism. Not surprisingly, the Global Neuronal Workspace Theory of Consciousness casts an almost preposterously long theoretical shadow; it possesses an implicature that reaches to the furthest corners of the great human endeavour to understand itself. As I hope to show, the Blind Brain Theory of the Appearance of Consciousness provides a parsimonious and powerful way to make this downstream implicature explicit.

.

From Geocentrism to ‘Noocentrism’

“Most mental operations,” Dehaene writes, “are opaque to the mind’s eye; we have no insight into the operations that allow us to recognize a face, plan a step, add two digits, or name a word” (104-5). If one pauses to consider the hundreds of experiments that he directly references, not to mention the thousands of others that indirectly inform his work, this goes without saying. We require a science of consciousness simply because we have no other way of knowing what consciousness is. The science of consciousness is literally predicated on the fact of our metacognitive incapacity (See “The Introspective Peepshow“).

Demanding that science provide a positive explanation of consciousness as we intuit it is no different than demanding that science provide a positive explanation of geocentrism—which is to say, the celestial mechanics of the earth as we once intuited it. Any fool knows that the ground does not move. If anything, the fixity of the ground is what allows us to judge movement. Certainly the possibility that the earth moved was an ancient posit, but lacking evidence to the contrary, it could be little more than philosophical fancy. Only the slow accumulation of information allowed us to reconceive the ‘motionless earth’ as an artifact of ignorance, as something that only the absence of information could render obvious. Geocentrism is the product of a perspectival illusion, plain and simple, the fact that we literally stood too close to the earth to comprehend what the earth in fact was.

We stand even closer to consciousness—so close as to be coextensive! Nonetheless, a good number of very intelligent people insist on taking (some version of) consciousness as we intuit it to be the primary explanandum of consciousness research. Given his ‘law’ (“We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (79)), Dehaene is duly skeptical. He is a scientific reductionist, after all. So with reference to David Chalmers’ ‘hard problem’ of consciousness, we find him writing:

“My opinion is that Chalmers swapped the labels: it is the ‘easy’ problem that is hard, while the hard problem just seems hard because it engages ill-defined intuitions. Once our intuition is educated by cognitive neuroscience and computer simulations, Chalmers’s hard problem will evaporate.” 262

Referencing the way modern molecular biology has overthrown vitalism, he continues:

“Likewise, the science of consciousness will keep eating away at the hard problem until it vanishes. For instance, current models of visual perception already explain not only why the human brain suffers from a variety of visual illusions but also why such illusions would appear in any rational machine confronted with the same computational problem. The science of consciousness already explains significant chunks of our subjective experience, and I see no obvious limits to this approach.” 262

I agree entirely. The intuitions underwriting the so-called ‘hard problem’ are perspectival artifacts. As in the case of geocentrism, our cognitive systems stand entirely too close to consciousness not to run afoul of a number of profound illusions. And I think Dehaene, not unlike Galileo, is using the ‘Dutch Spyglass’ afforded by masking and attention paradigms to accumulate the information required to overcome those illusions. I just think he remains, despite his intellectual scruples, a residual hostage of the selfsame intuitions he is bent on helping us overcome.

Dehaene only needs to think through the consequences of GNWT as it stands. So when he continues to discuss other ‘hail Mary’ attempts (those of Eccles and Penrose) to find some positive account of consciousness as it appears, writing that “the intuition that our mind chooses its actions ‘at will’ begs for an explanation” (263), I’m inclined to think he already possesses the resources to advance such an explanation. He just needs to look at his own findings in a different way.

Consider the synoptic and sequential nature of what Dehaene calls ‘ignition,’ the becoming conscious of some nonconscious interpretation. The synoptic nature of ignition, the fact that consciousness merely samples interpretations, means that consciousness is radically privative, that every instance of selection involves massive neglect. The sequential nature of ignition, on the other hand, the fact that the becoming conscious of any interpretation precludes the becoming conscious of another interpretation, means that each moment of consciousness is an all or nothing affair. As I hope to show, these two characteristics possess profound implications when applied to the question of human metacognitive capacity—which is to say, our capacity to intuit our own makeup.

Dehaene actually has very little to say regarding self-consciousness and metacognition in Consciousness and the Brain, aside from speculating on the enabling role played by language. Where other mammalian species clearly seem to possess metacognitive capacity, it seems restricted to the second-order estimation of the reliability of their first-order estimations. They lack “the potential infinity of concepts that a recursive language affords” (252). He provides an inventory of the anatomical differences between primates and other mammals, such as specialized ‘broadcast neurons,’ and between humans and their closest primate kin, such as the size of the dendritic trees possessed by human prefrontal neurons. As he writes:

“All these adaptations point to the same evolutionary trend. During hominization, the networks of our prefrontal cortex grew denser and denser, to a larger extent than would be predicted by brain size alone. Our workspace circuits expanded way beyond proportion, but this increase is probably just the tip of the iceberg. We are more than just primates with larger brains. I would not be surprised if, in the coming years, cognitive neuroscientists find that the human brain possesses unique microcircuits that give it access to a new level of recursive, language-like operations.” 253

Presuming the remainder of the ‘iceberg’ does not overthrow Dehaene’s workspace paradigm, however, it seems safe to assume that our metacognitive machinery feeds from the same informational trough, that it is simply one among the many consumers of the information broadcast in conscious ignition. The ‘information horizon’ of the Workspace, in other words, is the information horizon of conscious metacognition. This would be why our capacity to report seems to be coextensive with our capacity to consciously metacognize: the information we can report constitutes the sum of information available for reflective problem-solving.

So consider the problem of a human brain attempting to consciously cognize the origins of its own activity—for the purposes of reporting to other brains, say. The first thing to note is that the actual, neurobiological origins of that activity are entirely unavailable. Since only information that ignites is broadcast, only information that ignites is available. The synoptic nature of the information ignited renders the astronomical complexities of ignition inaccessible to conscious access. Even more profoundly, the serial nature of ignition suggests that consciousness, in a strange sense, is always too late. Information pertaining to ignition can never be processed for ignition. This is why so much careful experimentation is required, why our intuitions are ‘ill-defined,’ why ‘most mental operations are opaque.’ The neurofunctional context of the workspace is something that lies outside the capacity of the workspace to access.

This explains the out-and-out inevitability of what I called ‘ulterior functionalism’ above: the information ignited constitutes the sum of the information available for conscious metacognition. Whenever we interrogate the origins of our conscious episodes, reflection only has our working memory of prior conscious episodes to go on. This suggests something as obvious as it is counterintuitive: that conscious metacognition should suffer a profound form of source blindness. Whenever conscious metacognition searches for the origins of its own activity, it finds only itself.

Free will, in other words, is a metacognitive illusion arising out of the structure of the global neuronal workspace, one that, while perhaps not appearing “in any rational machine confronted with the same computational problem” (262), would appear in any conscious system possessing the same structural features as the global neuronal workspace. The situation is almost directly analogous to the situation faced by our ancestors before Galileo. Absent any information regarding the actual celestial mechanics of the earth, the default assumption is that the earth has no such mechanics. Likewise, absent any information regarding the actual neural mechanics of consciousness, the default assumption is that consciousness also has no such mechanics.

But free will is simply one of many problems pertaining to our metacognitive intuitions. According to the Blind Brain Theory of the Appearance of Consciousness, a great number of the ancient and modern perplexities can be likewise explained in terms of metacognitive neglect, attributed to the fact that the structure and dynamics of the workspace render the workspace effectively blind to its own structure and dynamics. Taken together with Dehaene’s Global Neuronal Workspace Theory of Consciousness, it can explain away the ‘ill-defined intuitions’ that underwrite attributions of some extraordinary irreducibility to conscious phenomena.

On BBT, the myriad structural peculiarities that theologians and philosophers have historically attributed to the first person are perspectival illusions, artifacts of neglect—things that seem obvious only so long as we remain ignorant of the actual mechanics involved (See, “Cognition Obscura“). Our prescientific conception of ourselves is radically delusional, and the kind of counterintuitive findings Dehaene uses to patiently develop and explain GNWT are simply what we should expect. Noocentrism is as doomed as was geocentrism. Our prescientific image of ourselves is as blinkered as our prescientific image of the world, a possibility which should, perhaps, come as no surprise. We are simply another pocket of the natural world, after all.

But the overthrow of noocentrism is bound to generate even more controversy than the overthrow of geocentrism or biocentrism, given that so much of our self and social understanding relies upon this prescientific image. Perhaps we should all lie awake at night, pondering our pondering…

Just Plain Crazy Enactive Cognition: A Review and Critical Discussion of Radicalizing Enactivism: Basic Minds without Content, by Dan Hutto and Erik Myin

by rsbakker

Mechanically, the picture of how we are related to our environment is ontologically straightforward and astronomically complicated. Intentionally, the picture of how we are related to our environment is ontologically occult and surprisingly simple. Since the former is simply an extension of the scientific project into what was historically the black-box domain of the human, it is the latter that has been thrown into question. Pretty much all philosophical theories of consciousness and cognition break over how to conceive the relation between these two pictures. Very few embrace all apparent intentional phenomena,[1] but the vast majority of theorists embrace at least some—typically those they believe the most indispensable for cognition. Given the incompatibility of these with the mechanical picture, they need some way to motivate their application.

But why bother? If the intentional resists explanation in natural terms, and if the natural explanation of cognition is our primary desideratum, then why not simply abandon the intentional? The answer to this question is complex, but the fact remains that any explanation of knowing, whether it involves ‘knowing how’ or ‘knowing that,’ has to explain the manifest intentionality of knowledge. No matter what one thinks of intentionality, any scientific account of cognition is going to have to explain it—at least to be convincing.

Why? Because explanation requires an explanandum, and the explanandum in this instance is, intuitively at least, intentional through and through. To naturally explain cognition, one must naturally explain correct versus incorrect cognition, because, for better or worse, this is how cognition is implicitly conceived. The capacity to be right or wrong, true or false, is a glaring feature of all cognition, so much so that any explanation that fails to explain it pretty clearly fails to explain cognition.[2]

So despite the naturalistic inscrutability of intentionality, it nonetheless remains an ineliminable feature of cognition. We find ourselves in the apparent bind of having to naturalistically explain something that cannot be naturalistically explained in order to explain cognition. Thus what might be called the great Scandal of Cognitive Science: the lack of any consensus-commanding definition, let alone explanation, of what cognition is. The naturalistic inscrutability versus the explanatory ineliminability of intentionality is the perennial impasse, the ‘Master Hard Problem,’ one might say, underwriting the aforementioned Scandal.

Radicalizing Enactivism: Basic Minds without Content, by Dan Hutto and Erik Myin, constitutes another attempt to finesse this decidedly uncomfortable situation. Both Hutto and Myin are proponents of the ‘enactive,’ or ‘embodied,’ cognitive research programme, an outlook that emphasizes understanding cognition, and even phenomenal consciousness, in environmentally holistic terms—as ‘wide’ or ‘extended.’ The philosophical roots of enactivism are various and deep,[3] but they all share a common antagonism to the representationalism that characterizes mainstream cognitive science. Once one defines cognition in terms of computations performed on representations, one has effectively sealed cognition inside the head. Where enactivists are prone to explicitly emphasize the continuity of cognition and behaviour, representationalists are prone to implicitly assume their discontinuity. Even though animal life so obviously depends on solving environments via behaviour, both in its evolutionary genesis and in its daily maintenance, representationalists generally think this behavioural solving of the world is the product of a prior cognitive solving of representations of the world. The wide cognition championed by the enactivist, therefore, requires the critique of representationalism.

This is the task that Hutto and Myin set themselves. As they write, “We will have succeeded if, having reached the end of the book, the reader is convinced that the idea of basic contentless minds cannot be cursorily dismissed; that it is a live option that deserves to be taken much more seriously than it is currently” (xi).

As much as I enjoyed the book, I’m not so sure they succeed. But I’ve been meaning to discuss the relation between embodied cognitive accounts and the Blind Brain Theory for quite some time and Radicalizing Enactivism presents the perfect opportunity to finally do so. I know of a few souls following Three Pound Brain who maintain enactivist sympathies. If you happen to be one of them, I heartily encourage you to chip in your two cents.

Without any doubt, the strength of Radicalizing Enactivism, and the reason it seems to have garnered so many positive reviews, lies in the lucid way Hutto and Myin organize their critique around what they call the ‘Hard Problem of Content’:

“Defenders of CIC [Cognition necessarily Involves Content] must face up to the Hard Problem of Content: that positing informational content is incompatible with explanatory naturalism. The root trouble is that Covariance doesn’t Constitute Content. If covariance is the only scientifically respectable notion of information that can do the work required by explanatory naturalists, it follows that informational content does not exist in nature—or at least it doesn’t exist independently from and prior to the existence of certain social practices. If informational content doesn’t exist in nature, then cognitive systems don’t literally traffic in informational content…” xv

The information they are referring to here is semantic information, or as Floridi puts it in his seminal The Philosophy of Information, “the kind of information that we normally take to be essential for epistemic purposes” (82). To say that cognition necessarily involves content is to say that cognition amounts to the manipulation of information about. The idea is as intuitive as can be: the senses soak up information about the world, which the brain first cognizes then practically utilizes. For most theorists, the truth of this goes without saying: the primary issue is one of the role truth plays in semantic information. For these theorists, the problem that Hutto and Myin allude to, the Hard Problem of Content, is more of a ‘going concern’ than a genuine controversy. But if anything this speaks to its intractability as opposed to its relevance. For Floridi, who calls it the Symbol Grounding Problem (following Harnad (1990)), it remains “one of the most important open questions in the philosophy of information” (134). As it should, given that it is the question upon which the very possibility of semantic information depends.

The problem is one of explaining how information understood as covariance, which can be quantified and so rigorously operationalized, comes to possess the naturalistically mysterious property of ‘aboutness,’ and thus the equally mysterious property of ‘evaluability.’ As with the Hard Problem of Consciousness, many theoretical solutions have been proposed and all have been found wanting in some obvious respect.

Calling the issue ‘the Hard Problem of Content’ is both justified and rhetorically inspired, given the way it imports the obvious miasma of Consciousness Research into the very heart of Cognitive Science. Hutto and Myin wield it the way the hero wields a wooden stake in a vampire movie. They patiently map out the implicatures of various content dependent approaches, show how each of them cope with various challenges, then they finally hammer the Hard Problem of Content through their conceptual heart.

And yet, since this problem has always been a problem, there’s a sense in which Hutto and Myin are demanding that intentionalists bite a bullet (or stake) they bit long ago. This has the effect of rendering much of their argument rhetorical—at least it did for me. The problem isn’t that the intentionalists haven’t been able to naturalize intentionality in any remotely convincing way, the problem is that no one has—including Hutto and Myin!

And this, despite all the virtues of this impeccably written and fascinating book, has to be its signature weakness: the fact that Hutto and Myin never manage to engage, let alone surmount, the apparent ineliminability of the intentional. All they really do is exorcise content from what they call ‘basic’ cognition and perception, all the while conceding the ineliminability of content to language and ‘social scaffolding.’ The more general concession they make to explanatory ineliminability is actually explicit in their thesis “that there can be intentionally directed cognition and, even, perceptual experience without content” (x).

So if you read this book hoping to be illuminated as to the nature of the intentional, you will be disappointed. As much as Hutto and Myin would like to offer illumination regarding intentionality, all they really have is another strategic alternative in the end, a way to be less worried about the naturalistic inscrutability of content in particular rather than intentionality more generally. At turns, they come just short of characterizing Radical Enactive Cognition the way Churchill famously characterized democracy: as the least worst way to conceptualize cognition.

So in terms of the Master Hard Problem of naturalistic inscrutability versus explanatory ineliminability, they also find it necessary to bite the inscrutability bullet, only as softly as possible lest anyone hear. They are not interested in any thoroughgoing content skepticism, or what they call ‘Really Radical Enactive or Embodied Cognition’: “Some cognitive activity—plausibly, that associated with and dependent upon the mastery of language—surely involves content” (xviii). Given that their Hard Problem of Content partitions the Master Problem along such narrow, and ultimately arbitrary, lines, it becomes difficult to understand why anyone should think their position ‘radical’ in any sense.

If they’re not interested in any thoroughgoing content skepticism, they’re even less interested in any thoroughgoing meaning skepticism. Thus the sense of conceptual opportunism that haunted my reading of the book: the failure to tackle the problem of intentionality as a whole lets them play fast and loose with the reader’s intuitions of explanatory ineliminability. Representational content, after all, is the traditional and still (despite the restlessness of graduate students around the world) canonical way of understanding ‘intentional directedness.’ Claiming that representational content runs afoul of inscrutability amounts to pointing out the obvious. This means the problem lies in its apparent ineliminability. Pointing out that the representational mountain cannot be climbed simply begs the question of how one gets around it. Systematically avoiding this question lets Hutto and Myin have it both ways, to raise the problem of inscrutability where it serves their theoretical interests, all the while implicitly assuming the very ineliminability that justifies it.

One need only compare the way they hold Tyler Burge (2010) accountable to the Hard Problem of Content in Chapter 6 with their attempt to circumvent the Hard Problem of Consciousness in Chapter 8. Burge accepts both inscrutability, the apparent inability to naturalize intentionality, and ineliminability, the apparent inability to explain cognition without intentionality. Like Bechtel, he thinks representational inscrutability is irrelevant insofar as cognitive science has successfully operationalized representations. Rather than offer a ‘straight solution’ to the Hard Problem of Content, Burge argues that we should set it aside, and allow science—and the philosophy concerned with it—to continue pursuing achievable goals.

Hutto and Myin complain:

“Without further argumentation, Burge’s proposal is profoundly philosophically unsatisfying. Even if we assume that contentful states of mind must exist because they are required by perceptual science, this does nothing to address deeply puzzling questions about how this could be so. It is, in effect, to argue from the authority of science. We are asked to believe in representational content even though none of the mysteries surrounding it are dealt with—and perhaps none of them may ever be dealt with. For example, how do the special kinds of natural norms of which Burge speaks come into being? What is their source, and what is their basis? How can representational contents qua representational contents cause, or bring about, other mental or physical events?” 116-117

But when it comes to the Hard Problem of Consciousness, however, Hutto and Myin find themselves whistling an argumentative tune that sounds eerily similar to Burge’s. Like Burge, they refuse to offer any ‘straight solutions,’ arguing that “[r]ather than presenting science and philosophy with an agenda of solving impossible problems, [their] approach liberates both science and philosophy to pursue goals they are able to achieve” (178). And since this is the last page of the book, no corresponding problem of ‘profound philosophical dissatisfaction’ ever arises.

The problem of Radicalizing Enactivism—and the reason why I think it will ultimately harden opinions against the enactivist programme—lies in its failure to assay the shape of what I’ve been calling the Master Problem of naturalistic inscrutability and explanatory ineliminability. The inscrutability of content is simply a small part of this larger problem, which involves not only the inscrutability of intentionality more generally, but the all-important issue of ineliminability as well, the fact that various ‘intentional properties’ such as evaluability so clearly seem to belong to cognition. By focussing on the inscrutability of content to the exclusion of the Master Problem, they are able to play on specific anxieties due to inscrutability without running afoul of more general scruples regarding ineliminability. They can eat their intentional cake and have it too.[4]

Personally, I’m inclined to agree with the more acerbic critics of so-called ‘radical,’ or anti-representationalist, enactivism: it simply is not a workable position.[5] But I think I do understand its appeal, why, despite forcing its advocates to fudge and dodge the way they seem to do on what otherwise seem to be relatively straightforward issues, it nevertheless continues to grow in popularity. First and foremost, the problem of inscrutability has grown quite long in the tooth: after decades of pondering this problem, our greatest philosophical minds have only managed to deepen the mire. Add to this the successes of DST and situated AI, plus the simple observation that we humans are causally embedded in—‘coupled to’—our causal environments, and it becomes easy to see how mere paradigm fatigue can lapse into outright paradigm skepticism.

I think Hutto and Myin are right in insisting that representationalism has been played out, that it’s time to move on. The question is really only one of how far we have to move. I actually think this, the presentiment of needing to get away, to start anew, is why ‘radical’ has become such a popular modifier in embodied cognition circles. But I’m not sure it’s a modifier that any of these positions necessarily deserve. I say this because I’m convinced that answering the Master Problem of inscrutability versus ineliminability forces us to move far, far further than any philosopher (that I know of at least) has hitherto dared to go. The fact is Hutto and Myin remain intentionalists, plain and simple. To put it bluntly: if they count as ‘radical,’ then they better lock me up, because I’m just plain crazy![6]

If I’m right, the only way to drain the inscrutability swamp is to tackle the problem of inscrutability whole, which is to say, to tackle the Master Problem. So long as inscrutability remains a problem, the strategy of partitioning intentionality into ‘good’ and ‘bad,’ eliminable and ineliminable—the strategy that Hutto and Myin share with representationalists more generally—can only lead to a reorganization of the controversy. Perhaps one of these reorganizations will turn out to be the lucky winner—who can say?—but it’s important to see that Radical Enactive Cognition, despite its claims to the contrary, amounts to ‘more of the same’ in this crucial respect. All things being equal, it’s doomed to complicate as opposed to solve, insofar as it merely resituates (in this case, literally!) the problem of inscrutability.

Now I’m an institutional outsider, which is rarely a good thing if you have a dramatic reconceptualization to sell. When matters become this complicated, professionalization allows us to sort the wheat from the chaff before investing time and effort in either. The problem, however, is that chaff seems to be all anyone has. What I’m calling the Scandal of Cognitive Science represents as clear an example of institutional failure as you will find in the sciences. Given that the problem of inscrutability turns on explicit judgments and implicit assumptions that have been institutionalized, there’s a sense in which hobbyists such as myself, individuals who haven’t been stamped by the conceptual prejudices of their supervisors, or shamed out of pursuing an unconventional line of reasoning by the embarrassed smiles of their peers, may actually have a kind of advantage.

Regardless, there are novel ways to genuinely radicalize this problem, and if they initially strike you as ‘crazy,’ it might just be because they are sane. The Scandal of Cognitive Science, after all, is the fact that its members have no decisive means to judge one way or another! So, with this in mind, I want to introduce what might be called ‘Just Plain Crazy Enactive Cognition’ (JPCEC), an attempt to apply Hutto and Myin’s ultimately tendentious dialectical use of inscrutability across the board—to solve the Master Problem of naturalistic inscrutability and explanatory ineliminability, in effect. It can be done—I actually think cognitive scientists of the future will smirk and shake their heads, reviewing the twist we presently find ourselves in, but only because they will have internalized something similar to the decidedly alien view I’m about to introduce here.

For reasons that should become apparent, the best way to introduce Just Plain Crazy Enactive Cognition is to pick up where Hutto and Myin end their argument for Radical Enactive Cognition: the proposed solution to the Hard Problem of Consciousness they offer in Chapter 8. The Hard Problem of Consciousness, of course, is the problem of explaining phenomenal properties in naturalistic terms of physical structures and dynamics. In accordance with their enactivism, Hutto and Myin hold that phenomenality is environmentally determined in certain important respects. Since ‘wide phenomenality’ is incompatible with qualia as normally understood, this entails qualia eliminativism, which warrants rejecting the explanatory gap—the Hard Problem of Consciousness. They adopt the Dennettian argument that the Hard Problem is impossible to solve given the definition of qualia as “intrinsically qualitative, logically private, introspectable, incomparable, ineffable, incorrigible entities of our mental acquaintance” (156). And since impossible questions warrant no answers, they refuse to listen:

“What course do we recommend? Stick with [Radical Enactive Cognition] and take phenomenality to be nothing but forms of activities—perhaps only neural—that are associated with environment-involving interactions. If that is so, there are not two distinct relata—the phenomenal and the physical—standing in a relation other than identity. Lastly, come to see that such identities cannot, and need not be explained. If so, the Hard Problem totally disappears.” 169

When I first read this, I wrote ‘Wish It Away Strategy?’ in the margin. On my second reading, I wrote, ‘Whew! I’m glad consciousness isn’t a baffling mystery anymore!’

The first note was a product of ignorance; I simply didn’t know what was coming next. Hutto and Myin adopt a variant of the Type B Materialist response to the Hard Problem, admitting that there is an explanatory gap, while denying any ontological gap. Conscious experiences and brain-states are considered identical, though the phenomenal and physical concepts we use to communicate them are systematically incompatible. It is the difference between the latter that fools us into imputing some kind of ontological difference between the former, giving license to innumerable, ultimately unanswerable questions. Ontological identity means there is no Hard Problem to be solved. Conceptual difference means that phenomenal vocabularies cannot be translated into physical vocabularies, that the phenomenal is ‘irreducible.’ As a result, the phenomenal character of experience cannot be physically explained—it is entirely natural, but utterly inexplicable in natural terms.

But Hutto and Myin share the standard objection against Type B Materialisms: their inability to justify their foundational identity claim.

“Standard Type B offerings therefore fail to face up to the root challenge of the Hard Problem—they fail to address worries about the intelligibility of making certain identity claims head on. They do nothing to make the making of such claims plausible. The punch line is that to make a credible case for phenomeno-physical identity claims it is necessary to deal with—to explain away—appearances of difference in a more satisfactory way than by offering mere stipulations.” 174

Short of some explanation of the apparent difference between conscious experiences and brain states, in other words, Type B approaches can only be ‘wish it away strategies.’ The question accordingly becomes one of motivating the identity of the phenomenal and the physical. Since Hutto and Myin think the naturalistic inscrutability of phenomenality renders standard scientific identification impossible, they argue that the practical, everyday identity between the phenomenal and the physical we implicitly assume amply warrants the required identification. And as it turns out, this implicit everyday identity is extensive or wide:

“Enactivists foreground the ways in which environment-involving activities are required for understanding and conceiving of phenomenality. They abandon attempts to explain phenomeno-physical identities in deductive terms for attempts to motivate belief in such identities by reminding us of our common ways of thinking and talking about phenomenal experience. Continued hesitance to believe in such identities stems largely from the fact that experiences—even if understood as activities—are differently encountered by us: sometimes we live them through embodied activity and sometimes we get at them only descriptively.” 177

Thus the second comment I wrote reading the above passage!

What ‘motivates’ the enactive Type B materialist’s identity claim, in other words, is simply the identity we implicitly assume in our worldly engagements, an identity that dissolves because of differences intrinsic to the activity of theoretically engaging phenomenality.

I’m assuming that Hutto and Myin use ‘motivate,’ rather than ‘justify,’ simply because it remains entirely unclear why the purported assumption of identity implicit in embodied activity should trump the distinctions made by philosophical reflection. As a result, the force of this characterization is not so much inferential as it is redemptive. It provides an elegant enough way to rationalize giving up on the Hard Problem via assumptive identity, but little more. Otherwise it redeems the priority of lived life, and, one must assume, all the now irreducible intentional phenomena that go with it.

The picture they paint has curb appeal, no doubt about that. In terms of our Master Hard Problem, you could say that Radical Enactivism uses ‘narrow inscrutability’ to ultimately counsel (as opposed to argue) wide ineliminability. All we have to be is eliminativists about qualia and non-linguistic content, and the rest of the many-coloured first-person comes for free.

The problem—and it is a decisive one—is that redemption just ain’t a goal of naturalistic inquiry, no matter how speculative. Since our cherished, prescientific assumptions are overthrown more often than not, a theory’s ability to conserve those assumptions (as opposed to explain them) should warn us away, if anything. The rational warrant of Hutto and Myin’s recommendation lies entirely in assuming the epistemic priority of our implicit assumptions, and this, unfortunately, is slender warrant indeed, presuming, as it does, that when it comes to this one particular yet monumental issue—the identity of the physical and the phenomenal—we’re better philosophers when we don’t philosophize than when we do!

Not surprisingly, questions abound:

1) What, specifically, is the difference between ‘embodied encounters’ and ‘descriptive’ ones?

2) Why are the latter so prone to distort?

3) And if the latter are so prone to distort, to what extent is this description of ‘embodied activity’ potentially distorted?

4) What is the nature of the confounds involved?

5) Is there any way to puzzle through parts of this problem given what the sciences of the brain already know?

6) Is it possible to hypothesize what might be going on in the brain, such that we find ourselves in such straits?

As it turns out, these questions are not only where Radical Enactive Cognition ends, but also where Just Plain Crazy Enactive Cognition begins. Hutto and Myin can’t pose these questions because their ‘motivation’ consists in assuming we already implicitly know all that we need to know to skirt (rather than shirk) the Hard Problem of Consciousness. Besides, their recommendation is to abandon the attempt to naturalistically answer the question of the phenomeno-physical relation. Any naturalistic inquiry into the question of how theoretical reflection distorts the presumed ‘whole’ (‘integral,’ or ‘authentic’) nature of our implicit assumption would seem to require some advance, naturalistic understanding of just what is being distorted—and we have been told that no such understanding is possible.

This is where JPCEC begins, on the other hand, because it assumes that the question of inscrutability and ineliminability is itself an empirical one. Speculative recommendations such as Hutto and Myin’s only possess the intuitive force they do because we find it impossible to imagine how the intentional and the phenomenal could be rendered compatible with the natural. Given the conservative role that failures of imagination have played in science historically, JPCEC assumes the solution lies in the same kind of dogged reimagination that has proven so successful in the past. Given that the intentional and the phenomenal are simply ‘more nature,’ then the claim that they represent something so extraordinary, either ontologically or epistemologically, as to be somehow exempt from naturalistic cognition has to be thought extravagant in the extreme. Certainly it would be far more circumspect to presume that we simply don’t know.

And here is where Just Plain Crazy Enactive Cognition sets its first, big conceptual wedge: not only does it assume that we don’t know—that the hitherto baffling question of the first person is an open question—it asks the crucial question of why we don’t know. How is it that the very thing we once implicitly and explicitly assumed was the most certain, conscious experience, has become such a dialectical swamp?

The JPCEC approach is simple: Noting the role the scarcity of information plays in the underdetermination of scientific theory more generally, it approaches this question in these very terms. It asks, 1) What kind of information is available for deliberative, theoretical metacognition? 2) What kind of cognitive resources can be brought to bear on this information? And 3) Are either of these adequate to the kinds of questions theoreticians have been asking?

And this has a remarkable effect of turning contemporary Philosophy of Mind on its head. Historically, the problem has been one of explaining how physical structure and dynamics could engender the first-person in either its phenomenal or intentional guises. The problem, in other words, is traditionally cast in terms of accomplishment. How could neural structure and dynamics generate ‘what is it likeness’? How could causal systems generate normativity? The problem of inscrutability is simply a product of our perennial inability to answer these questions in any systematically plausible fashion.[7]

Just Plain Crazy Enactive Cognition inverts this approach. Rather than asking how the brain could possibly generate this or that apparent feature of the first-person, it asks how the brain could possibly cognize any such features in the first place. After all, it takes a tremendous amount of machinery to accurately, noninferentially cognize our environments in the brute terms we do: How much machinery would be required to accurately, noninferentially cognize the most complicated mechanism in the known universe?[8]

JPCEC, in other words, begins by asking what the brain likely can and cannot metacognize. And as it turns out, we can make a number of safe bets given what we already know. Taken together, these bets constitute what I call the Blind Brain Theory, or BBT, the systematic explanation of phenomenality and intentionality via human cognitive and metacognitive—this is the important part—incapacity.

Or in other words, neglect. The best way to explain the peculiarity of our phenomenal and intentional inklings is via a systematic account of the information (construed as systematic differences making systematic differences) that our brain cannot access or process.

So consider the unity of consciousness, the feature that most convinced Descartes to adopt dualism. Where the tradition wonders how the brain could accomplish such a thing, BBT asks how the brain could accomplish anything else. Distinctions require information. Flickering lights fuse in experience once their frequency surpasses our detection threshold. What looks like paint spilled on the sidewalk from a distance turns out to be streaming ants. Given that the astronomical complexity of the brain far and away outruns its ability to cognize complexity, the miracle, from the metacognitive standpoint, would be the high-dimensional intuition of the brain as an externally related multiplicity.

As it turns out, many of the perplexing features of the first-person can be understood in terms of information privation. Neglect provides a way to causally characterize the narrative granularity of the ‘mind,’ to naturalize intentionality and phenomenality, in effect. And in doing so it provides a parsimonious and comprehensive way to understand both naturalistic inscrutability and explanatory ineliminability. What I’ve been calling JPCEC, in other words, allows us to solve the Master Hard Problem.[9]

It turns on two core claims. First, it agrees with the consensus opinion that cognition and perception are heuristic, and second, it asserts that social cognition and metacognition in particular are radically heuristic.

To say that cognition and perception are heuristic is to say they exploit the structure of a given problem ecology to effect solutions in the absence of other relevant information. This much is widely accepted, though few have considered its consequences in any detail. If all cognition is heuristic, then all cognition possesses 1) a ‘problem ecology,’ as Todd and Gigerenzer term it (2012), some specific domain of reliability, and 2) a blind spot, an insensitivity, structural or otherwise, to information pertinent to the problem.

To understand the second core claim—the idea that social cognition and metacognition are radically heuristic—one has to appreciate that wider heuristic blind spots generally mean narrower problem ecologies (though this need not always be the case). Given the astronomical complexity of the human brain—or any brain for that matter—we must presume that our heuristic repertoire for solving brains, whether belonging to others or belonging to ourselves, involves extremely wide neglect, which in turn implies very narrow problem ecologies. So if it turns out that metacognition is primarily adapted to things like refining practical skills, consuming the activities of the default mode, and regulating social performance, then it becomes a real question whether it possesses the cognitive and/or informational resources required to solve the kinds of problems philosophers are prone to ponder. Philosophical reflection on the ‘nature of knowledge’ could be akin to using a screwdriver to tighten bolts! The fact that we generally have no metacognitive inkling of swapping between different cognitive tools whatsoever pretty clearly suggests it very well might be—at least when it comes to theorizing things such as ‘knowledge’![10]

At this point it’s worth noting how this way of conceiving cognition and perception amounts to a kind of ‘subpersonal enactivism.’ To say cognition is heuristic and fractionate is to say that cognition cannot be understood independent of environments, no more than a screwdriver can be understood independent of screws. It’s also worth noting how this simply follows from the mechanistic paradigm of the natural sciences. Humans are just another organic component of their natural environments: emphasizing the heuristic, fractionate nature of cognition and perception allows us to investigate our ‘dynamic componency’ in a more detailed way, in terms of specific environments cuing specific heuristic systems cuing specific behaviours and so on.[11]

But if this subpersonal enactivism is so obvious—if ‘cognitive componency’ simply follows from the explanatory paradigm of the natural sciences—then why all the controversy? Why should ‘enactive’ or ‘embodied’ cognition even be a matter of debate? What motivates the opportunistic eliminativism of Radical Enactive Cognition, remember, is the way content has the tendency to ‘internalize’ cognition, to narrow it to the head. Once the environment is rolled up into the representational brain, trouble-shooting the environment becomes intracranial. So, if one can find some way around the apparent explanatory ineliminability of content, one can simply assert the cognitive componency implied by the mechanistic paradigm of natural science. And this, remember, was what made Hutto and Myin’s argument more deceptive than illuminating. Rather than focus on ineliminability, they turned to inscrutability, the bullet everyone—including themselves!—has already implicitly or explicitly bitten.

Just Plain Crazy Enactive Cognition, however, diagnoses the problem in terms of metacognitive neglect. Content, as it turns out, isn’t the only way to short-circuit the apparent obviousness of cognitive componency. One might ask, for instance, why it took us so damn long to realize the fractionate, heuristic nature of our own cognitive capacities. Metacognitive neglect provides an obvious answer: Absent any way of making the requisite distinctions, we simply assumed cognition was monolithic and universal. Absent the ability to discriminate environmentally dependent cognitive functions, it was difficult to see cognition as a biological component of a far larger, ‘extensive’ mechanism. A gear that can turn every wheel is no gear at all.

‘Simples’ are cheaper to manage than ‘complexes’ and evolution is a miser. We cognize/metacognize persons rather than subpersonal assemblages because this was all the information our ancestors required. Not only is metacognition blind to the subpersonal, it is blind to the fact that it is blind: as far as it’s concerned, the ‘person’ is all there is. Evolution had no clue we would begin reverse-engineering her creation, begin unearthing the very causal information that our social and metacognitive heuristic systems are adapted to neglect. Small wonder we find ourselves so perplexed! Every time we ask how this machinery could generate ‘persons’—rational, rule-following, and autonomous ‘agents’—we’re attempting to understand the cognitive artifact of a heuristic system designed to problem solve in the absence of causal information in terms of causal information. Not surprisingly, we find ourselves grinding our heuristic gears.

The person, naturalistically understood, can be seen as a kind of strategic simplification. Given the abject impossibility of accurately intuiting itself, the brain only cognizes itself so far as it once paid evolutionary dividends and no further. The person, which remains naturalistically inscrutable as an accomplishment (How could physical structure and dynamics generate ‘rational agency’?) becomes naturalistically obvious, even inevitable, when viewed as an artifact of neglect.[12] Since intuiting the radically procrustean nature of the person requires more information, more metabolic expense, evolution left us blessedly ignorant of the possibility. What little we can theoretically metacognize becomes an astounding ‘plenum,’ the sum of everything to be metacognized—a discrete and naturalistically inexplicable entity, rather than a shadowy glimpse serving obscure ancestral needs. We seem to be a ‘rational agent’ before all else…

Until, that is, disease or brain injury astounds us.[13]

This explanatory pattern holds for all intentional phenomena. Intentionality isn’t so much a ‘stance’ we take to systems, as Dennett argues, as it is a particular family of heuristic mechanisms adapted to solve certain problem ecologies. Intentionality, in other words, is mechanical—which is to say, not intentional. Resorting to these radically heuristic mechanisms may be the only way to solve a great number of problems, but it doesn’t change the fact that what we are actually doing, what is actually going on in our brain, is natural like anything else, mechanical. The fact that you, me, or anyone exploits the heuristic efficiency of terms like ‘exploit’ no more presupposes any implicit commitment to the priority, let alone the ineliminability, of intentionality than reliance on naive physics implies the falsehood of quantum mechanics.

This has to be far and away the most difficult confound to surmount: the compulsion to impute efficacy to our metacognitive inklings. So it seems that what we call ‘rationality,’ even though it so obviously bears all the hallmarks of informatic underdetermination, must in some way drive ‘action.’ As the sum of what our brain can cognize of its activity, our brain assumes that it exhausts that activity. It mistakes what little it cognizes for the breath-taking complexity of what it actually is. The granular shadows—‘reasons,’ ‘rules,’ ‘goals,’ and so on—seem to cast the physical structure and dynamics of the brain, rather than vice versa. The hard won biological efficacy of the brain is attributed to some mysterious, reason-imbibing, judgment-making ‘mind.’

Metacognitive incapacity simply is not on the metacognitive menu. Thus the reflexive, question-begging assumption that any use of normative terms presupposes normativity rather than the spare mechanistic sketch provided above.

Here we can clearly see both the form of the Master Hard Problem and the way to circumvent it. Intentionality seems inscrutable to naturalistic explanation because intentional heuristics are adapted to solve problems in the absence of pertinent causal information—the very information naturalistic explanation requires. Metacognitive blindness to the fractionate, heuristic nature of cognition also means metacognitive blindness to the various problem ecologies those heuristics are adapted to solve. In the absence of information (difference making differences), we historically assumed simplicity, a single problem ecology with a single problem solving capacity. Only the repeated misapplication of various heuristics over time provided the information needed to distinguish brute subcapacities and subecologies. Eventually we came to distinguish causal and intentional problem-solving, and to recognize their peculiar, mutual antipathy as well. But so long as metacognition remained blind to metacognitive blindness, we persisted in committing the Accomplishment Fallacy, cognizing intentional phenomena as they appeared to metacognition as accomplishments, rather than side-effects of our brain’s murky sense of itself.

So instead of seeing cognition wholly in enactive terms of componency—which is to say, in terms of mechanistic covariance—we found ourselves confronted by what seemed to be obvious, existent ‘intentional properties.’ Thus explanatory ineliminability, the conviction that any adequate naturalistic account of cognition would have to naturalistically account for intentional phenomena such as evaluability—the very properties, it so happens, that underwrite the attribution of representational content to the brain.

So, where Radical Enactive Cognition is forced to ignore the Master Problem in order to opportunistically game the problem of naturalistic inscrutability (in its restricted representationalist form) to its own advantage, Just Plain Crazy Enactive Cognition is able to tackle the problem whole by simply turning the traditional accomplishment paradigm upside down. The theoretical disarray of cognitive science, it claims, is an obvious artifact of informatic underdetermination. What distinguishes this instance of underdetermination is the degree it turns on the invisibility of metacognitive incapacity, the way cognizing the insufficiency of the information and resources available to metacognition requires more information and resources. This generates the illusion of metacognitive sufficiency, the implicit conviction that what we intuit is what there is…

That we actually possess something called a ‘mind.’

Thus the ‘Just Plain Crazy’—the Blind Brain Theory offers nothing by way of redemption, only what could be the first naturalistically plausible way out of the traditional maze. On BBT, ‘consciousness’ or ‘mind’ is just the brain seen darkly.

In Hutto and Myin’s account of Radical Enactive Cognition, considerations of the kinds of conceptual resources various positions possess to tackle various problems figure large. The more problem solving resources a position possesses the better. In this respect, the superiority of JPCEC to REC should be clear already: insofar as REC, espousing both inscrutability and ineliminability, actually turns on the Master Hard Problem, it clearly lacks the conceptual resources to solve it.

But surely more is required. Any position that throws out the baby of explanatory ineliminability with the bathwater of naturalistic inscrutability has a tremendous amount of ‘splainin’ to do. In his Radical Embodied Cognitive Science, Anthony Chemero does an excellent job illustrating the ‘guide to discovery’ objection to antirepresentationalist approaches to cognition such as his own. He relates the famous debate between Ernst Mach and Ludwig Boltzmann regarding the role of ‘atoms’ in physics. For Mach, atoms amounted to an unnecessary fairy-tale posit, something that serious physicists did not need to carry out their experimental work. In his 1900 “The Recent Development of Method in Theoretical Physics,” however, Boltzmann turned the tide of the debate by showing how positing atoms had played an instrumental role in generating a number of further discoveries.

The power of this argumentative tactic was brought home to me in a recent talk by Bill Bechtel,[14] who presented his own guide to discovery argument for representationalism by showing the way representational thinking facilitated the discovery of place and grid cells and the role they play in spatial memory and navigation. Chemero, given his pluralism, is more interested in showing that radical embodied approaches possess their own pedigree of discoveries. In Radicalizing Enactivism, Hutto and Myin seem more interested in simply blunting the edge of these arguments and moving on. In their version, they stress the fact that scientists actually don’t talk about content and representation all that much. Bechtel, however, was at pains to show that they do! And why shouldn’t they, he would ask, given that we find ‘maps’ scattered throughout the brain?

The big thing to note here is the inevitability of argumentative stalemate. Neither side possesses the ‘conceptual resources’ to do much more than argue about what actual researchers actually mean or think and how this bears on their subsequent discoveries. Insofar as it possesses the ‘he-said-she-said’ form of a domestic spat, you could say this debate is tailor-made to be intractable. Who the hell knows what anyone is ‘really thinking’? And it seems we make discoveries both positing representations and positing their absence!

Just Plain Crazy Enactive Cognition, however, possesses the resources to provide a far more comprehensive, albeit entirely nonredemptive, view. It begins by reminding us that any attempt to understand the brain necessarily involves the brain. It reminds us, in other words, of the subpersonally enactive nature of all research, that it involves physical systems engaging other physical systems. Insofar as researchers have brains, this has to be the case. The question then becomes one of how representational cognition could possibly fit into this thoroughly mechanical picture.

Pointing out our subpersonal relation to our subject matter is well and fine. The problem is one of connecting this picture to our intuitive, intentional understanding of our relation. Given the appropriate resources, we could specify all the mechanical details of the former relation—we could cobble together an exhaustive account of all the systematic covariances involved—and still find ourselves unable to account for out-and-out crucial intentional properties such as ‘evaluability.’ Call this the ‘cognitive zombie hunch.’

Now the fact that ‘hard problems’ and ‘zombie hunches’ seem to plague all the varying forms of intentionality and phenomenality is certainly no coincidence. But if other approaches touch on this striking parallelism at all, they typically advert—the way Hutto and Myin do—to some vague notion of ‘conceptual incompatibility,’ one definitive enough to rationalize some kind of redemptive form of ‘irreducibility,’ and nothing more. On Just Plain Crazy Enactive Cognition, however, these are precisely the kinds of problems we should expect given the heuristic character of the cognitive systems involved.

To say that cognition is heuristic, recall, is to say, 1) that it possesses a given problem-ecology, and 2) that it neglects otherwise relevant information. As we have seen, (1) warrants what I’ve been calling ‘subpersonal enactivism.’ The key to unravelling the knot of representationalism, of finding some way to square the purely mechanical nature of cognition with apparently self-evident intentional properties such as evaluability, lies in (2). The problem, remember, is that any exhaustive mechanical account of cognition leaves us unable to account for the intentional properties of cognition. One might ask, ‘Where do these properties come from? What makes ‘evaluability,’ say, tick?’ But the problem, of course, is that we don’t know. What is more, we can’t even fathom what it would take to find out. Thus all the second-order attempts to reinterpret obvious ignorance into arcane forms of ‘irreducibility.’ But if we can’t naturalistically explain where these extraordinary properties come from, perhaps we can naturalistically explain where our idea of these extraordinary properties comes from…

Where else, if not metacognition?

And as we saw above, metacognition involves neglect at every turn. Any human brain attempting to cognize its own cognitive capacities simply cannot—for reasons of structural complicity (the fact that it is the very thing it is attempting to cognize) and target complexity (the fact that its complexity vastly outruns its ability to cognize complexity)—cognize those capacities the same way it cognizes its natural environments, which is to say, causally. The human brain necessarily suffers what might be called proximal or medial neglect. It constitutes its own blind spot, insofar as it cannot cognize its own functions in the same manner that it cognizes environmental functions.

One minimal phenomenological claim one could make is that the neurofunctionality that enables conscious cognition and experience is in no way evident in conscious cognition and experience. On BBT, this is a clear cut artifact of medial neglect, the fact that the brain simply cannot engage the proximate mechanical complexities it requires to engage its distal environments. Solving itself, therefore, requires a special kind of heuristic, one cued to providing solutions in the abject absence of causal information pertaining to its actual neurofunctionality.

Think about it. You see trees, not trees causing you to see trees. Even though you are an environmentally engaged ‘tree cognizing’ system, phenomenologically you simply see… trees. All the mechanical details of your engagement, the empirical facts of your coupled systematicity, are walled off by neglect—occluded. Because they are occluded, ‘seeing trees’ not only becomes all that you can intuit, it becomes all that you need to intuit, apparently.

Thus ‘aboutness,’ or intentionality in Brentano’s restricted sense: given the structural occlusion of our componency, the fact that we’re simply another biomechanically embedded biomechanical system, problems involving our cognitive relation to our environments have to be solved in some other way, in terms not requiring this vast pool of otherwise relevant information. Aboutness is this alternative, the primary way our brains troubleshoot their cognitive engagements.

It’s important to note here that the ‘aboutness heuristic’ lies outside the brain’s executive purview, that its deployment is mandatory. No matter how profoundly we internalize our intellectual understanding of our componency, we see trees nevertheless. This is what makes aboutness so compelling: it constitutes our intuitive baseline.

So, when our brains are cued to troubleshoot their cognitive engagements they’re attempting to finesse an astronomically complex causal symphony via a heuristic that is insensitive to causality. This means that aboutness, even though it captures the brute cognitive relation involved, has no means of solving the constraints involved. Thus normativity, the hanging constraints (or ‘skyhooks’ as Dennett so vividly analogizes them) we somehow intuit when troubleshooting the accuracy of various aboutnesses. As a result, we cognize cognition as a veridical aboutness—in terms commensurate with subjectivity rather than componency.

Nor do we seem to have much choice. Our intuitive understanding of understanding as evaluable, intentional directedness seems to be reflexive, a kind of metacognitive version of a visual illusion. This is why thought experiments like Leibniz’s Mill or arguments like Searle’s Chinese Room rattle our intuitions so: because, for one, veridical aboutness heuristics have adapted to solve problems without causal information, and because deliberative metacognition, at least, cannot identify the heuristics as such and so assumes the universality of their application. Our intuitive understanding of understanding intuitively strikes us as the only game in town.

This is why the frame of veridical aboutness anchors countless philosophical chassis, why you find it alternately encrusted in the human condition, boiled down to its formal bones, pitched as the ground of mere experience, or painted as the whole of reality. For millennia, human philosophical thought has buzzed within it like a fly in an invisible Klein Bottle, caught in the self-same dichotomies of subject and object, ideal and real.

Philosophy’s inability to clarify any of its particularities attests to its metacognitive informatic penury. Intentionality is a haiku—we simply lack the information and resources to pin any one interpretation to its back. And yet, as obviously scant as this picture is, we’ve presumed the diametric opposite historically, endlessly insisting, as if afflicted with a kind of theoretical anosognosia, that it provides the very frame of intelligibility rather than a radically heuristic way to solve for cognition.

Thus the theoretical compulsion that is representationalism. Given the occlusion of componency, or medial neglect, any instance of mistaken cognition necessarily becomes binary, a relation between. To hallucinate is to be directed at something not of the world, which is to say, at something other than the world. The intuitions underwriting veridical directedness, in other words, lend themselves to further intuitions regarding the binary structure of mistaken cognition. Because veridical aboutness constitutes our mandatory default problem solving mode, any account of mistaken cognition in terms of componency—in terms of mere covariance—seems not only counter-intuitive, but hopelessly procrustean as well, to be missing something impossible to explain and yet ‘obviously essential.’ Since the mechanical functions of cognition are themselves mandatory to scientific understanding, theorists feel compelled to map veridical aboutness onto those functions.

Thus the occult notion of mental and perceptual content, the ontological attribution of veridical aboutness to various components in the brain (typically via some semantic account of information).

Given that the function of veridical aboutness is to solve in the absence of mechanical information, it is perhaps surprising that it is relatively easy to attribute to various mechanisms. Mechanistic inscrutability, it turns out, is apparently no barrier to mechanistic applicability. But this actually makes a good deal of sense. Given that any component of a mechanism is a component by virtue of its dynamic, systematic interrelations with the rest of the mechanism, it can always be argued that any downstream component possesses implicit ‘information about’ other parts of the mechanism. When that component is dedicated, however, when it simply discharges the same function come what may, the ‘veridical’ aspect becomes hard to understand, and the attribution seems arbitrary. Like our intuitive sense of agency, veridicality requires ‘wiggle room.’ This is why the attribution possesses real teeth only when the component at issue plays a variable, regulatory function like, say, a Watt governor on a steam engine. As mechanically brute as a Watt governor is, it somehow still makes ‘sense’ to say that it is ‘right or wrong,’ performing as it ‘should.’ (Make no mistake: veridical aboutness heuristics do real cognitive work, just in a way that resists mechanical analysis—short of Just Plain Crazy Enactive Cognition, that is).

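The Watt governor case can be made concrete with a toy feedback loop. The sketch below is purely illustrative (every constant is invented, and `simulate_governor` is a hypothetical name): the valve opening merely covaries with engine speed, and talk of the governor performing ‘rightly’ or ‘wrongly’ only gets traction against a target speed stipulated from outside the mechanism.

```python
# A toy Watt-governor loop (all constants invented for illustration).
# The valve opening merely covaries with engine speed; nothing in the
# mechanism is 'about' anything. Veridicality talk enters only via the
# target speed we stipulate from outside.

def simulate_governor(target=100.0, steps=500):
    speed, valve = 50.0, 0.5
    for _ in range(steps):
        # Spinning weights drop when the engine runs slow, opening the valve.
        valve += 0.02 * (target - speed) / target
        valve = min(1.0, max(0.0, valve))
        # Engine speed relaxes toward the level the valve admits.
        speed += 0.1 * (200.0 * valve - speed)
    return speed, valve

speed, valve = simulate_governor()
print(round(speed, 1))  # settles near the stipulated target of 100.0
```

Note that the ‘wiggle room’ the text describes is visible here as the variable, regulatory role of `valve`: a dedicated component that discharged the same function come what may would offer nothing for our veridical intuitions to latch onto.
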
The debate thus devolves into the blind (because we have no metacognitive inkling that heuristics are involved) application of competing heuristics. The representationalist generally emphasizes the component at issue, drawing attention away from the systematic nature of the whole to better leverage the sense of variability or ‘wiggle room’ required to cue our veridical intuitions. The anti-representationalist, on the other hand, will emphasize the mechanism as a whole, drawing attention to the temporally deterministic nature of the processes at work to block any intuition of variability, to deny the representationalist their wiggle room.

This was why Bechtel, in his presentation on the role representations played in the discovery of place and grid cells, remained fixated on the notion of ‘neural maps’: these are the components that, when conceived apart from the monstrously complicated neural mechanisms they function within, are most likely to trigger the intuition of veridical aboutness, and so seem like bits of nature possessing the extraordinary property of being true or false of the world—obvious representations.

Those bits, of course, possess no such extraordinary properties. Certainly they recapitulate environmental information, but any aboutness they seem to possess is simply an artifact of our hardwired penchant for problem-solving (or communicating our solutions) around our own pesky mechanical details.

But if anything speaks to the difficulty we have overcoming our intuitions of veridical aboutness, it is the degree to which so-called anti-representationalists like Hutto and Myin so readily concede it elsewhere. Apparently, even radicals have a hard time denying its reality. Even Dennett, whose position often verges on Just Plain Crazy Enactive Cognition, insists that intentionality can be considered ‘real’ to the extent that intentional attributions pick out real patterns.[15] But do they? For instance, how could positing a fictive relationship, veridical aboutness, solve anything, let alone the cognitive operations of the most complicated machine known? There’s no doubt that solutions follow upon such posits regularly enough. But the posit only needs to be systematically related to the actual mechanical work of problem-solving for that to be the case. Perhaps the posit solves an altogether different problem, such as the need to communicate cognitive issues.

The problem, in other words, lies with metacognition. In addition to asking what informs our intentional attributions, we need to ask what informs our attributions of ‘intentional attribution’? Does adopting the ‘intentional stance’ serve to efficiently solve certain problems, or does it serve to efficiently communicate certain problems solved by other means—even if only to ourselves? Could it be a kind of orthogonal ‘meta-heuristic,’ a way to solve the problem of communicating solutions? Dennett’s ‘intentional stance’ possesses nowhere near the conceptual resources required to probe the problem of intentionality from angles such as these. In fact, it lacks the resources to tackle the problem in anything but the most superficial naturalistic terms. As often as Dennett claims that the intentional arises from the natural, he never actually provides any account of how.[16]

As intuitively appealing as the narrative granularity of Dennett’s ‘intentional stance’ might be, it leaves the problem of intentionality stranded at all the old philosophical border stations.[17] The approach advocated here, however, where we speak of the deployment of various subpersonal heuristics, is less intuitive, hewing to componency as it does, but to the extent that it poses the problem of intentionality in mechanical as opposed to intentional terms, it stamps the passport, and finally welcomes intentionality to the realm of natural science. The mechanical idiom, which allows us to scale up and down various ‘levels of description,’ to speak of proteins and organelles and cells and organisms and ecologies in ontologically continuous terms, is tailor made for dealing with the complexities raised above.

Just Plain Crazy Enactive Cognition follows through on the problem of the intentional in a ruthlessly consistent manner. The story is mechanical all the way down—as we should expect, given the successes of the natural sciences. The ‘craziness,’ by its lights, is the assumption that one can pick and choose between intentional phenomena, eliminate this, yet pin the very possibility of intelligibility on that.

Consider Andy Clark’s now famous attempt (1994, 1997) to split the difference between embodied and intellectual approaches to cognition: the notion that some systems are, as he terms it, ‘representation hungry.’[18] One of the glaring difficulties faced by ‘radical enactive’ approaches turns on the commitment to direct realism. The representationalist has no problem explaining the constructed nature of perception, the fact that we regularly ‘see more than there is’: once the brain has accumulated enough onboard environmental ‘information about,’ direct sensory information is relegated to a ‘supervisory’ role. Since this also allows them to intuitively solve the ‘hard’ problem of illusion, biting the Hard Problem of Content seems more than a fair trade.

Those enactivists who eschew perceptual content reject not only ‘information about’ but all the explanatory work it seems to do. This puts them in the unenviable theoretical position of arguing that perception is direct, and that the environment, accordingly, possesses all the information required for perceptually guided behaviour. All sophisticated detection systems, neural or electronic, need to solve the Inverse Problem, the challenge of determining properties belonging to distal systems via the properties of some sensory medium. Since sensory properties are ambiguous between any number of target properties, added information is required to detect the actual property responsible. Short of the system accumulating environmental information, it becomes difficult to understand how such disambiguation could be accomplished. The dilemma becomes progressively more difficult the higher you climb the cognitive ladder. So with language, for instance, you simply see/hear simple patterns of shape/sound from which you derive everything from murderous intent to theories of cognition!

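The Inverse Problem can be illustrated in a few lines. The sketch below uses invented numbers and hypothetical function names: a single proximal measurement (projected size on a sensor) is compatible with indefinitely many distal causes, and only added, ‘onboard’ information, here a prior about typical object size, breaks the tie.

```python
# Toy illustration of the Inverse Problem (all numbers invented).
# Projected size on a sensor is object_size / distance, so one proximal
# reading is ambiguous between many distal situations.

def projected_size(size, distance):
    return size / distance

# All of these distal (size, distance) pairs yield the same proximal signal:
candidates = [(1.0, 2.0), (2.0, 4.0), (5.0, 10.0)]
readings = {projected_size(s, d) for s, d in candidates}
assert len(readings) == 1  # the sensory signal alone cannot disambiguate

# Accumulated environmental information breaks the tie: a prior that
# objects of this kind are about 2.0 units tall lets a system infer distance.
def infer_distance(reading, prior_size=2.0):
    return prior_size / reading

print(infer_distance(0.5))  # → 4.0, but only under the assumed prior
```

The point of the sketch is the one the paragraph above makes: without something playing the role of `prior_size`, accumulated environmental information, the disambiguation simply cannot be done.
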
Some forms of cognition, in other words, seem to be more representation hungry than others, with human communication appearing to be the most representation hungry of all. In all likelihood this is the primary reason Hutto and Myin opt to game naturalistic inscrutability and explanatory ineliminability the way they do, rather than argue anything truly radical.

But if this is where the theoretical opportunism of Radical Embodied Cognition stands most revealed, it is also where the theoretical resources of Just Plain Crazy Enactive Cognition—or the Blind Brain Theory—promise to totally redefine the debate as traditionally conceived. No matter how high we climb Clark’s Chain of Representational Hunger, veridical aboutness remains just as much a heuristic—and therefore just as mechanical—as before. On BBT, Clark’s Chain of Representational Hunger is actually a Chain of Mechanical Complexity: the more sophisticated the perceptually guided behaviour, the more removed from bare stimulus-response, the more sophisticated the machinery required—full stop. It’s componency all the way down. On a thoroughgoing natural enactive view—which is to say, a mechanical view—brains can be seen as devices that transform environmental risk into onboard mechanical complexity, a complexity that, given medial neglect, metacognition flattens into heuristics such as aboutness. Certainly part of that sophistication involves various recapitulations of environmental structure, numerous ‘maps,’ but only as components of larger biomechanical systems, which are themselves components of the environments they are adapted to solve. This is as much the case with ‘pinnacle cognition,’ human theoretical practice, as it is with brute stimulus and response. There’s no content to be found anywhere simply because, as inscrutability has shouted for so very long, there simply is no such thing outside of our metacognitively duped imaginations.

The degree that language seems to require content is simply the degree to which the mechanical complexities involved elude metacognition—which is to say, the degree to which language has to be heuristically cognized in noncausal terms. In the absence of cognizable causal constraints, the fact that language is a biomechanical phenomenon, we cognize ‘hanging constraints,’ the ghost-systematicity of normativity. In the absence of cognizable causal componency, the fact that we are mechanically embedded in our environments, we cognize aboutness, a direct and naturalistically occult relation that somehow binds words to world. In the absence of any way to cognize these radical heuristics as such, we assume their universality and sufficiency—convince ourselves that these things are real.

On the Blind Brain Theory, or as I’ve been calling it here, Just Plain Crazy Enactive Cognition, we are natural all the way down. On this account, intentionality is simply what mechanism looks like from a particular, radically blinkered angle. There is no original intentionality, and neither is there any derived intentionality. If our brains do not ‘take as meaningful,’ then neither do we. If environmental speech cues the application of various, radically heuristic cognitive systems in our brain, then this is what we are actually doing whenever we understand any speaker.

Intentionality is a theoretical construct, the way it looks whenever we ‘descriptively encounter’ or theoretically metacognize our linguistic activity—when we take a particular, information starved perspective on ourselves. As intentionally understood, norms, reasons, symbols, and so on are the descriptions of blind anosognosiacs, individuals convinced they can see for the simple lack of any intuition otherwise. The intuition, almost universal in philosophy, that ‘rule following’ or ‘playing the game of giving and asking for reasons’ is what we implicitly do is simply a cognitive conceit. On the contrary, what we implicitly do is mechanically participate in our environments as a component of our environments.

Now because it’s neglect that we are talking about here, which is to say, a cognitive incapacity that we cannot cognize, I appreciate how counter-intuitive—even crazy—this must all sound. What I’m basically saying is that the ancient skeptics were right: we simply don’t know what we are talking about when we turn to theoretical metacognition for answers. But where the skeptics were primarily limited to second-order observations of interpretative underdetermination, I have an empirical tale to tell, a natural explanation for that interpretative underdetermination (and a great deal besides), one close to what I think cognitive science will come to embrace in the course of time. Even if you disagree, I would wager that you do concede the skeptical challenge is a legitimate one, that there is a reason why so much philosophy can be read as a response to it. If so, then I would entreat you to regard this as a naturalized skepticism. The fact is, we have more than enough reason to grant the skeptic the legitimacy of their worry. In this respect, Just Plain Crazy Enactive Cognition provides a possible naturalistic explanation for what is already a legitimate worry.

Just consider how remarkably frail the intuitive position is despite seeming so obvious. Given that I used the term ‘legitimate’ in the preceding paragraph, the dissenter’s reflex will be to accuse me of obvious ‘incoherence,’ to claim that I am implicitly presupposing the very normativity I claim to be explaining away.

But am I? Is ‘presupposing normativity’ really what I am implicitly doing when I use terms such as ‘legitimate’? Well, how do you know? What informs this extraordinary claim to know what I ‘necessarily mean’ better than I do? Why should I trust your particular interpretation, given that everyone seems to have their own version? Why should I trust any theoretical metacognitive interpretation, for that matter, given their manifest unreliability?

I’ll wait for your answer. In the meantime, I’m sure you’ll understand if I continue assuming that whatever I happen to be implicitly doing is straightforwardly compatible with the mechanical paradigm of natural science.

For all its craziness, Just Plain Crazy Enactive Cognition is a very tough nut to crack. The picture it paints is a troubling one, to be sure. If empirically confirmed, it will amount to an overthrow of ‘noocentrism’ comparable to the overthrow of geocentrism and biocentrism in centuries previous.[19] Given our traditional understanding of ourselves, it is without a doubt an unmitigated disaster, a worst-case scenario come true. Given the quest to genuinely understand ourselves, however, it provides a means to dissolve the Master Problem, to naturalistically understand intentionality, and so a way to finally—finally!—cognize our profound continuity with nature.

In fact, the more you ponder it, the more inevitable it seems. Evolution gave us the cognition we needed, nothing more. To the degree we relied on metacognition and casual observation to inform our self-conception, the opportunistic nature of our cognitive capacities remained all but invisible, and we could think ourselves the very rule, stamped not just in the physical image of God, but in His cognitive image as well. Like God, we had no back side, nothing to render us naturally contingent. We were the motionless centre of the universe: the earth, in a very real sense, was simply along for our ride. The fact of our natural, evolutionarily adventitious componency escaped us because the intuition of componency requires causal information, and metacognition offered us none.

Science, in other words, was set against our bottomless metacognitive intuitions from the beginning, bound to show that our traditional understanding of our cognition, like our traditional understanding of our planet and our biology, was little more than a trick of our informatic perspective.

.

Notes

[1] I mean this in the umbrella sense of the term, which includes normative, teleological, and semantic phenomena.

[2] Of course, there are other apparent intentional properties of cognition that seem to require explanation as well, including aboutness, so-called ‘opacity,’ productivity, and systematicity.

[3] For those interested in a more detailed overview, I highly recommend Chapter 2 of Anthony Chemero’s Radical Embodied Cognitive Science.

[4] This is one reason why I far prefer Anthony Chemero’s Radical Embodied Cognition (2009), which, even though it is argued in a far more desultory fashion, seems to be far more honest to the strengths and weaknesses of the recent ‘enactive turn.’

[5] One need only consider the perpetual inability of its advocates to account for illusion. In their consideration of the Müller-Lyer Illusion, for instance, Hutto and Myin argue that perceptual illusions “depend for their very existence on high-level interpretative capacities being in play” (125), that illusion is quite literally something only humans suffer because only humans possess the linguistic capacity to interpret them as such. Without the capacity to conceptualize the disjunction between what we perceive and the way the world is there are no ‘perceptual illusions.’ In other words, even though it remains a fact that you perceive two lines of equal length as possessing different lengths in the Müller-Lyer Illusion, the ‘illusion’ is just a product of your ability to judge it so. Since the representationalist is interested in the abductive warrant provided by the fact of the mistaken perception, it becomes difficult to see the relevance of the judgment. If the only way the enactivist can deal with the problem of illusion is by arguing illusions are linguistic constructs, then they have a hard row to hoe indeed!

[6] Which given the subject matter, perhaps isn’t so ‘crazy’ after all, if Eric Schwitzgebel is to be believed!

[7] Hutto and Myin have identified the proper locus of the problem, but since they ultimately want to redeem intentionality and phenomenality, their diagnosis turns on the way the ‘theoretical attitude’—or the ‘descriptive encounter’ favoured by the ‘Intellectualist’—frames the problem in terms of two distinct relata. Thus their theoretical recommendation that we resist this one particular theoretical move and focus instead on the implicit identity belonging to their theoretical account of embodied activity.

[8] See “THE Something about Mary” for a detailed consideration of this specific problem.

[9] Without, it is important to note, solving the empirical question of what consciousness is. What BBT offers, rather, is a naturalistic account of why phenomenality and intentionality baffle us so.

[10] See “The Introspective Peepshow: Consciousness and the Dreaded Unknown Unknowns” for a more thorough account.

[11] Note also the way this clears away the ontological fog of Gibson’s ‘affordances’: our dynamic componency, the ways we are caught up in the stochastic machinery of nature, is as much an ‘objective’ feature of the world as anything else.

[12] See “Cognition Obscura” for a comprehensive overview.

[13] We understand ourselves via heuristics that simply do not admit the kind of information provided by a great number of neuropathologies. Dissociations such as pain asymbolia, for example, provide dramatic evidence of how profound our neglect-driven intuition of phenomenal simplicity runs.

[14] “Investigating Neural Representations: The Tale of Place Cells,” presented at the Rotman Institute of Philosophy, Sept. 19th, 2013.

[15] See “Real Patterns.”

[16] This is perhaps nowhere more apparent than in Dennett’s critical discussion of Brandom’s Making it Explicit, “The Evolution of [a] Why.”

[17] ‘Nibbling’ is what he calls his strategy in his latest book, where we “simply postpone the worrisome question of what really has a mind, about what the proper domain of the intentional stance is” and simply explore the power of this ‘good trick’ (Intuition Pumps, 79). Since he can’t definitively answer either question, the suspicion is that he’s simply attempting to recast a theoretical failure as a methodological success.

[18] See “Doing Without Representing?”

[19] In fact, it provides the resources to answer the puzzling question of why these ‘centrisms’ should constitute our default understanding in the first place.