Incomplete Cognition: An Eliminativist Reading of Terrence Deacon’s Incomplete Nature

by rsbakker

Incomplete Nature: How Mind Emerged from Matter

Goal seeking, willing, rule-following, knowing, desiring—these are just some of the things we do that we cannot make sense of in causal terms. We cite intentional phenomena all the time, attributing them the kind of causal efficacy we attribute to the more mundane elements of nature. The problem, as Terrence Deacon frames it, is that whenever we attempt to explain these explainers, we find nothing, only absence and perplexity.

“The inability to integrate these many species of absence-based causality into our scientific methodologies has not just seriously handicapped us, it has effectively left a vast fraction of the world orphaned from theories that are presumed to apply to everything. The very care that has been necessary to systematically exclude these sorts of explanations from undermining our causal analyses of physical, chemical, and biological phenomena has also stymied our efforts to penetrate beyond the descriptive surface of the phenomena of life and mind. Indeed, what might be described as the two most challenging scientific mysteries of the age—both are held hostage by this presumed incompatibility.” Incomplete Nature, 12

The question, of course, is whether this incompatibility is the product of our cognitive constitution or the product of some as yet undiscovered twist in nature. Deacon argues the latter. Incomplete Nature is a magisterial attempt to complete nature, to literally rewrite physics in a way that seems to make room for goal seeking, willing, rule-following, knowing, desiring, and so on—in other words, to provide a naturalistic way to make sense of absences that cause. He wants to show how all these things are real.

My own project argues the former, that the notion of ‘absences that cause’ is actually an artifact of neglect. ‘We’ are an astronomically complicated subsystem embedded in the astronomically complicated supersystem that we call ‘nature,’ in such a way that we cannot intuitively cognize ourselves as natural.

The Blind Brain Theory claims to provide the world’s first genuine naturalization of intentionality—a parsimonious, comprehensive way to explain centuries of confusion away. What Intentionalists like Deacon think they are describing are actually twists on a family of metacognitive illusions. Crudely put, since no cognitive capacity could pluck ‘accuracy’ of any kind from the supercomplicated muck of the brain, our metacognitive system confabulates. It’s not that some (yet to be empirically determined) systematicity isn’t there: it’s that the functions discharged via our conscious access to that systematicity are compressed, formatted, and truncated. Metacognition neglects these confounds, and we begin making theoretical inferences assuming the sufficiency of compressed, formatted, and truncated information. Among many things, BBT actually predicts a discursive field clustered about families of metacognitive intuitions, but otherwise chronically incapable of resolving among their claims. When an Intentionalist gives you an account of the ‘game of giving and asking for reasons,’ say, you need only ask them why anyone should subscribe to an ontologization (whether virtual, quasi-transcendental, transcendental, or otherwise) on the basis of almost certainly unreliable metacognitive hunches.

The key conceptual distinction in BBT is that between what I’ve been calling ‘lateral sensitivity’ and ‘medial neglect.’ Lateral sensitivity refers to the brain’s capacity to be ‘imprinted’ by other systems, to be ‘pushed’ in ways that allow it to push back. Since behavioural interventions, or ‘pushing-back,’ require some kind of systematic relation to the system or systems to be pushed, lateral sensitivity requires being pushed by the right things in the right way. Thus the Inverse Problem and the Bayesian nature of the human brain. The Inverse Problem pertains to the difficulty of inferring the structure/dynamics of some distal system (an avalanche or a wolf, say) via the structure/dynamics of some proximal system (ambient sound or light, say) that reliably co-varies with that distal system. The difficulty is typically described in terms of ambiguity: since any number of distal systems could cause the structure/dynamics of the proximal system, the brain needs some way of allowing the actual distal system to push through the proximal system, if it is to have any hope of pushing back. Unless it becomes a reliable component of its environment, it cannot reliably make components of its environments. This is an important image to keep in mind: that of the larger brain-environment system, the way the brain is adapted to be pushed, or transformed into a component of larger environmental mechanisms, so as to push back, to ‘componentialize’ environmental mechanisms. Quite simply, we have evolved to be tyrannized by our environment in a manner that enables us to tyrannize our environment.

Lateral sensitivity refers to this ‘tyranny enabling tyranny,’ the brain’s ability to systematically covary with its environment in behaviourally advantageous ways. A system that solves the Inverse Problem possesses a high degree of reliable covariational complexity. As it turns out, the mechanical complexity required to do this is nothing short of mind-boggling. And as we shall see, this fact possesses some rather enormous consequences. Up to this point, I’ve really only provided an alternate description of the sensorimotor loop; the theoretical dividends begin piling up once we consider lateral sensitivity in concert with medial neglect.

The machinery of lateral sensitivity is so complicated that it handily transcends its own ‘sensitivity threshold.’ This means the brain possesses a profound insensitivity to itself. This might sound daffy, given that the brain simply is a supercomplicated network of mutual sensitivities, but this is actually where the nub of cognition as a distinct biological process is laid bare. Unlike the dedicated sensitivity that underwrites mechanism generally, the sensitivity at issue here involves what might be called the systematic covariation for behaviour. Any process that systematically covaries for behaviour is a properly cognitive process. So the above could be amended to, ‘the brain possesses a profound cognitive insensitivity to itself.’ Medial neglect is this profound cognitive insensitivity.

The advantage of cognition is behaviour, the push-back. The efficacy of this behavioural push-back depends on the sensory push, which is to say, lateral sensitivity. Innumerable behavioural problems, it turns out, require that we be pushed by our pushing back: that our future behaviour (push-back) be informed (pushed) by our ongoing behaviour (pushing-back). Behavioural efficacy is a function of behavioural versatility is a function of lateral sensitivity, which is to say, the capacity to systematically covary with the environment. Medial neglect, therefore, constitutes a critical limit on behavioural efficacy: those ‘problem ecologies’ requiring sensitivity to the neurobiological apparatus of cognition to be solved effectively lie outside the capacity of the system to tackle. We are, quite literally, the ‘elephant in the room,’ a supercomplicated mechanism sensitive to most everything relevant to problem-solving in its environment except itself.

Mechanical allo-sensitivity entails mechanical auto-insensitivity, or auto-neglect. A crucial consequence of this is that efficacious systematic covariation requires unidirectional interaction, or that sensing be ‘passive.’ The degree to which the mechanical activity of tracking actually impacts the system to be tracked is the degree to which that system cannot be reliably tracked. Anticipation via systematic covariation is impossible if the mechanics of the anticipatory system impinge on the mechanics of the system to be anticipated. The insensitivity of the anticipatory system to its own activity, or medial neglect, perforce means insensitivity to systems directly mechanically entangled in that activity. Only ‘passive entanglement’ will do. This explains why so-called ‘observer effects’ confound our ability to predict the behaviour of other systems.

So the stage is set. The brain quite simply cannot cognize itself (or other brains) in the same high-dimensional way it cognizes its environments. (It would be hard to imagine any evolved metacognitive capacity that could achieve such a thing, in fact). It is simply too complex and too entangled. As a result, low-dimensional, special purpose heuristics—fast and frugal kluges—are its only recourse.

The big question I keep asking is, How could it be any other way? Given the problems of complexity and complicity, given the radical nature of the cognitive bottleneck—just how little information is available for conscious, serial processing—how could any evolved metacognitive capacity whatsoever come close to apprehending the functional truth of anything ‘inner’? If you are an Intentionalist, say, you need to explain how the phenomena you’re convinced you intuit are free of perspectival illusions, or conversely, how your metacognitive faculties have overcome the problems posed by complexity and complicity.

On BBT, the brain possesses at least two profoundly different covariational regimes, one integrated, problem-general, and high-dimensional, mediating our engagement in the natural world, the other fractious, problem-specific and low-dimensional, mediating our engagements with ourselves and others (who are also complex and complicit), and thereby our engagement in the natural world. The twist lies in medial neglect, the fact that the latter fractious, problem-specific, and low-dimensional covariational regime is utterly insensitive to its fractious, problem-specific, and low-dimensional nature. Human metacognition is almost entirely blind to the structure of human cognition. This is why we require cognitive science: reflection on our cognitive capacities tells us little or nothing about those capacities, reflection included. Since we have no way of intuiting the insufficiency of these intuitions, we assume they’re sufficient.

We are now in a position to clearly delineate Deacon’s ‘fraction,’ what makes it vast, and why it has been perennially orphaned. Historically, natural science has been concerned with the ‘lateral problem-ecologies,’ with explicating the structure and dynamics of relatively simple systems possessing functional independence. Any problem ecology requiring the mechanistic solution of brains lay outside its purview. Only recently has it developed the capacity to tackle ‘medial problem-ecologies,’ the structure and dynamics of astronomically complex systems possessing no real functional independence. For the first time humanity finds itself confronted with integrated, high-dimensional explications of what it is. The ruckus, of course, is all about how to square these explications with our medial traditions and intuitions. All the so-called ‘hard problems’ turn on our apparent inability to naturalistically find, let alone explain, the phenomena corresponding to our intuitive, metacognitive understanding of the medial.

Why do our integrated, high-dimensional explications of the medial congenitally ‘leave out’ the phenomena belonging to the medial-as-metacognized? Because metacognitive phenomena like goal seeking, willing, rule-following, knowing, and desiring only ‘exist,’ insofar as they exist at all, in specialized problem-solving contexts. ‘Goal seeking’ is something we all do all the time. A friend has an untoward reaction to a comment of ours, so we ask ourselves, in good conscience, ‘What was I after?’ and the process of trying to determine our goal given whatever information we happen to have begins. Despite complexity and complicity, this problem is entirely soluble because we have evolved the heuristic machinery required: we can come to realize that our overture was actually meant to belittle. Likewise, the philosopher asks, ‘What is goal-seeking?’ and the process of trying to determine the nature of goal-seeking given whatever information he happens to have begins. But the problem proves insoluble, not surprisingly, given that the philosopher almost certainly lacks the requisite heuristic machinery. The capacity to solve for goal-seeking qua goal-seeking is just not something our ancestors evolved.

Deacon’s entire problematic turns on the equivocation of the first-order and second-order uses of intentional terms, on the presumption that the ‘goal-seeking’ we metacognize simply has to be the ‘goal-seeking’ referenced in first-order contexts—on the presumption, in other words, of metacognitive adequacy, which is to say something we now know to be false as a matter of empirical fact. For all its grand sweep, for all its lucid recapitulation and provocative conjecture, Incomplete Nature is itself shockingly incomplete. Nowhere does he consider the possibility that the only ‘goal-seeking phenomenon’ missing, the only absence to be explained, is this latter, philosophical goal-seeking.

At no point in the work does he reference, let alone account for, the role metacognition or introspection plays in our attempt to grapple with the incompatibility of natural and intentional phenomena. He simply declares “the obvious inversion of causal logic that distinguishes them” (139), without genuinely considering where that ‘inversion’ occurs. Because this just is the nub of the issue between the emergentist and the eliminativist: whether his ‘obvious inversion’ belongs to the systems observed or to the systems observing. As Deacon writes:

“There is no use denying there is a fundamental causal difference between these domains that must be bridged in any comprehensive theory of causality. The challenge of explaining why such a seeming reversal takes place, and exactly how it does so, must ultimately be faced. At some point in this hierarchy, the causal dynamics of teleological processes do indeed emerge from simpler blind mechanistic dynamics, but we are merely restating this bald fact unless we can identify exactly how this causal about-face is accomplished. We need to stop trying to eliminate homunculi, and to face up to the challenge of constructing teleological properties—information, function, aboutness, end-directedness, self, even conscious experience—from unambiguously non-teleological starting points.” 140

But why do we need to stop ‘trying to eliminate’ homunculi? We know that philosophical reflection on the nature of cognition is woefully unreliable. We know that intentional concepts and phenomena are the stock-in-trade of philosophical reflection. We know that scientific inquiry generally delegitimizes our prescientific discourses. So why shouldn’t we assume that the matter of intentionality amounts to more of the same?

Deacon never says. He acknowledges “there cannot be a literal ends-causing-the-means process involved” (109) when it comes to intentional phenomena. As he writes:

“Of course, time is neither stopped nor running backwards in any of these processes. Thermodynamic processes are proceeding uninterrupted. Future possible states are not directly causing present events to occur.” 109-110

He acknowledges, in other words, that this ‘inversion of causality’ is apparent only. He acknowledges, in other words, that metacognition is getting things wrong, just not entirely. So what recommends his project of ontologically meeting this appearance halfway over the project of doing away with it altogether? The project of rewriting nature, after all, is far more extravagant than the project of theorizing metacognitive shortcomings.

Deacon’s failure to account for observation-dependent interpretations of intentionality is more than suspiciously convenient; it actually renders the whole of Incomplete Nature an exercise in begging the question. He spends a tremendous amount of time and no little ingenuity in describing the way ‘teleodynamic systems,’ as the result of increasingly recursive complexity, emerge from ‘morphodynamic systems,’ which in turn emerge from standard thermodynamic systems. Where thermodynamic systems exhibit straightforward entropy, morphodynamic systems, such as crystal formation, exhibit the tendency to become more ordered. Building on morphodynamics, teleodynamic systems then exhibit the kinds of properties we take to be intentional. A point of pride for Deacon is the way his elaborations turn, as he mentions in the extended passage quoted above, on ‘unambiguously non-teleological starting points.’

He sums up this patient process of layering causal complexities with the postulation of what he calls an autogen, “a form of self-generating, self-repairing, self-replicating system that is constituted by reciprocal morphodynamic processes” (547-8), arguably his most ingenious innovation. He then moves to conclude:

“So even these simple molecular systems have crossed a threshold in which we can say that a very basic form of value has emerged, because we can describe each of the component autogenic processes as there for the sake of autogen integrity, or for the maintenance of that particular form of autogenicity. Likewise, we can describe different features of the surrounding molecular environment as ‘beneficial’ or ‘harmful’ in the same sense that we would apply these assessments to microorganisms. More important, these are not merely glosses provided by a human observer, but intrinsic and functionally relevant features of the consequence-organized nature of the autogen itself.” 322

And the reader is once again left with the question of why. We know that the brain possesses suites of heuristic problem solvers geared to economize by exploiting various features of the environment. The obvious question becomes: How is it that any of the processes he describes do anything more than schematize the kinds of features that trigger the brain to swap out its causal cognitive systems for its intentional cognitive systems?

Time and again, one finds Deacon explicitly acknowledging the importance of the observer, and time and again one finds him dismissing that importance without a lick of argumentation—the argumentation his entire account hangs on. One can even grant him his morphodynamic and teleodynamic ‘phase transitions’ and still plausibly insist that all he’s managed to provide is a detailed description of the kinds of complex mechanical processes prone to trigger our intentional heuristics. After all, if it is the case that the future does not cause the past, then ‘end directedness,’ the ‘obvious inversion of causality,’ actually isn’t an inversion at all. The fact is Deacon’s own account of constraints and the role they play in morphodynamics and teleodynamics is entirely amenable to mechanical understanding. He continually relies on disposition talk. Even his metaphors, like the ‘negentropic ratchet’ (317), tend to be mechanical. The autogen is quite clearly a machine, one that automatically expresses the constraints that make it possible. The fact that these component constraints result in a system that behaves in ways far different than mundane thermodynamic systems speaks to nothing more extraordinary than mechanical emergence, the fact that whole mechanisms do things that their components could not (See Craver, 2007, pp. 211-17 for a consideration of the distinction between mechanical and spooky emergence). Likewise, for all the ink he spills regarding the holistic nature of teleodynamic systems, he does an excellent job explaining them in terms of their contributing components!

In the end, all Deacon really has is an analogy between the ‘intentional absence,’ our empirical inability to find intentional phenomena, and the kind of absence he attributes to constraints. Since systematicity of any kind requires constraints, defining constraints, as Deacon does, in terms of what cannot happen—in terms of what is absent—provides him the rhetorical license he needs to speak of ‘absential causes’ at pretty much any juncture. Since he has already defined intentional phenomena as ‘absential causes,’ it becomes a very easy thing indeed to lead the reader over the ‘epistemic cut’ and claim that he has discovered the basis of the intentional as it exists in nature, as opposed to an interpretation of those systems inclined to trigger intentional cognition in the human brain. Constraints can be understood in absential terms. Intentional phenomena can only be understood in absential terms. Since the reader, thanks to medial neglect, has no inkling whatsoever of the fractionate and specialized nature of intentional cognition, all Deacon needs to do is comb their existing intuitions in his direction. Constraints are objective, therefore intentionality is objective.

Not surprisingly, Deacon falls far short of ‘naturalizing intentionality.’ Ultimately, he provides something very similar to what Evan Thompson delivers in his equally impressive (and unconvincing) Mind in Life: a more complicated, attenuated picture of nature that seems marginally less antithetical to intentionality. Where Thompson’s “aim is not to close the explanatory gap in a reductive sense, but rather to enlarge and enrich the philosophical and scientific resources we have for addressing the gap” (x), Deacon’s is to “demonstrate how a form of causality dependent on specifically absent features and unrealized potentials can be compatible with our best science” (16), the idea being that such an absential understanding will pave the way for some kind of thoroughgoing naturalization of intentionality—as metacognized—in the future.

But such a naturalization can only happen if our theoretical metacognitive intuitions regarding intentionality get intentionality right in general, as opposed to right enough for this or that. And our metacognitive intuitions regarding intentionality can only get intentionality right in general if our brain has somehow evolved the capacity to overcome medial neglect. And the possibility of this, given the problems of complexity and complicity, seems very hard to fathom.

The fact is BBT provides a very plausible and parsimonious observer-dependent explanation for why metacognition attributes so many peculiar properties to the medial processes. The human brain, as the frame of cognition, simply cannot cognize itself the way it does other systems. It is, as a matter of empirical necessity, not simply blind to its own mechanics, but blind to this blindness. It suffers medial neglect. Unable to access and cognize its origins, and unable to cognize this inability, it assumes that it accesses all there is to access—it confuses itself for something bottomless, an impossible exception to physics.

So when Deacon writes:

“These phenomena not only appear to arise without antecedents, they appear to be defined with respect to something nonexistent. It seems that we must explain the uncaused appearance of phenomena whose causal powers derive from something nonexistent! It should be no surprise that this most familiar and commonplace feature of our existence poses a conundrum for science.” 39

we need to take the truly holistic view that Deacon himself consistently fails to take. We need to see this very real problem in terms of one set of natural systems—namely, us—engaging the set of all natural systems, as a kind of linkage between being pushed and pushing back.

On BBT, Deacon’s ‘obvious inversion of causality’ is merely an illusory artifact of constraints pertaining to the human brain’s ability to cognize itself the way it cognizes its environments. They appear causally inverted simply because no information pertaining to their causal provenance is available to deliberative metacognition. Rules constrain us in some mysterious, orthogonal way. Goals somehow constrain us from the future. Will somehow constrains itself! Desires, like knowledge, are somehow constrained by their objects, even when they are nowhere to be seen. These apparently causally inverted phenomena vanish whenever we search for their origins because they quite simply do not exist in the high-dimensional way things in our environments exist. They baffle scientific reason because the actual neuromechanical heuristics employed are adapted to solve problems in the absence of detailed causal information, and because conscious metacognition, blind to the rank insufficiency of the information available for deliberative problem-solving, assumes that it possesses all the information it needs. Philosophical reflection is a cultural achievement, after all, an exaption of existing, more specialized cognitive resources; it seems quite implausible to assume the brain would possess the capacity to vet the relative sufficiency of information utilized in ways possessing no evolutionary provenance.

We are causally embedded in our environments in such a way that we cannot intuit ourselves as so embedded, and so intuit ourselves otherwise, as goal seeking, willing, rule-following, knowing, desiring, and so on—in ways that systematically neglect the actual, causal relations involved. Is it really just a coincidence that all these phenomena just happen to belong to the ‘medial,’ which is to say, the machinery responsible for cognition? Is it really just a coincidence that all these phenomena exhibit a profound incompatibility with causal explanation? Is it really just a coincidence that all our second-order interpretations of these terms are chronically underdetermined (a common indicator of insufficient information), even though they function quite well when used in everyday, first-order, interpersonal contexts?

Not at all. As I’ve attempted to show in a variety of ways over the past couple of years, a great number of traditional conundrums can be resolved via BBT. All the old problems fall away once we realize that the medial—or ‘first person’—is simply what the third person looks like absent the capacity to laterally solve the third person. The time has come to leave them behind and begin the hard work of discovering what new conundrums await.