Three Pound Brain

No bells, just whistling in the dark…

Month: March, 2014

The Eliminativistic Implicit (I): The Necker Cube of Everyday and Scientific Explanation

by rsbakker

Go back to what seems the most important bit, then ask the Intentionalist this question: What makes you think you have conscious access to the information you need? They’ll twist and turn, attempt to reverse the charges, but if you hold them to this question, it should be a show-stopper.

What follows, I fear, is far longer-winded.

Intentionalists, I’ve found, generally advert to one of two strategies when dismissing eliminativism. The first is founded on what might be called the ‘Preposterous Complaint,’ the idea that eliminativism simply contradicts too many assumptions and intuitions to be considered plausible. As Uriah Kriegel puts it, “if eliminativism cannot be acceptable unless a relatively radical interpretation of cognitive science is adopted, then eliminativism is not in good shape” (“Non-phenomenal Intentionality,” 18). But where this criticism would be damning in other, more established sciences, it amounts to little more than an argument ad populum in the case of cognitive science, which as yet lacks any consensual definition of its domain. The very naturalistic inscrutability behind the perpetual controversy also motivates the Eliminativist’s radical interpretation. The idea that something very basic is wrong with our approach to questions of experience and intentionality is by no means a ‘preposterous’ one. You could say the reality and nature of intentionality is the question. The Preposterous Complaint, in other words, doesn’t so much impugn the position as insinuate career suicide.

The second turns on what might be called the ‘Presupposition Complaint,’ the idea that eliminativism implicitly presupposes the very intentionality that it claims to undermine. The tactic generally consists of scanning the eliminativist’s claims, picking out various intentional concepts, then claiming that use of such concepts implicitly affirms the existence of intentionality. The Eliminativist, in other words, commits ‘cognitive suicide’ (as Lycan, 2005, calls it). Insofar as the use of intentional concepts is unavoidable, and insofar as the use of intentional concepts implicitly affirms the existence of intentionality, intentionality is ineliminable. The Eliminativist is thus caught in an obvious contradiction, explicitly asserting not-A on the one hand, while implicitly asserting A on the other.

On BBT, intentionality as traditionally theorized, far from simply ‘making explicit’ what is ‘implicitly the case,’ is actually a kind of conceptual comedy of errors turning on heuristic misapplication and metacognitive neglect. Such appeals to ‘implicit intentionality,’ in other words, are appeals to the very thing BBT denies. They assume the sufficiency of the very metacognitive intuitions that positions such as my own call into question. The Intentionalist charge of performative contradiction simply begs the question. It amounts to nothing more than the bald assertion that intentionality cannot be eliminated because intentionality is ineliminable.

The ‘Presupposition Complaint’ is pretty clearly empty as an argumentative strategy. In dialogical terms, however, I think it remains the single biggest obstacle to the rational prosecution of the Intentionalist/Eliminativist debate—if only because of the way it allows so many theorists to summarily dismiss the threat of Eliminativism. Despite its circularity, the Presupposition Complaint remains the most persistent objection I encounter—in fact, many critics persist in making it even after its vicious circularity has been made clear. And this has led me to realize the almost spectacular importance the notion of the implicit plays in all such debates. For many thinkers, the intentional nature of the implicit is simply self-evident, somehow obvious to intuition. This is certainly how it struck me before I began asking the kinds of questions motivating the present piece. After all, what else could the implicit be, if not the intentional ‘ground’ of our intentional ‘practices’?

In what follows, I hope to show how this characterization of the implicit, far from obvious, actually depends, not only on ignorance, but on a profound ignorance of our ignorance. On the account I want to give here, the implicit, far from naming some spooky ‘infraconceptual’ or ‘transcendental’ before of thought and cognition, simply refers to what we know is actually occluded from metacognitive appraisals of experience: namely, nature as described by science. To frame the issue in terms of a single question, what I want to ask in this post and its sequels is, What warrants the Intentionalist’s claims regarding implicit normativity, say, over an Eliminativist’s claims of implicit mechanicity?

So what is the implicit? Given the crucial role the concept plays in a variety of discourses, it’s actually remarkable how few theorists have bothered with the question of making the implicit qua implicit explicit (Stephen Turner and Eugene Gendlin are signature exceptions in this regard, of course). Etymologically, ‘implicit’ derives from the Latin, implicitus, the participle of implico, which means ‘to involve’ or ‘to entangle,’ meanings that seem to bear more on implicit’s perhaps equally mysterious relatives, ‘imply’ or ‘implicate.’ According to Wiktionary, uses that connote ‘entangled’ are now obsolete. Implicit, rather, is generally taken to mean, 1) “Implied directly, without being directly expressed,” 2) “Contained in the essential nature of something but not openly shown,” and 3) “Having no reservations or doubts; unquestioning or unconditional; usually said of faith or trust.” Implicit, in other words, is generally taken to mean unspoken, intrinsic, and unquestioned.

Prima facie, at least, these three senses are clearly related. Unless spoken about, the implicit cannot be questioned, and so must remain an intrinsic feature of our performances. The ‘implicit,’ in other words, refers to something operative within us that nonetheless remains hidden from our capacity to consciously report. Logical or material inferential implications, for instance, guide subsequent transitions within discourse, whether we are conscious of them or not. The same might be said of ‘emotional implications,’ or ‘political implications,’ or so on.

Let’s call this the Hidden Constraint Model of the implicit, the notion that something outside conscious experience somehow ‘contains’ organizing principles constraining conscious experience. The two central claims of the model can be recapitulated as:

1) The implicit lies in what conscious cognition neglects. The implicit is inscrutable.

2) The implicit somehow constrains conscious cognition. The implicit is effective.

From inscrutability and effectiveness, we can infer at least two additional features pertaining to the implicit:

3) The effective constraints on any given moment of conscious cognition require a subsequent moment of conscious cognition to be made explicit. We can only isolate the biases specific to a claim subsequent to making that claim. The implicit, in other words, is only retrospectively accessible.

4) Effective constraints can only be consciously cognized indirectly via their effects on conscious experience. Referencing, say, the ‘implicit norms governing interpersonal conduct’ involves referencing something experienced only in effect. ‘Norms’ are not part of the catalogue of nature—at least as anything recognizable as such. The implicit, in other words, is only inferentially accessible.

So consider, as a test case, Hume’s famous meditations on causation and induction. In An Enquiry Concerning Human Understanding, Hume points out how reason, no matter how cunning, is powerless when it comes to matters of fact. Short of actual observation, we have no way of divining the causal connections between events. When we turn to experience, however, all we ever observe is the conjunction of events. So what brings about our assumptive sense of efficacy, our sense of causal power? Why should repeating the serial presentation of two phenomena produce the ‘feeling,’ as Hume terms it, that the first somehow determines the second? Hume’s ‘skeptical solution,’ of course, attributes the feeling to mere ‘custom or habit.’ As he writes, “[t]he appearance of a cause always conveys the mind, by a customary transition, [to] the idea of an effect” (ECHU, 51, italics my own).

All four of the features enumerated above are clearly visible in Hume’s account. Hume makes no dispute of the fact that the repetition of successive events somehow produces the assumption of efficacy. “On this,” he writes, “are founded all our reasonings concerning matters of fact or existence” (51). Exposure to such repetitions fundamentally constrains our understanding of subsequent exposures, to the point where we cannot observe the one without assuming the other—to the point where the bulk of scientific knowledge is raised upon it. Efficacy is effective—to say the least!

But there’s nothing available to conscious cognition—nothing observable in these successive events—over and above their conjunction. “One event follows another,” Hume writes; “but we never can observe any tie between them. They seem conjoined, but never connected” (49). Efficacy, in other words, is inscrutable as well.

So then what explains our intuition of efficacy? The best we can do, it seems, is to pause and reflect upon the problem (as Hume does), to posit some X (as Hume does) reasoning from what information we can access. Efficacy, in other words, is only retrospectively and inferentially accessible.

We typically explain phenomena by plugging them into larger functional economies, by comprehending how their precursors constrain them and how they constrain their successors in turn. This, of course, is what made Hume’s discovery—that efficacy is inscrutable—so alarming. When it comes to environmental inquiries we can always assay more information via secondary investigation and instrumentation. As a result, we can generally solve for precursors in our environments. When it comes to metacognitive inquiries such as Hume’s, however, we very quickly stumble into our own incapacity. “And what stronger instance,” Hume asks, “can be produced of the surprising ignorance and weakness of the understanding, than the present?” (51). Efficacy, the very thing that binds phenomena to their precursors, is itself without precursors.

Not surprisingly, the comprehension of cognitive phenomena (such as efficacy) without apparent precursors poses a special kind of problem. Given efficacy, we can comprehend environmental nature. We simply revisit the phenomena and infer, over and over, accumulating the information we need to arbitrate between different posits. So how, then, are we supposed to comprehend efficacy? The empirical door is nailed shut. No matter how often we revisit and infer, we simply cannot accumulate the data we need to arbitrate between our various posits. Above, we see Hume rooting around with questions (our primary tool for making ignorance visible) and finding no trace of what grounds his intuitions of empirical efficacy. Thus the apparent dilemma: Either we acknowledge that we simply cannot understand these intuitions, “that we have no idea of connexion or power at all, and that these words are absolutely without any meaning” (49), or we elaborate some kind of theoretical precursor, some fund of hidden constraint, that generates, at the very least, the semblance of knowledge. We posit some X that ‘reveals’ or ‘expresses’ or ‘makes explicit’ the hidden constraint at issue.

These ‘X posits’ have been the bread and butter of philosophy for some time now. Given Hume’s example it’s easy to see why: the structure and dynamics of cognition, unlike the structure and dynamics of our environment, do not allow for the accumulation of data. The myriad observational opportunities provided by environmental phenomena simply do not exist for phenomena like efficacy. Since individual (and therefore idiosyncratic) metacognitive intuitions are all we have to go on, our makings explicit are pretty much doomed to remain perpetually underdetermined—to be ‘merely philosophical.’

I take this as uncontroversial. What makes philosophy philosophy as opposed to a science is its perennial inability to arbitrate between incompatible theoretical claims. This perennial inability to arbitrate between incompatible theoretical claims, like the temporary inability to arbitrate between incompatible theoretical claims in the sciences, is in some important respect an artifact of insufficient information. But where the sciences generally possess the resources to accumulate the information required, philosophy does not. Aside from metacognition or ‘theoretical reflection,’ philosophy has precious little in the way of informational resources.

And yet we soldier on. The bulk of traditional philosophy relies on what might be called the Accessibility Conceit: the notion that, despite more than two thousand years of failure, retrospective (reflective, metacognitive) interrogations of our activities somehow access enough information pertaining to their ‘intrinsic character’ to make the inferential ‘expression’ of our implicit precursors a viable possibility. Hope, as they say, springs eternal. Rather than blame their discipline’s manifest institutional incapacity on some more basic metacognitive incapacity, philosophers generally blame the problem on the various conceptual apparatuses used. If they could only get their concepts right, the information is there for the taking. And so they tweak and they overturn, posit this precursor and that, and the parade of ‘makings explicit’ grows and grows and grows. In a very real sense, the Accessibility Conceit, the assumption that the tools and material required to cognize the implicit are available, is the core commitment of the traditional philosopher. Why show up for work, otherwise?

The question of comprehending conscious experience is the question of comprehending the constitutive and dynamic constraints on conscious experience. Since those constraints don’t appear within conscious experience, we pay certain people called ‘philosophers’ to advance speculative theories of their nature. We are a rather self-obsessed species, after all.

Advancing speculative hypotheses regarding each other’s implicit nature is something we do all the time. According to Robin Dunbar, some two thirds of human communication is devoted to gossip. We are continually replaying, revisiting—even our anticipations yoke the neural engines of memory. In fact, we continually interrogate our emotionally charged interactions, concocting rationales, searching for the springs of others’ actions, declaring things like ‘She’s just jealous,’ or ‘He’s on to you.’ There is, you might say, an ‘Everyday Implicit’ implicit in our everyday discourse.

As there has to be. Conscious experience may be ‘as wide as the sky,’ as Dickinson says, but it is little more than a peephole. Conscious experience, whatever it turns out to be, seems to be primarily adapted to deliberative behaviour in complex environments. Among other things, it operates as a training interface, where the deliberative repetition of actions can be committed to automatic systems. So perhaps it should come as no surprise that, like behaviour, it is largely serial. When peephole, serial access to a complex environment is all you have, the kind of retrospective inferential capacity possessed by humans becomes invaluable. Our ability to ‘make things explicit’ is pretty clearly a central evolutionary design feature of human consciousness.

In a fundamental sense, then, making-explicit is just what we humans do. It makes sense that with time, especially once literacy allowed for the compiling of questions—an inventory of ignorance, you might say—we would find certain humans attempting to make making explicit itself explicit. And since making each other explicit was something that we seemed to do with some degree of reliability, it makes sense that the difficulty of this new task should confound these inquirers. The Everyday Implicit was something they used with instinctive ease, reliably attributing all manner of folk-intentional properties to individuals all the time. And yet, whenever anyone attempted to make this Everyday Implicit explicit, they seemed to come up with something different.

No one could agree on any canonical explication. And yet, aside from the ancient skeptics, they all agreed on the possibility of such a canonical explication. They all hewed to the Accessibility Conceit. And since the skeptics’ mysterian posit was as underdetermined as any of their own claims, they were inclined to be skeptical of the skeptics. Otherwise, their Philosophical Implicit remained the only game in town when it came to things human and implicit. They need only look to the theologians for confirmation of their legitimacy. At least they placed their premises before their conclusions!

But things have changed. Over the past few decades, cognitive scientists have developed a number of ingenious experimental paradigms designed to reveal the implicit underbelly of what we think and do. In the now notorious Implicit Association Test, for instance, the time subjects require to pair concepts is thought to indicate the cognitive resources required, and thus provide an indirect measure of implicit attitudes. If it takes a white individual longer to pair stereotypically black names with positive attributes than it does stereotypically white names, this is presumed to evidence an ‘implicit bias’ against blacks. Actions, as the old proverb has it, speak louder than words. It does seem intuitive to suppose that the racially skewed effort involved in value identifications tokens some kind of bias. Versions of this paradigm continue to proliferate. Once the exclusive purview of philosophers, the implicit has now become the conceptual centerpiece of a vast empirical domain. Cognitive science has now revealed myriad processes of implicit learning, interpretation, evaluation, and even goal-setting. Taken together, these processes form what is generally referred to as System 1 cognition (see table below), an assemblage of specialized cognitive capacities—heuristics—adapted to the ‘quick and dirty’ solution of domain specific ‘problem ecologies’ (Chow, 2011; Todd and Gigerenzer, 2012), and which operate in stark contrast to what is called System 2 cognition, the slow, serial, and deliberate problem solving related to conscious access (defined in Dehaene’s operationalized sense of reportability)—what we take ourselves to be doing this very moment, in effect.

DUAL PROCESS THEORIES IN PSYCHOLOGY

System 1 Cognition (Implicit) | System 2 Cognition (Explicit)
Not conscious | Conscious
Not human specific | Human specific
Automatic | Deliberative
Fast | Slow
Parallel | Sequential
Effortless | Effortful
Intuitive | Reflective
Domain specific | Domain general
Pragmatic | Logical
Associative | Rulish
High capacity | Low capacity
Evolutionarily old | Evolutionarily young

* Adapted from Frankish and Evans, “The duality of mind: A historical perspective.”
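
The measurement logic behind the Implicit Association Test described above can be made concrete with a toy sketch. What follows is a simplified illustration only, not the published IAT scoring procedure (the actual algorithm is more involved); the sample reaction times and variable names are hypothetical.

```python
# Toy illustration of the IAT's measurement logic: slower responding on
# 'incongruent' pairings is taken as an indirect index of implicit attitudes.
# Simplified sketch with made-up data; not the published scoring algorithm.

from statistics import mean, stdev

# Reaction times in milliseconds for two sorting blocks.
congruent_rts = [612, 587, 640, 598, 575, 630]     # pairings the subject finds easy
incongruent_rts = [734, 702, 765, 690, 721, 748]   # pairings hypothesized to cost more effort

# Raw effect: the latency cost of the incongruent pairings.
raw_effect_ms = mean(incongruent_rts) - mean(congruent_rts)

# Crude standardization, so effects are comparable across subjects
# with different baseline speeds.
pooled_sd = stdev(congruent_rts + incongruent_rts)
standardized_effect = raw_effect_ms / pooled_sd

print(f"Latency difference: {raw_effect_ms:.0f} ms")
print(f"Standardized effect: {standardized_effect:.2f}")
```

On this logic, a reliably positive effect is read as evidence of an implicit association favouring the congruent pairing, whatever the subject explicitly avows.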

What are called ‘dual process’ or ‘dual system’ theories of cognition are essentially experimentally driven complications of the crude dichotomy between unconscious/implicit and conscious/explicit problem solving that has been pondered since ancient times. As granular as this emerging empirical picture remains, it already poses a grave threat to our traditional explicitations of the implicit. Our cognitive capacities, it turns out, are far more fractionate, contingent, and opaque than we ever imagined. Decisions can be tracked prior to a subject’s ability to report them (Haynes, 2008; or here). The feeling of willing can be readily tricked, and thus stands revealed as interpretative (Wegner, 2002; Pronin, 2009). Memory turns out to be fractionate and nonveridical (See Bechtel, 2008, for review). Moral argumentation is self-promotional rather than truth-seeking (Haidt, 2012). Various attitudes appear to be introspectively inaccessible (See Carruthers, 2011, for extensive review). The feeling of certainty has a dubious connection to rational warrant (Burton, 2008). The list of such findings continually grows, revealing an ‘implicit’ that consistently undermines and contradicts our traditional and intuitive self-image—what Sellars famously termed our Manifest Image.

As Frankish and Evans (2009) write in their historical perspective on dual system theories:

“The idea that we have ‘two minds’ only one of which corresponds to personal, volitional cognition, has also wide implications beyond cognitive science. The fact that much of our thought and behaviour is controlled by automatic, subpersonal, and inaccessible cognitive processes challenges our most fundamental and cherished notions about personal and legal responsibility. This has major ramifications for social sciences such as economics, sociology, and social policy. As implied by some contemporary researchers … dual process theory also has enormous implications for educational theory and practice. As the theory becomes better understood and more widely disseminated, its implications for many aspects of society and academia will need to be thoroughly explored. In terms of its wider significance, the story of dual-process theorizing is just beginning.” (25)

Given the rhetorical constraints imposed by their genre, this amounts to the strident claim that a genuine revolution in our understanding of the human is underway, one that could humble us out of existence. The simple question is, Where does that revolution end?

Consider what might be called the ‘Worst Case Scenario’ (WCS). What if it were the case that conscious experience and cognition have evolved in such a way that the higher dimensional, natural truth of the implicit utterly exceeds our capacity to effectively cognize conscious experience and cognition outside a narrow heuristic range? In other words, what if the philosophical Accessibility Conceit were almost entirely unwarranted, because metacognition, no matter how long it retrospects or how ingeniously it infers, only accesses information pertinent to a very narrow band of problem solving?

Now I have a number of arguments for why this is very likely the case, but in lieu of those arguments, it will serve to consider the eerie way our contemporary disarray regarding the implicit actually exemplifies WCS. People, of course, continue using the Everyday Implicit the way we always have. Philosophers continue positing their incompatible versions of the Philosophical Implicit the way they have for millennia. And scientists researching the Natural Implicit continue accumulating data, articulating a picture that seems to contradict more and more of our everyday and philosophical intuitions as it gains dimensionality.

Given WCS, we might expect that the increasing dimensionality of our understanding would leave the functionality of the Everyday Implicit intact, that it would continue to do what it evolved to do, simply because it functions the way it does regardless of what we learn. At the same time, however, we might expect that the growing fidelity of the Natural Implicit would slowly delegitimize our philosophical explications of that implicit, not only because those explications amount to little more than guesswork, but because of the fundamental incompatibility of the intentional and causal conceptual registers.

Precisely because the Everyday Implicit is so robustly functional, however, our ability to gerrymander experimental contexts around it should come as no surprise. And we should expect that those invested in the Accessibility Conceit would take the scientific operationalization of various intentional concepts as proof of 1) their objective existence, and 2) the fact that only more cognitive labour, conceptual, empirical, or both, is required.

If WCS were true, in other words, one might expect that cognitive sciences invested in the Everyday and Philosophical Implicit, like psychology, would find themselves inexorably gravitating about the Natural Implicit as its dimensionality increased. One might expect, in other words, that the Psychological Implicit would become a kind of decaying Necker Cube, an ‘unstable bi-stable concept,’ one that would alternately appear to correspond to the Everyday and Philosophical Implicit less and less, and to the Natural Implicit more and more.

Part Two considers this process in more detail.

Davidson’s Fork: An Eliminativist Radicalization of Radical Interpretation

by rsbakker

Davidson’s primary claim to philosophical fame lies in the replacement of the hoary question of meaning qua meaning with the more tractable question of what we need to know to understand others—the question of interpretation. Transforming the question of meaning into the question of interpretation forces considerations of meaning to account for the methodologies and kinds of evidence required to understand meaning. And this evidence happens to be empirical: the kinds of sounds actual speakers make in actual environments. Radical interpretation, you might say, is useful precisely because of the way the effortlessness of everyday interpretation obscures this fact. Starting from scratch allows our actual resources to come to the fore, as well as the need to continually test our formulations.

But it immediately confronts us with a conundrum. Radical Interpretation, as Davidson points out, requires some way of bootstrapping the interdependent roles played by belief and meaning. “Since we cannot hope to interpret linguistic activity without knowing what a speaker believes,” he writes, “and cannot found a theory of what he means on a prior discovery of his beliefs and intentions, I conclude that in interpreting utterances from scratch—in radical interpretation—we must somehow deliver simultaneously a theory of belief and a theory of meaning” (“Belief and the Basis of Meaning,” Inquiries into Truth and Interpretation, 144). The problem is that the interpretation of linguistic activity seems to require that we know what a speaker believes, knowledge that we can only secure if we already know what a speaker means.

The enormously influential solution Davidson gives the problem lies in the way certain primitive beliefs can be non-linguistically cognized on the assumption of the speaker’s rationality. If we assume that the speaker believes as he should, that he believes it is raining when it is raining, snowing when it is snowing, and so on, if we take interpretative Charity as our principle, we have a chance of gradually correlating various utterances with the various conditions that make them true, of constructing interpretations applicable in practice.

Since Charity seems to be a presupposition of any interpretation whatsoever, the question of what it consists in would seem to become a kind of transcendental battleground. This is what makes Davidson such an important fork in the philosophical road. If you think Charity involves something irreducibly normative, then you think Davidson has struck upon interpretation as the locus requiring theoretical intentional cognition to be solved, a truly transcendental domain. So Brandom, for instance, takes Dennett’s interpretation of Charity in the form of the Intentional Stance as the foundation of his grand normative metaphysics (see Making It Explicit, 55-62). What makes this such a slick move is the way it allows the Normativist to have things both ways, to remain an interpretativist (though Brandom does ultimately subscribe to original intentionality in Making It Explicit) about the reality of norms, while nevertheless treating norms as entirely real. Charity, in other words, provides a way to at once deny the natural reality of norms, while insisting they are real properties. Fictions possessing teeth.

If, on the other hand, you think Charity is not something irreducibly normative, then you think Davidson has struck upon interpretation as the locus where the glaring shortcomings of the transcendental are made plain. The problem of Radical Interpretation is the problem of interpreting behaviour. This is the whole point of going back to translation or interpretation in the first place: to start ‘from scratch,’ asking what, at minimum, is required for successful linguistic communication. By revealing behaviour as the primary source of information, Radical Interpretation shows how the problem is wholly empirical, how observation is all we have to go on. The second-order realm postulated by the Normativist simply does not exist, and as such, has nothing useful to offer the actual, empirical problem of translation.

As Stephen Turner writes:

“For Davidson, this whole machinery of a fixed set of normative practices revealed in the enthymemes of ordinary justificatory usage is simply unnecessary. We have no privileged access to meaning which we can then expressivistically articulate, because there is nothing like this—no massive structure of normative practices—to access. Instead we try to follow our fellow beings and their reasoning and acting, including their speaking: We make them intelligible. And we have a tool other than the normal machinery of predictive science that makes this possible: our own rationality.” “Davidson’s Normativity,” 364

Certainly various normative regimes/artifacts are useful (like Decision Theory), and others indispensable (like some formulation of predicate logic), but indispensability is not necessity. And ‘following,’ as Turner calls it, requires only imagination and empathy, not the possession of some kind of concept (which is somehow efficacious even though it doesn’t exist in nature). It is an empirical matter for cognitive science, not armchair theorizing, to decide.

Turner has spent decades developing what is far and away the most comprehensive critique of what he terms Normativism that I’ve ever encountered. His most recent book, Explaining the Normative, is essential reading for anyone attempting to gain perspective on Sellarsian attempts to recoup some essential domain for philosophy. For those interested in post-intentional philosophy more generally, and in ways to recharacterize various domains without ontologizing (or ‘quasi-ontologizing’) intentionality in the form of ‘practices,’ ‘language games,’ ‘games of giving and asking for reasons,’ and so on, Turner is the place to start.

I hope to post a review of Explaining the Normative and delve into Turner’s views in greater detail in the near future, but for the nonce, I want to stick with Davidson. Recently reading Turner’s account of Davidson’s attitude to intentionality (“Davidson’s Normativity”) was something of a revelation for me. For the first time, I think I can interpret Radical Interpretation in my own terms. Blind Brain Theory provides a way to read Davidson’s account as an early eliminativist approximation of a full-blown naturalistic theory of interpretation.

A quick way to grasp the kernel of Blind Brain Theory runs as follows (a more thorough pass can be found here). The cause of my belief in a blue sky outside today is, of course, the blue sky outside today. But it is not as though I experience the blue sky causing me to experience the blue sky—I simply experience the blue sky. The ‘externalist’ axis of causation—the medial, or enabling, axis—is entirely occluded. All the machinery responsible for conscious experience is neglected: causal provenance is a victim of what might be called medial neglect. Now the fact that we can metacognize experience means that we’ve evolved some kind of metacognitive capacity, machinery for solving problems that require the brain to interpret its own operations, problems such as, say, ‘holding your tongue at Thanksgiving dinner.’ Medial neglect, as one might imagine, imposes a profound constraint on metacognitive problem-solving: namely, that only those problems that can be solved absent causal information can be solved at all. Given the astronomical causal complexities underwriting experience, this makes metacognitive problem-solving heuristic in the extreme. Metacognition hangs sideways in a system it cannot possibly hope to cognize in anything remotely approaching a high-dimensional manner, the manner that our brain cognizes its environments more generally.

If one views philosophical reflection as an exaptation of our evolved metacognitive problem-solvers for the purposes of theorizing the nature of experience, one can assume it has inherited this constraint. If metacognition cannot access information regarding the actual processes responsible for experience for the solution of any problem, then neither can philosophical reflection on experience. And since nature is causal, this is tantamount to saying that, for the purposes of theoretical metacognition at least, experience has no nature to be solved. And this raises the question of just what—if anything—theoretical metacognition (philosophical reflection) is ‘solving.’

In essence, Blind Brain Theory provides an empirical account of the notorious intractability of those philosophical problems arising out of theoretical metacognition. Traditional philosophical reflection, it claims, trades in a variety of different metacognitive illusions—many of which can be diagnosed and explained away, given the conceptual resources Blind Brain Theory provides. On its terms, the traditional dichotomy between natural and intentional concepts/phenomena is entirely to be expected—in fact, we should expect sapient aliens possessing convergently evolved brains to suffer their own versions of the same dichotomy.

Intentionalism takes our blindness to first-person cognitive activity as a kind of ontological demarcation when it is just an artifact of the way the integrated, high-dimensional systems registering the external environment fracture into an assembly of low-dimensional hacks registering the ‘inner.’ There is no demarcation, no ‘subject/object’ dichotomy, just environmentally integrated systems that cannot automatically cognize themselves as such (and so resort to hacks). Neglect allows us to see this dichotomy as a metacognitive artifact, and to thus interpret the first-person in terms entirely continuous with the third-person. Blind Brain Theory, in other words, naturalizes the intentional. It ‘externalizes’ everything.

So how does this picture bear on the issue of Charity and Radical Interpretation? In numerous ways, I think, many of which Davidson would not approve, but which do have the virtue of making his central claims perhaps more naturalistically perspicuous.

From the standpoint of our brains linguistically solving other brains, we take it for granted that solving other organisms requires solving something in addition to the inorganic structure and dynamics of our environments. The behaviour taken as our evidential base in Radical Interpretation already requires a vast amount of machinery and work. So basically we’re talking about the machinery and work required over and above this baseline—the machinery and work required to make behaviour intentionally, as opposed to merely causally, intelligible.

The primary problem is that the activity of intentional interpretation, unlike the activity interpreted, almost escapes cognition altogether. To say, as so many philosophers so often do, that intentionality is ‘irreducible’ is to say that it is naturalistically occult. So any account of interpretation automatically trades in blind spots, in the concatenation of activities that we cannot cognize. In the terms of Blind Brain Theory, any account of interpretation has to come to grips with medial neglect.

From this perspective, one can see Davidson’s project as an attempt to bootstrap an account of interpretation that remains honest or sensitive to medial neglect, the fact that 1) our brain simply cannot immediately cognize itself as a brain, which is to say, in terms continuous with its cognition of nature; and 2) that our brain cannot immediately cognize this inability, and so assumes no such inability. Thanks to medial neglect, every act of interpretation is hopelessly obscure. And this places a profound constraint on our ability to theoretically explicate interpretation. Certainly we have a variety of medial posits drawn from the vocabulary of folk-psychology, but all of these are naturalistically obscure, and so function as unexplained explainers. So the challenge for Davidson, then, is to theorize interpretation in a manner that respects what can and cannot be cognized—to regiment our blind spots in a manner that generates real, practically applicable understanding.

In other words, Davidson begins by biting the medial inscrutability bullet. If medial neglect makes it impossible to theoretically explicate medial terms, then perhaps we can find a way to leverage what (causally inexplicable) understanding they do seem to provide into something more regimented, into an apparatus, you might say, that poses all the mysteries as effectively as possible (and in this sense, his project is a direct descendent of Quine’s).

This is the signature virtue of Tarski’s ‘Convention T.’ “[T]he striking thing about T-sentences,” Davidson writes, “is that whatever machinery must operate to produce them, and whatever ontological wheels must turn, in the end a T-sentence states the truth conditions of a sentence using resources no richer than, because the same as, those of the sentence itself” (“Radical Interpretation,” 132). By modifying Tarski’s formulation so that it takes truth instead of translation as basic, he can generate a theory based on an intentional, unexplained explainer—truth—that produces empirically testable results. Given that interpretation is the practical goal, the ontological status of the theory itself is moot: “All this apparatus is properly viewed as theoretical construction, beyond the reach of direct verification,” he writes. “It has done its work provided only it entails testable results in the form of T-sentences, and these make no mention of the machinery” (133).
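
For readers unfamiliar with Tarski’s device, the schematic form at issue runs roughly as follows, instantiated here with the example this post returns to below:

(T)  s is true (in L) if and only if p

‘Es regnet’ is true (in German) if and only if it is raining.

Nothing in the schema explains truth; it simply regiments the pairing of object-language sentences with the conditions under which they hold.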

The apparatus is warranted only to the extent that it enables further cognition. Indeed, given medial neglect, no further metacognitive explication of the apparatus is even possible. It may prove indispensable, but only empirically so, the way a hammer is to framing, and not as, say, the breath of God is to life, or more mysterious still, in some post facto ‘virtual yet efficacious’ sense. In fact, both of these latter characterizations betray the profundity of medial neglect, how readily we intuit the absence of various dimensions of information, say those of space and time, as a positive, as some kind of inexplicable something that, as Turner has been arguing for decades, begs far more questions than it pretends to solve.

The brain’s complexity is such, once again, that it cannot maintain with itself anything remotely approaching the high-dimensional, all-purpose covariational regime it maintains with its immediate environment. Only a variety of low-dimensional, special purpose cognitive tools are possible—an assemblage of ‘hacks.’ Thus the low-dimensional parade of inexplicables that constitute the ‘first-person.’ This is why complicating your intentional regimentations beyond what is practically needed simply makes no sense. Their status as specialized hacks means we have every reason to assume their misapplication in any given theoretical context. This isn’t to say that exaptation to other problems isn’t possible, only that efficacious problem-solving is our only guide to applicability. The normative proof is in the empirical pudding. Short of practical applications, high-dimensional solutions, the theoretician is simply stacking unexplained explainers into baroque piles. There’s a reason why second-order normative architectures rise and fall as fads. Their first-order moorings are the same, but as the Only-game-in-town Effect erodes beneath waves of alternative interpretation, they eventually break apart, often to be salvaged into some new account that feels so compelling for appearing, to some handful of souls at least, to be the only game in town at a later date.

So for Davidson, characterizing Radical Interpretation in terms of truth amounts to characterizing Radical Interpretation in terms of a genuine unexplained explainer, an activity that we can pragmatically decompose and rearticulate, and nothing more. The astonishing degree to which the behaviour itself underdetermines the interpretations made simply speaks to the radically heuristic nature of the cognitive activities underwriting interpretation. It demonstrates, in other words, the incredibly domain specific nature of the cognitive tools used. A fortiori, it calls into question the assumption that whatever information metacognition can glean is remotely sufficient for theoretically cognizing the structure and dynamics of those tools.

From the standpoint of reflection, intentional cognition or ‘mindreading’ almost entirely amounts to simply ‘getting it’ (or as Turner says, ‘following’). Given the paucity of information over and above the sensory, our behaviour cognizing activity strikes us as non-dimensional in the course of that cognizing—medial neglect renders our ongoing cognitive activity invisible. The odd invisibility of our own communicative performances—the way, for instance, the telling (or listening) ‘disappears’ into the told—simply indicates the axis of medial neglect, the fact that we’re talking about activities the brain cannot identify or situate in the high-dimensional idiom of environmental cognition. At best, evolution has provided metacognitive access to various ‘flavours of activity,’ if you will, vague ways of ‘getting our getting’ or ‘following our following’ the behaviour of others, and not much more—as the history of philosophy should attest!

‘Linguistic understanding,’ on this account, amounts to standing in certain actual and potential systematic, causal relations with another speaker—of being a machine attuned to natural and social environments in some specific way. The great theoretical virtue of Blind Brain Theory is the way it allows us to reframe apparently essential semantic activities like interpretation in mechanical terms. When an anthropologist learns the language of another speaker nothing magical is imprinted or imbibed. The anthropologist ‘understands’ that the speaker is systematically interrelated to his environment the same as he, and so begins the painstaking process of mapping the other’s relations onto his own via observationally derived information regarding the speaker’s utterances in various circumstances. The behaviour-enabling covariational regime of one individual comes to systematically covary with that of another individual and thus form a circuit between them and the world. The ‘meaning’ now ‘shared’ consists in nothing more than this entirely mechanical ‘triangulation.’ Each stands in the relation of component to the other, forming a singular superordinate system possessing efficacies that did not previously exist. The possible advantages of ‘teamwork’ increase exponentially—which is arguably the primary reason our species evolved language at all.
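
To make this mechanical picture a little more vivid, here is a minimal sketch of ‘triangulation’ as cross-situational covariance. It is an illustration only: the data, the tallying scheme, and the variable names are invented for the purpose, and nothing about actual neural implementation is implied.

```python
# Toy sketch: interpretation as cross-situational covariance. The interpreter
# tallies which registered circumstances co-occur with which utterances and
# keeps the best-supported mapping. Invented data; illustration only.

from collections import defaultdict

# Observations: (utterance heard, circumstance the interpreter registers)
observations = [
    ("es regnet", "raining"),
    ("es regnet", "raining"),
    ("es schneit", "snowing"),
    ("es regnet", "raining"),
    ("es schneit", "snowing"),
    ("es regnet", "cloudy"),   # a noisy trial
]

# Tally co-occurrences of utterances with circumstances.
counts = defaultdict(lambda: defaultdict(int))
for utterance, circumstance in observations:
    counts[utterance][circumstance] += 1

# 'Interpretation' here is nothing over and above the best-supported mapping:
# for each utterance, the circumstance that most often accompanies it.
interpretation = {
    utterance: max(circumstances, key=circumstances.get)
    for utterance, circumstances in counts.items()
}

print(interpretation)
# {'es regnet': 'raining', 'es schneit': 'snowing'}
```

The point is not that interpreters literally keep counts, only that ‘shared meaning,’ so construed, need involve nothing over and above two systems coming to covary with one another and with their common environment.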

The perplexities pile on when we begin demanding semantic answers to our semantic questions, when we ask, What is meaning? expecting an answer that accords with our experiences of meaning. Given that we possess nothing short of our experience of meaning with which to compare any theory of meaning, the demand that such a theory accord with that experience seems, on the face of things, to be eminently reasonable. But it still behooves us to interrogate the adequacy of that ‘experience as metacognized,’ especially now, given all that we have learned the past two decades. On a converging number of accounts, human consciousness is a mechanism for selecting, preserving, and broadcasting information for more general neural consumption. When we theoretically reflect on cognitive activity, such as ‘getting’ or ‘following,’ our best research tells us we are relying on the memory traces of previous broadcasts. The situation poses a metacognitive nightmare, to say the least. Even if we could trust those memory traces to provide some kind of all-purpose schema (and we can’t), we have no access to the larger neurofunctional context of the broadcast, what produced the information and what consumed it for what—all we have are low-dimensional fragments that appear to be ethereal wholes. It’s as if we’re attempting to solve for a car using only its fuse-panel diagram—worse!

Like Quine before him, Davidson has no way of getting around intentionality, and so, also like Quine, he attempts to pass through it with as much epistemic piety as possible. But his ‘intentional instrumentalism’ will only take him so far. Short of any means of naturalizing meaning, he regularly finds himself struggling to see his way clear. The problem of first-person authority provides an illustrative case in point. The assumption that some foreign language speaker ‘holds true’ the utterances he makes the way you ‘hold true’ the utterances you make can only facilitate interpretation, assist in ‘following his meaning,’ if it is the case that you can follow your own meaning. A number of issues arise out of this, not least the suggestion that interpretation seems to require the very kind of metacognitive access that I have consistently been denying!

But following one’s own meaning is every bit as mysterious as following another’s. Ownership of utterances can be catastrophically misattributed in a number of brain pathologies. When it comes to self/other speech comprehension, we know the same machinery is involved, only yoked in different ways, and we know that machinery utterly eludes metacognition. To reiterate: the cryptic peculiarities of understanding meaning (and all other intentional phenomena) are largely the result of medial neglect, the point where human cognition, overmatched by its own complexity, divides to heuristically conquer. In a profound sense, metacognition finds itself in the same straits regarding the brain as social cognition does regarding other brains.

So what does the asymmetry of ‘first-person authority,’ the fact that meanings attributed to others can be wrong while meanings attributed to oneself cannot, amount to? Nothing more than the fact that the systematic integrity of you, as a blind system, is ‘dedicated’ in a way that the systematic integrity of our interpretative relations is not. ‘Teamwork machines’ are transitory couplings requiring real work to get off the ground, and then maintain against slippages. The ‘asymmetry’ Davidson wants to explain consists in nothing more than this. No work is required to ‘follow oneself,’ whereas work is required to follow others.

For all the astronomical biological complexity involved, it really is as simple as this. The philosophical hairball presently suffocating the issue of first-person authority is an artifact of the way that theoretical metacognition, blinkered by medial neglect, retrospectively schematizes the issue in terms of meaning. The ontologization of meaning transforms the question of first-person authority into an epistemic question, a question of how one could know. This, of course, divides into the question of implicit versus explicit knowing. Since all these concepts (knowing, implicit, explicit) are naturalistically occult, interpretation can be gamed indefinitely. Despite his epistemic piety, Davidson’s attempt to solve for first-person authority using intentional idioms was doomed from the outset.

It’s worth noting an interesting connection to Heidegger in all this, a way, perhaps, to see the shadow of Blind Brain Theory operating in a quite different philosophical system. Heidegger, who harboured his own doubts regarding philosophical reflection, would see the philosophical hairball described above as yet another consequence of the ‘metaphysics of presence,’ the elision of the ‘ontological difference’ between being and beings. For him, the problem isn’t that meaning is being ontologized so much as it is being ontologized in the wrong way. His conflation of meaning with being essentially dissolves the epistemic problem the same way as my elimination of meaning, albeit in a manner that renders everything intentionally occult.

So what is meaning? A matter of intersystematic calibration. When we ask someone to ‘explain what they mean’ we are asking them to tweak our linguistic machinery so as to facilitate function. The details are, without a doubt, astronomically complex, and almost certain to surprise and trouble us. But one of the great virtues of mechanistic explanation lies in the nonmysterious way it can generalize over functions, move from proteins to organelles to cells to organs to organisms to collectives to ecologies to biospheres and so on. The ‘physical stance’ scales up with far more economy than some (like Dennett) would have you believe. And since it comprises our most reliable explanatory idiom, we should expect it to eventually yield the kind of clarity evinced above. Is it simply a coincidence that the interpretative asymmetry that Davidson and so many other philosophers have intentionally characterized directly corresponds with the kind of work required to maintain mechanical systematicity between two distinct systems? Do we just happen to ‘get the meaning wrong’ whenever covariant slippages occur, or is the former simply the latter glimpsed darkly?

Which takes us, at long last, to the issue of ‘Charity,’ the indispensability of taking others as reliably holding their utterances true to the process of interpretation. As should be clear by now, there is no such thing. We no more take Charity to the interpretation of behaviour than your wireless takes Charity to your ISP. There is no ‘attitude of holding true,’ no ‘intentional stance.’ Certainly, sometimes we ‘try’—or are at least conscious of making an effort. Otherwise understanding simply happens. The question is simply how we can fill in the blanks in a manner that converges on actual theoretical cognition, as opposed to endless regress. Behaviour is tracked, social heuristics are cued, an interpretation is neurally selected for conscious broadcasting and we say, ‘Ah! ‘Es regnet,’ means ‘It is raining’!

The Eliminativist renovation of Radical Interpretation makes plain everything that theoretical reflection has hitherto neglected. In other words, what it makes plain is the ‘pre-established harmony’ needed to follow another, the monstrous amount of evolutionary and cultural stage-setting required simply to get to interpretative scratch. The enormity of this stage-setting is directly related to the heuristic specificity of the systems we’ve developed to manage them, the very specificity that renders second-order discourse on the nature of ‘intentional phenomena’ dubious in the extreme.

As the skeptics have been arguing since antiquity.

The Ontology of Ghosts

by rsbakker

In the courtyard a shadowy giant elm

Spreads ancient boughs, her ancient arms where dreams,

False dreams, the old tale goes, beneath each leaf

Cling and are numberless.

–Virgil, The Aeneid, Book VI

.

I’m always amazed, looking back, at how fucking clear things had seemed at this or that juncture of my philosophical life—how lucid. The two early conversions, stumbling into nihilism as a teenager, then climbing into Heidegger in my early twenties, seem the most ‘religious’ in retrospect. I think this is why I never failed to piss people off even back then. You have this self-promoting skin you wear when you communicate, this tactical gloss that compels you to impress. This is what non-intellectuals hear when you speak, tactics and self-promotion. This is why it’s so easy to tar intellectualism in the communal eye: insecurity and insincerity are of its essence. All value judgements are transitive in human psychology: Laugh up your sleeve at what I say, and you are laughing at me. I was an insecure, hypercritical know-it-all. You add the interpersonal trespasses of religion—intolerance, intensity, and aggressiveness—and I think it’s safe to assume I came across as an obnoxious prick.

But if I was evangelical, it was that I could feel those transformations. Each position possessed its own, distinct metacognitive attitude toward experience, a form of that I attributed to this, whatever it might be. With my adolescent nihilism, I remember obsessively pondering the way my thoughts bubbled up out of oblivion—and being stupefied. I was some kind of inexplicable kink in the real. I was so convinced I was an illusion that I would ache for being alone, grip furniture for fear of flying.

But with Heidegger, it was like stepping into a more resonant clime, into a world rebarred with meaning, with projects and cares and rules and hopes. A world of towardness, where what you are now is a manifold of happenings, a gazing into an illuminated screen, a sitting in a world bound to you via your projects, a grasping of these very words. The intentional things, the phenomena of lived life, these were the foundation, I believed, the sine qua non of empirical inquiry. Before we can ask the question of freedom and meaning we need to ask the question of what comes first.

What could be more real than lived life?

It took a long time for me to realize just how esoteric, just how parochial, my definition of ‘lived life’ was. No matter how high you scratch your charcoal cloud, the cave wall always has the final say. It’s the doctors that keep you alive; philosophers just help you fall to sleep. Everywhere I looked across Continental philosophy, I saw all these crazy-ass interpretations, variants spanning variants, revivals and exhaustions, all trying to get a handle on the intentional ontology of a ‘lived life’ that took years of specialized training to appreciate. This is how I began asking the question of the cognitive difference. And this is how I found myself back at the beginning, my inaugural, adolescent departure from the naive.

The difference being, I am no longer stupefied.

I have a new religion, one that straightens out all the kinks, and so dispels rather than saves the soul. I am no exception. I have been chosen by nobody for nothing. I am continuous with the x-dimensional totality that we call nature—continuous in every respect. I watch images from Hubble, the most distant galactic swirls, and I tell myself, I am this, and I feel grand and empty. I am the environment that chokes, the climate that reels. I am the body that the doctor attends…

And you are too.

Thus the most trivial prophecy, the prediction that you will waver, crumble, that the fluorescent light will wobble to the sound of loved ones weeping… breathing. That someone, maybe, will clutch your hand.

Such hubris, when you think about it, to assume that lived life lay at your intellectual fingertips—the thing most easily grasped! For someone who has spent their life reading philosophy this stands tall among the greater insults: the knowledge that we have been duped all along, that all those profundities, that resonant world I found such joy and rancour pondering, were little more than the artifact of machines taking their shadows for reflections, the cave wall for a looking glass.

I am the residue of survival—living life. I am an astronomically complicated system, a multifarious component of superordinate systems that cannot cognize itself as such for being such. I am a serial gloss, a transmission from nowhere into nowhere, a pattern plucked from subpersonal pandemonium and broadcast to the neural horde. I am a message that I cannot conceive. As. Are. You.

I can show you pictures of dead people to prove it. Lives lived out.

The first-person is a selective precis of this totality, one that poses as the totality. And this is the trick, the way to unravel the kink and see how it is that Heidegger could confuse his semantic vision with seeing. The oblivion behind my thoughts is the oblivion of neglect. Because oblivion has no time, I have no time, and so watch amazed as my shining hands turn to leather. I breathe deep and think, Now. Because oblivion constrains nothing, I follow rules of my own will, pursue goals of my own desire. I stretch forth my hand and remake what lies before me. Because oblivion distinguishes nothing, I am one. I raise my voice and declare, Me. Because oblivion reveals nothing, I stand opposite the world, always only aimed, never connected. I squint and I squint and I ask, How do I know?

I am bottomless because my foundation was never mine to see. I am a perspective, an agent, a person, just another dude-with-a-bad-attitude—I am all these things because of the way I am not any of these things. I am not what I am because of what I am—again, the same as you.

A ghost can be defined as a fragment cognized as a whole. In some cultures ghosts have no backs, no faces, no feet. In almost all cultures they have no substance, no consistency, temporal or otherwise. The dimensions of lived life have been stripped from them; they are shades, animate shadows. As Virgil says of Aeneas attempting to embrace his father, Anchises, in the Underworld:

 Then thrice around his neck his arms he threw;

And thrice the flitting shadow slipp’d away,

Like winds, or empty dreams that fly the day.

Ghosts are the incorporeal remainder, the something shorn of substance and consistency. This is the lived life of Heidegger, an empty dream that flew the day. Insofar as Dasein lacks meat, Dasein dwells with the dead, another shade in the underworld, another passing fancy. We are not ghosts. If lived life lies in the meat, then the truth of lived life lies in the meat. The truth of what we are runs orthogonal to the being that we all swear that we must be. Consciousness is an anosognosiac broker, and we are the serial sum of deals struck between parties utterly unknown. Who are the orthogonal parties? What are the deals? These are the questions that aim us at our most essential selves, at what we are in fact. These are the answers being pursued by industry.

And yet we insist on the reality of ghosts, so profound is the glamour spun by neglect. There are no orthogonal parties, we cry, and therefore no orthogonal deals. There is no orthogonal regime. Oblivion hides only oblivion. What bubbles up from oblivion begins with me and ends with me. Thus the enduring attempt to make sense of things sideways, to rummage through the ruin of heaven and erect parallel regimes, ones too impersonal to reek of superstition. We use ghosts of reference to bind our inklings to the world, ghosts of inference to bind our inklings to one another, ghosts of quality to give ethereal substance to experience. Ghosts and more ghosts, all to save the mad, inescapable intuition that our intuitions must be real somehow. We raise them as architecture, and demur whenever anyone poses the mundane question of building material.

‘Thought’… No word short of ‘God’ has shut down more thinking.

Content is a wraith. Freedom is a vapour. Experience is a dream. The analogy is no coincidence.

The ontology of meaning is the ontology of ghosts.

 

 

 

Incomplete Cognition: An Eliminativist Reading of Terrence Deacon’s Incomplete Nature

by rsbakker

Incomplete Nature: How Mind Emerged from Matter

Goal seeking, willing, rule-following, knowing, desiring—these are just some of the things we do that we cannot make sense of in causal terms. We cite intentional phenomena all the time, attributing to them the kind of causal efficacy we attribute to the more mundane elements of nature. The problem, as Terrence Deacon frames it, is that whenever we attempt to explain these explainers, we find nothing, only absence and perplexity.

“The inability to integrate these many species of absence-based causality into our scientific methodologies has not just seriously handicapped us, it has effectively left a vast fraction of the world orphaned from theories that are presumed to apply to everything. The very care that has been necessary to systematically exclude these sorts of explanations from undermining our causal analyses of physical, chemical, and biological phenomena has also stymied our efforts to penetrate beyond the descriptive surface of the phenomena of life and mind. Indeed, what might be described as the two most challenging scientific mysteries of the age—both are held hostage by this presumed incompatibility.” Incomplete Nature, 12

The question, of course, is whether this incompatibility is the product of our cognitive constitution or the product of some as yet undiscovered twist in nature. Deacon argues the latter. Incomplete Nature is a magisterial attempt to complete nature, to literally rewrite physics in a way that seems to make room for goal seeking, willing, rule-following, knowing, desiring, and so on—in other words, to provide a naturalistic way to make sense of absences that cause. He wants to show how all these things are real.

My own project argues the former, that the notion of ‘absences that cause’ is actually an artifact of neglect. ‘We’ are an astronomically complicated subsystem embedded in the astronomically complicated supersystem that we call ‘nature,’ in such a way that we cannot intuitively cognize ourselves as natural.

The Blind Brain Theory claims to provide the world’s first genuine naturalization of intentionality—a parsimonious, comprehensive way to explain centuries of confusion away. What Intentionalists like Deacon think they are describing are actually twists on a family of metacognitive illusions. Crudely put, since no cognitive capacity could pluck ‘accuracy’ of any kind from the supercomplicated muck of the brain, our metacognitive system confabulates. It’s not that some (yet to be empirically determined) systematicity isn’t there: it’s that the functions discharged via our conscious access to that systematicity are compressed, formatted, and truncated. Metacognition neglects these confounds, and we begin making theoretical inferences assuming the sufficiency of compressed, formatted, and truncated information. Among many things, BBT actually predicts a discursive field clustered about families of metacognitive intuitions, but otherwise chronically incapable of resolving among their claims. When an Intentionalist gives you an account of the ‘game of giving and asking for reasons,’ say, you need only ask them why anyone should subscribe to an ontologization (whether virtual, quasi-transcendental, transcendental, or otherwise) on the basis of almost certainly unreliable metacognitive hunches.
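
To make the ‘sufficiency by default’ point concrete, consider a deliberately silly sketch (entirely my own illustration, not anything BBT formally specifies): a consumer of compressed, truncated information has no way of flagging what the compression discarded, and so treats the fragment as the whole.

```python
# A deliberately silly illustration (mine, not BBT's formal machinery) of
# 'sufficiency by default': a consumer of compressed, truncated information
# has no access to what was discarded, so it treats the fragment as the whole.

full_state = list(range(100))   # stand-in for a high-dimensional neural process
summary = full_state[:3]        # compressed, formatted, truncated access

def metacognitive_report(accessible):
    # Nothing in the accessible information flags its own insufficiency.
    return f"everything relevant: {len(accessible)} dimensions"

print(metacognitive_report(summary))     # confidently reports 3 dimensions
print(metacognitive_report(full_state))  # what an unneglected report would say
```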

The key conceptual distinction in BBT is that between what I’ve been calling ‘lateral sensitivity’ and ‘medial neglect.’ Lateral sensitivity refers to the brain’s capacity to be ‘imprinted’ by other systems, to be ‘pushed’ in ways that allow it to push back. Since behavioural interventions, or ‘pushing-back,’ require some kind of systematic relation to the system or systems to be pushed, lateral sensitivity requires being pushed by the right things in the right way. Thus the Inverse Problem and the Bayesian nature of the human brain. The Inverse Problem pertains to the difficulty of inferring the structure/dynamics of some distal system (an avalanche or a wolf, say) via the structure/dynamics of some proximal system (ambient sound or light, say) that reliably co-varies with that distal system. The difficulty is typically described in terms of ambiguity: since any number of distal systems could cause the structure/dynamics of the proximal system, the brain needs some way of allowing the actual distal system to push through the proximal system, if it is to have any hope of pushing back. Unless it becomes a reliable component of its environment, it cannot reliably make components of its environments. This is an important image to keep in mind: that of the larger brain–environment system, the way the brain is adapted to be pushed, or transformed into a component of larger environmental mechanisms, so as to push back, to ‘componentialize’ environmental mechanisms. Quite simply, we have evolved to be tyrannized by our environment in a manner that enables us to tyrannize our environment.
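
A toy sketch might make the ambiguity concrete (the causes, signals, and numbers here are invented for illustration, nothing more):

```python
# A toy sketch of the Inverse Problem (all names and numbers invented):
# several distal causes are compatible with the same proximal signal,
# so the signal alone underdetermines its cause. Only prior constraints
# let the actual distal system 'push through' the proximal one.

# Likelihoods: P(proximal signal | distal cause)
likelihood = {
    "wolf":      {"rustle": 0.6, "silence": 0.4},
    "wind":      {"rustle": 0.5, "silence": 0.5},
    "avalanche": {"rustle": 0.9, "silence": 0.1},
}

# Prior constraints on which distal systems tend to be out there at all.
prior = {"wolf": 0.05, "wind": 0.90, "avalanche": 0.05}

def posterior(signal):
    """Bayes' rule: probability of each distal cause given the proximal signal."""
    unnormalized = {c: prior[c] * likelihood[c][signal] for c in prior}
    total = sum(unnormalized.values())
    return {c: p / total for c, p in unnormalized.items()}

print(posterior("rustle"))  # the same rustle is compatible with every cause
```

The point is simply that the proximal signal alone underdetermines the distal system; some further constraint has to do the disambiguating, and that constraint has to be built into the machinery of lateral sensitivity itself.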

Lateral sensitivity refers to this ‘tyranny enabling tyranny,’ the brain’s ability to systematically covary with its environment in behaviourally advantageous ways. A system that solves the Inverse Problem possesses a high degree of reliable covariational complexity. As it turns out, the mechanical complexity required to do this is nothing short of mind-boggling. And as we shall see, this fact possesses some rather enormous consequences. Up to this point, I’ve really only provided an alternate description of the sensorimotor loop; the theoretical dividends begin piling up once we consider lateral sensitivity in concert with medial neglect.

The machinery of lateral sensitivity is so complicated that it handily transcends its own ‘sensitivity threshold.’ This means the brain possesses a profound insensitivity to itself. This might sound daffy, given that the brain simply is a supercomplicated network of mutual sensitivities, but this is actually where the nub of cognition as a distinct biological process is laid bare. Unlike the dedicated sensitivity that underwrites mechanism generally, the sensitivity at issue here involves what might be called the systematic covariation for behaviour. Any process that systematically covaries for behaviour is a properly cognitive process. So the above could be amended to, ‘the brain possesses a profound cognitive insensitivity to itself.’ Medial neglect is this profound cognitive insensitivity.

The advantage of cognition is behaviour, the push-back. The efficacy of this behavioural push-back depends on the sensory push, which is to say, lateral sensitivity. Innumerable behavioural problems, it turns out, require that we be pushed by our pushing back: that our future behaviour (push-back) be informed (pushed) by our ongoing behaviour (pushing-back). Behavioural efficacy is a function of behavioural versatility, which is in turn a function of lateral sensitivity, which is to say, the capacity to systematically covary with the environment. Medial neglect, therefore, constitutes a critical limit on behavioural efficacy: those ‘problem ecologies’ requiring sensitivity to the neurobiological apparatus of cognition to be solved effectively lie outside the capacity of the system to tackle. We are, quite literally, the ‘elephant in the room,’ a supercomplicated mechanism sensitive to most everything relevant to problem-solving in its environment except itself.

Mechanical allo-sensitivity entails mechanical auto-insensitivity, or auto-neglect. A crucial consequence of this is that efficacious systematic covariation requires unidirectional interaction, or that sensing be ‘passive.’ The degree to which the mechanical activity of tracking actually impacts the system to be tracked is the degree to which that system cannot be reliably tracked. Anticipation via systematic covariation is impossible if the mechanics of the anticipatory system impinge on the mechanics of the system to be anticipated. The insensitivity of the anticipatory system to its own activity, or medial neglect, perforce means insensitivity to systems directly mechanically entangled in that activity. Only ‘passive entanglement’ will do. This explains why so-called ‘observer effects’ confound our ability to predict the behaviour of other systems.
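
A crude simulation (my own invention, with arbitrary numbers) illustrates the premium on passive entanglement: a tracker that perturbs the very system it tracks degrades its own ability to anticipate that system.

```python
# A crude, invented simulation of 'passive' versus 'entangled' tracking.
# The tracker predicts the next state of a simple decaying system; when
# the act of tracking also perturbs that system, prediction degrades.

import random

def mean_prediction_error(perturbation, steps=200, seed=0):
    random.seed(seed)
    x = 1.0          # state of the tracked system
    total_error = 0.0
    for _ in range(steps):
        estimate = 0.9 * x                        # prediction assuming passive dynamics
        x = 0.9 * x + random.gauss(0, 0.01)       # the system's own evolution
        x += perturbation * random.gauss(0, 0.2)  # push-back from the tracker itself
        total_error += abs(x - estimate)
    return total_error / steps

print("passive tracker:  ", mean_prediction_error(perturbation=0.0))
print("entangled tracker:", mean_prediction_error(perturbation=1.0))
```

The more the tracker's own activity impinges on the tracked system, the worse its anticipations become; hence the requirement that sensing be unidirectional, or ‘passive.’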

So the stage is set. The brain quite simply cannot cognize itself (or other brains) in the same high-dimensional way it cognizes its environments. (It would be hard to imagine any evolved metacognitive capacity that could achieve such a thing, in fact). It is simply too complex and too entangled. As a result, low-dimensional, special purpose heuristics—fast and frugal kluges—are its only recourse.

The big question I keep asking is, How could it be any other way? Given the problems of complexity and complicity, given the radical nature of the cognitive bottleneck—just how little information is available for conscious, serial processing—how could any evolved metacognitive capacity whatsoever come close to apprehending the functional truth of anything ‘inner’? If you are an Intentionalist, say, you need to explain how the phenomena you’re convinced you intuit are free of perspectival illusions, or conversely, how your metacognitive faculties have overcome the problems posed by complexity and complicity.

On BBT, the brain possesses at least two profoundly different covariational regimes, one integrated, problem-general, and high-dimensional, mediating our engagement in the natural world, the other fractious, problem-specific and low-dimensional, mediating our engagements with ourselves and others (who are also complex and complicit), and thereby our engagement in the natural world. The twist lies in medial neglect, the fact that the latter fractious, problem-specific, and low-dimensional covariational regime is utterly insensitive to its fractious, problem-specific, and low-dimensional nature. Human metacognition is almost entirely blind to the structure of human cognition. This is why we require cognitive science: reflection on our cognitive capacities tells us little or nothing about those capacities, reflection included. Since we have no way of intuiting the insufficiency of these intuitions, we assume they’re sufficient.

We are now in a position to clearly delineate Deacon’s ‘fraction,’ what makes it vast, and why it has been perennially orphaned. Historically, natural science has been concerned with the ‘lateral problem-ecologies,’ with explicating the structure and dynamics of relatively simple systems possessing functional independence. Any problem ecology requiring the mechanistic solution of brains lay outside its purview. Only recently has it developed the capacity to tackle ‘medial problem-ecologies,’ the structure and dynamics of astronomically complex systems possessing no real functional independence. For the first time humanity finds itself confronted with integrated, high-dimensional explications of what it is. The ruckus, of course, is all about how to square these explications with our medial traditions and intuitions. All the so-called ‘hard problems’ turn on our apparent inability to naturalistically find, let alone explain, the phenomena corresponding to our intuitive, metacognitive understanding of the medial.

Why do our integrated, high-dimensional explications of the medial congenitally ‘leave out’ the phenomena belonging to the medial-as-metacognized? Because metacognitive phenomena like goal seeking, willing, rule-following, knowing, desiring only ‘exist,’ insofar as they exist at all, in specialized problem-solving contexts. ‘Goal seeking’ is something we all do all the time. A friend has an untoward reaction to a comment of ours, so we ask ourselves, in good conscience, ‘What was I after?’ and the process of trying to determine our goal given whatever information we happen to have begins. Despite complexity and complicity, this problem is entirely soluble because we have evolved the heuristic machinery required: we can come to realize that our overture was actually meant to belittle. Likewise, the philosopher asks, ‘What is goal-seeking?’ and the process of trying to determine the nature of goal-seeking given whatever information he happens to have begins. But the problem proves insoluble, not surprisingly, given that the philosopher almost certainly lacks the requisite heuristic machinery. The capacity to solve for goal-seeking qua goal-seeking is just not something our ancestors evolved.

Deacon’s entire problematic turns on the equivocation of the first-order and second-order uses of intentional terms, on the presumption that the ‘goal-seeking’ we metacognize simply has to be the ‘goal-seeking’ referenced in first-order contexts—on the presumption, in other words, of metacognitive adequacy, which is to say something we now know to be false as a matter of empirical fact. For all its grand sweep, for all its lucid recapitulation and provocative conjecture, Incomplete Nature is itself shockingly incomplete. Nowhere does he consider the possibility that the only ‘goal-seeking phenomenon’ missing, the only absence to be explained, is this latter, philosophical goal-seeking.

At no point in the work does he reference, let alone account for, the role metacognition or introspection plays in our attempt to grapple with the incompatibility of natural and intentional phenomena. He simply declares “the obvious inversion of causal logic that distinguishes them” (139), without genuinely considering where that ‘inversion’ occurs. Because this just is the nub of the issue between the emergentist and the eliminativist: whether his ‘obvious inversion’ belongs to the systems observed or to the systems observing. As Deacon writes:

“There is no use denying there is a fundamental causal difference between these domains that must be bridged in any comprehensive theory of causality. The challenge of explaining why such a seeming reversal takes place, and exactly how it does so, must ultimately be faced. At some point in this hierarchy, the causal dynamics of teleological processes do indeed emerge from simpler blind mechanistic dynamics, but we are merely restating this bald fact unless we can identify exactly how this causal about-face is accomplished. We need to stop trying to eliminate homunculi, and to face up to the challenge of constructing teleological properties—information, function, aboutness, end-directedness, self, even conscious experience—from unambiguously non-teleological starting points.” 140

But why do we need to stop ‘trying to eliminate’ homunculi? We know that philosophical reflection on the nature of cognition is woefully unreliable. We know that intentional concepts and phenomena are the stock-in-trade of philosophical reflection. We know that scientific inquiry generally delegitimizes our prescientific discourses. So why shouldn’t we assume that the matter of intentionality amounts to more of the same?

Deacon never says. He acknowledges “there cannot be a literal ends-causing-the-means process involved” (109) when it comes to intentional phenomena. As he writes:

“Of course, time is neither stopped nor running backwards in any of these processes. Thermodynamic processes are proceeding uninterrupted. Future possible states are not directly causing present events to occur.” 109-110

He acknowledges, in other words, that this ‘inversion of causality’ is apparent only. He acknowledges, in other words, that metacognition is getting things wrong, just not entirely. So what recommends his project of ontologically meeting this appearance halfway over the project of doing away with it altogether? The project of rewriting nature, after all, is far more extravagant than the project of theorizing metacognitive shortcomings.

Deacon’s failure to account for observation-dependent interpretations of intentionality is more than suspiciously convenient; it actually renders the whole of Incomplete Nature an exercise in begging the question. He spends a tremendous amount of time and no little ingenuity in describing the way ‘teleodynamic systems,’ as the result of increasingly recursive complexity, emerge from ‘morphodynamic systems,’ which in turn emerge from standard thermodynamic systems. Where thermodynamic systems exhibit straightforward entropy, morphodynamic systems, such as crystal formation, exhibit the tendency to become more ordered. Building on morphodynamics, teleodynamic systems then exhibit the kinds of properties we take to be intentional. A point of pride for Deacon is the way his elaborations turn, as he mentions in the extended passage quoted above, on ‘unambiguously non-teleological starting points.’

He sums up this patient process of layering causal complexities with the postulation of what he calls an autogen, “a form of self-generating, self-repairing, self-replicating system that is constituted by reciprocal morphodynamic processes” (547-8), arguably his most ingenious innovation. He then moves to conclude:

“So even these simple molecular systems have crossed a threshold in which we can say that a very basic form of value has emerged, because we can describe each of the component autogenic processes as there for the sake of autogen integrity, or for the maintenance of that particular form of autogenicity. Likewise, we can describe different features of the surrounding molecular environment as ‘beneficial’ or ‘harmful’ in the same sense that we would apply these assessments to microorganisms. More important, these are not merely glosses provided by a human observer, but intrinsic and functionally relevant features of the consequence-organized nature of the autogen itself.” 322

And the reader is once again left with the question of why. We know that the brain possesses suites of heuristic problem solvers geared to economize by exploiting various features of the environment. The obvious question becomes: How is it that any of the processes he describes do anything more than schematize the kinds of features that trigger the brain to swap out its causal cognitive systems for its intentional cognitive systems?

Time and again, one finds Deacon explicitly acknowledging the importance of the observer, and time and again one finds him dismissing that importance without a lick of argumentation—the argumentation his entire account hangs on. One can even grant him his morphodynamic and teleodynamic ‘phase transitions’ and still plausibly insist that all he’s managed to provide is a detailed description of the kinds of complex mechanical processes prone to trigger our intentional heuristics. After all, if it is the case that the future does not cause the past, then ‘end directedness,’ the ‘obvious inversion of causality,’ actually isn’t an inversion at all. The fact is Deacon’s own account of constraints and the role they play in morphodynamics and teleodynamics is entirely amenable to mechanical understanding. He continually relies on disposition talk. Even his metaphors, like the ‘negentropic ratchet’ (317), tend to be mechanical. The autogen is quite clearly a machine, one that automatically expresses the constraints that make it possible. The fact that these component constraints result in a system that behaves in ways far different than mundane thermodynamic systems speaks to nothing more extraordinary than mechanical emergence, the fact that whole mechanisms do things that their components could not (See Craver, 2007, pp. 211-17 for a consideration of the distinction between mechanical and spooky emergence). Likewise, for all the ink he spills regarding the holistic nature of teleodynamic systems, he does an excellent job explaining them in terms of their contributing components!

In the end, all Deacon really has is an analogy between the ‘intentional absence,’ our empirical inability to find intentional phenomena, and the kind of absence he attributes to constraints. Since systematicity of any kind requires constraints, defining constraints, as Deacon does, in terms of what cannot happen—in terms of what is absent—provides him the rhetorical license he needs to speak of ‘absential causes’ at pretty much any juncture. Since he has already defined intentional phenomena as ‘absential causes,’ it becomes a very easy thing indeed to lead the reader over the ‘epistemic cut’ and claim that he has discovered the basis of the intentional as it exists in nature, as opposed to an interpretation of those systems inclined to trigger intentional cognition in the human brain. Constraints can be understood in absential terms. Intentional phenomena can only be understood in absential terms. Since the reader, thanks to medial neglect, has no inkling whatsoever of the fractionate and specialized nature of intentional cognition, all Deacon needs to do is comb their existing intuitions in his direction. Constraints are objective; therefore intentionality is objective.

Not surprisingly, Deacon falls far short of ‘naturalizing intentionality.’ Ultimately, he provides something very similar to what Evan Thompson delivers in his equally impressive (and unconvincing) Mind in Life: a more complicated, attenuated picture of nature that seems marginally less antithetical to intentionality. Where Thompson’s “aim is not to close the explanatory gap in a reductive sense, but rather to enlarge and enrich the philosophical and scientific resources we have for addressing the gap” (x), Deacon’s is to “demonstrate how a form of causality dependent on specifically absent features and unrealized potentials can be compatible with our best science” (16), the idea being that such an absential understanding will pave the way for some kind of thoroughgoing naturalization of intentionality—as metacognized—in the future.

But such a naturalization can only happen if our theoretical metacognitive intuitions regarding intentionality get intentionality right in general, as opposed to right enough for this or that. And our metacognitive intuitions regarding intentionality can only get intentionality right in general if our brain has somehow evolved the capacity to overcome medial neglect. And the possibility of this, given the problems of complexity and complicity, seems very hard to fathom.

The fact is BBT provides a very plausible and parsimonious observer-dependent explanation for why metacognition attributes so many peculiar properties to the medial processes. The human brain, as the frame of cognition, simply cannot cognize itself the way it does other systems. It is, as a matter of empirical necessity, not simply blind to its own mechanics, but blind to this blindness. It suffers medial neglect. Unable to access and cognize its origins, and unable to cognize this inability, it assumes that it accesses all there is to access—it confuses itself for something bottomless, an impossible exception to physics.

So when Deacon writes:

“These phenomena not only appear to arise without antecedents, they appear to be defined with respect to something nonexistent. It seems that we must explain the uncaused appearance of phenomena whose causal powers derive from something nonexistent! It should be no surprise that this most familiar and commonplace feature of our existence poses a conundrum for science.” 39

we need to take the truly holistic view that Deacon himself consistently fails to take. We need to see this very real problem in terms of one set of natural systems—namely, us—engaging the set of all natural systems, as a kind of linkage between being pushed and pushing back.

On BBT, Deacon’s ‘obvious inversion of causality’ is merely an illusory artifact of constraints pertaining to the human brain’s ability to cognize itself the way it cognizes its environments. They appear causally inverted simply because no information pertaining to their causal provenance is available to deliberative metacognition. Rules constrain us in some mysterious, orthogonal way. Goals somehow constrain us from the future. Will somehow constrains itself! Desires, like knowledge, are somehow constrained by their objects, even when they are nowhere to be seen. These apparently causally inverted phenomena vanish whenever we search for their origins because they quite simply do not exist in the high-dimensional way things in our environments exist. They baffle scientific reason because the actual neuromechanical heuristics employed are adapted to solve problems in the absence of detailed causal information, and because conscious metacognition, blind to the rank insufficiency of the information available for deliberative problem-solving, assumes that it possesses all the information it needs. Philosophical reflection is a cultural achievement, after all, an exaption of existing, more specialized cognitive resources; it seems quite implausible to assume the brain would possess the capacity to vet the relative sufficiency of information utilized in ways possessing no evolutionary provenance.

We are causally embedded in our environments in such a way that we cannot intuit ourselves as so embedded, and so intuit ourselves otherwise, as goal seeking, willing, rule-following, knowing, desiring, and so on—in ways that systematically neglect the actual, causal relations involved. Is it really just a coincidence that all these phenomena just happen to belong to the ‘medial,’ which is to say, the machinery responsible for cognition? Is it really just a coincidence that all these phenomena exhibit a profound incompatibility with causal explanation? Is it really just a coincidence that all our second-order interpretations of these terms are chronically underdetermined (a common indicator of insufficient information), even though they function quite well when used in everyday, first-order, interpersonal contexts?

Not at all. As I’ve attempted to show in a variety of ways over the past couple of years, a great number of traditional conundrums can be resolved via BBT. All the old problems fall away once we realize that the medial—or ‘first person’—is simply what the third person looks like absent the capacity to laterally solve the third person. The time has come to leave them behind and begin the hard work of discovering what new conundrums await.

The Closing and Opening of Covers

by rsbakker

My agent has the book, and I’m having several copies of the manuscript printed up and bound to distribute to some keen-eyed friends today. That’s as much as I can say detail-wise, at the moment. As soon as my publishers and my agent and I have the details hashed out I will post them here post-haste.

I also finally managed to trap True Detective on my PVR. People have sent me so many links (such as this and this) to mainstream articles on the character of Cohle and his creator Nic Pizzolatto’s inspirations that I thought it worth a look-see. I haven’t watched an episode yet, but the notion of Matthew McConaughey (a devout believer) playing a nihilistic prophet appeals to my sense of cosmic perversity. I suppose he would make a good Disciple Manning. Who knows, maybe a thunderbolt will strike someone at HBO–they’ll take a sip of latte and wonder, “Egad! What if we take True Detective and Game of Thrones and mash them together!” Either way, given the way society continues to inexorably creep toward Golgotterath, the popularization of this fact has got to be a good thing… if it’s true that informed gamblers enjoy better odds than sleepwalkers, that is.