The Blind Mechanic II: Reza Negarestani and the Labour of Ghosts

For some time now I’ve been arguing that the implications of the Singularity already embroil us—that the Singularity can be seen, in fact, as the material apotheosis of the Semantic Apocalypse, insofar as it is the point where the Scientific Image of the human at last forecloses on the Manifest Image. In “The Labor of the Inhuman” (which can be found here and here, with Craig Hickman’s critiques, here and here), Reza Negarestani adopts Brandom’s claim that sapience, the capacity to play the ‘game of giving and asking for reasons,’ distinguishes humans as human. He then goes on to argue that this allows us, and ultimately commits us, to seeing the human as a kind of temporally extended process of rational revision, one that ultimately results in the erasure of the human—or the ‘inhuman.’ Ultimately, what it means to be human is to be embroiled in a process of becoming inhuman. He states his argument thus:

The contention of this essay is that universality and collectivism cannot be thought, let alone attained, through consensus or dissensus between cultural tropes, but only by intercepting and rooting out what gives rise to the economy of false choices and by activating and fully elaborating what real human significance consists of. For it is, as will be argued, the truth of human significance—not in the sense of an original meaning or a birthright, but in the sense of a labor that consists of the extended elaboration of what it means to be human through a series of upgradable special performances—that is rigorously inhuman.

In other words, so long as we fail to comprehend the inhumanity of the human, this rational-revisionary process, we fail to understand the human, and so have little hope of solving problems pertaining to the human. Understanding the ‘truth of human significance,’ therefore, requires understanding what the future will make of the human. It requires that Negarestani prognosticate. It requires, in other words, that he pick out the specific set of possibilities constituting the inhuman. The only principled way to do that is to comprehend some set of systematic constraints operative in the present. But his credo, unlike that of the ‘Hard SF’ writer, is to ignore the actual technics of the natural, and to focus on the speculative technics of the normative. His strategy, in other words, is to predict the future of the human using only human resources—to see the fate of the human, the ‘inhuman,’ as something internal to the human. And this, as I hope to show, is simply not plausible.

He understands the danger of conceiving his constraining framework as something fixed: “humanism cannot be regarded as a claim about human that can only be professed once and subsequently turned into a foundation or axiom and considered concluded.” He appreciates the implausibility of the static, Kantian transcendental approach. As a result, he proposes to take the Sellarsian/Brandomian approach, focussing on the unique relationship between the human and sapience, the “distinction between sentience as a strongly biological and natural category and sapience as a rational (not to be confused with logical) subject.” He continues:

The latter is a normative designation which is specified by entitlements and the responsibilities they bring about. It is important to note that the distinction between sapience and sentience is marked by a functional demarcation rather than a structural one. Therefore, it is still fully historical and open to naturalization, while at the same time being distinguished by its specific functional organization, its upgradable set of abilities and responsibilities, its cognitive and practical demands.

He’s careful here to hedge, lest the dichotomy between the normative and the natural come across as too schematic:

The relation between sentience and sapience can be understood as a continuum that is not differentiable everywhere. While such a complex continuity might allow the naturalization of normative obligations at the level of sapience—their explanation in terms of naturalistic causes—it does not permit the extension of certain conceptual and descriptive resources specific to sapience (such as the particular level of mindedness, responsibilities, and, accordingly, normative entitlements) to sentience and beyond.

His dilemma here is the dilemma of the Intentionalist more generally. Science, on the one hand, is nothing if not powerful. The philosopher, on the other hand, has a notorious, historical tendency to confuse the lack of imagination for necessity. Foot-stomping will not do. He needs some way to bite this bullet without biting it, basically, some way of acknowledging the possible permeability of normativity to naturalization, while insisting, nonetheless, on the efficacy of some inviolable normative domain. To accomplish this, he adverts to the standard appeal to the obvious fact that norm-talk actually solves norm problems, that normativity, in other words, obviously possesses a problem-ecology. But of course the fact that norm-talk is indispensable to solving problems within a specific problem-ecology simply raises the issue of the limits of this ecology—and more specifically, whether the problem of humanity’s future actually belongs to that problem-ecology. What he needs to establish is the adequacy of theoretical, second-order norm-talk to the question of what will become of the human.

He offers us a good, old fashioned transcendental argument instead:

The rational demarcation lies in the difference between being capable of acknowledging a law and being solely bound by a law, between understanding and mere reliable responsiveness to stimuli. It lies in the difference between stabilized communication through concepts (as made possible by the communal space of language and symbolic forms) and chaotically unstable or transient types of response or communication (such as complex reactions triggered purely by biological states and organic requirements or group calls and alerts among social animals). Without such stabilization of communication through concepts and modes of inference involved in conception, the cultural evolution as well as the conceptual accumulation and refinement required for the evolution of knowledge as a shared enterprise would be impossible.

Sound familiar? The necessity of the normative lies in the irreflexive contingency of the natural. Even though natural relations constitute biological systems of astounding complexity, there’s simply no way, we are told, they can constitute the kind of communicative stability that human knowledge and cultural evolution requires. The machinery is just too prone to rattle! Something over and above the natural—something supernatural—is apparently required. “Ultimately,” Negarestani continues, “the necessary content as well as the real possibility of human rests on the ability of sapience—as functionally distinct from sentience—to practice inference and approach non-canonical truth by entering the deontic game of giving and asking for reasons.”

It’s worth pausing to take stock of the problems we’ve accumulated up to this point. 1) Even though the human is a thoroughgoing product of its past natural environments, the resources required to understand the future of the human, we are told, lie primarily, if not entirely, within the human. 2) Even though norm-talk possesses a very specific problem-ecology, we are supposed to take it on faith that the nature of norm-talk is something that only more norm-talk can solve, rather than otherwise (as centuries of philosophical intractability would suggest). And now, 3) Even though the natural, for all its high dimensional contingencies, is capable of producing the trillions of mechanical relations that constitute you, it is not capable of ‘evolving human knowledge.’ Apparently we need a special kind of supernatural game to do this, the ‘game of giving and asking for reasons,’ a low-dimensional, communicative system of efficacious (and yet acausal!) normative posits based on… we are never told—some reliable fund of information, one would hope.

But since no normativist that I know of has bothered to account for the evidential bases of their position, we’re simply left with faith in metacognitive intuition and this rather impressive sounding, second-order theoretical vocabulary of unexplained explainers—‘commitments,’ ‘inferences,’ ‘proprieties,’ ‘deontic statuses,’ ‘entitlements,’ and the like—a system of supernatural efficacies beyond the pale of any definitive arbitration. Negarestani sums up this normative apparatus with the term ‘reason,’ and it is reason, understood in this inferentialist sense, that provides the basis for charting the future of the human. “Reason’s main objective is to maintain and enhance itself,” he writes. “And it is the self-actualization of reason that coincides with the truth of the inhuman.”

Commitment to humanity requires scrutinizing the meaning of humanity, which in turn requires making the implicature of the human explicit—not just locally, but in its entirety. The problem, in a nutshell, is that the meaning of the human is not analytic, something that can be explicated via analysis alone. It arises, rather, out of the game of giving and asking for reasons, the actual, historical processes that comprise discursivity. And this means that unpacking the content of the human is a matter of continual revision, a process of interpretative differentiation that trends toward the radical, the overthrow of “our assumptions and expectations about what ‘we’ is and what it entails.”

The crowbar of this process of interpretative differentiation is what Negarestani calls an ‘intervening attitude,’ that moment in the game where the interpretation of claims regarding the human sparks further claims regarding the human, the interpretation of which sparks yet further claims, and so on. The intervening attitude thus “counts as an enabling vector, making possible certain abilities otherwise hidden or deemed impossible.” This is why he can claim that “[r]evising and constructing the human is the very definition of committing to humanity.” And since this process is embedded in the game of giving and asking for reasons, he concludes that “committing to humanity is tantamount to complying with the revisionary vector of reason and constructing humanity according to an autonomous account of reason.”

And so he writes:

Humanity is not simply a given fact that is behind us. It is a commitment in which the reassessing and constructive strains inherent to making a commitment and complying with reason intertwine. In a nutshell, to be human is a struggle. The aim of this struggle is to respond to the demands of constructing and revising human through the space of reasons.

In other words, we don’t simply ‘discover the human’ via reason, we construct it as well. And thus the emancipatory upshot of Negarestani’s argument: if reasoning about the human is tantamount to constructing the human, then we have a say regarding the future of humanity. The question of the human becomes an explicitly political project, and a primary desideratum of Negarestani’s stands revealed. He thinks reason as he defines it, the at once autonomous (supernatural) and historically concrete (or ‘solid,’ as Brandom would say) revisionary activity of theoretical argumentation, provides a means of assessing the adequacy of various political projects (traditional humanism and what he calls ‘kitsch Marxism’) according to their understanding of the human. Since my present concern is to assess the viability of the account of reason Negarestani uses to ground the viability of this yardstick, I will forego considering his specific assessments in any detail.

The human is the malleable product of machinations arising out of the functional autonomy of reason. Negarestani refers to this as a ‘minimalist definition of humanity,’ but as the complexity of the Brandomian normative apparatus he deploys makes clear, it is anything but. The picture of reason he espouses is as baroque and reticulated as anything Kant ever proposed. It’s a picture, after all, that requires an entire article to simply get off the ground! Nevertheless, this dynamic normative apparatus provides Negarestani with a generalized means of critiquing the intransigence of traditional political commitments. The ‘self-actualization’ of reason lies in its ability “to bootstrap complex abilities out of its primitive abilities.” Even though continuity with previous commitments is maintained at every step in the process, over time the consequences are radical: “Reason is therefore simultaneously a medium of stability that reinforces procedurality and a general catastrophe, a medium of radical change that administers the discontinuous identity of reason to an anticipated image of human.”

This results in what might be called a fractured ‘general implicature,’ a space of reasons rife with incompatibilities stemming from the refusal or failure to assiduously monitor and update commitments in light of the constructive revisions falling out of the self-actualization of reason. Reason itself, Negarestani is arguing, is in the business of manufacturing ideological obsolescence, always in the process of rendering its prior commitments incompatible with its present ones. Given his normative metaphysics, reason has become the revisionary, incremental “director of its own laws,” one that has the effect of rendering its prior laws, “the herald of those which are whispered to it by an implanted sense or who knows what tutelary nature” (Kant, Fundamental Principles of the Metaphysics of Morals). Where Hegel can be seen as temporalizing and objectifying Kant’s atemporal, subjective, normative apparatus, Brandom (like others) can be seen as socializing and temporalizing it. What Negarestani is doing is showing how this revised apparatus operates against the horizon of the future with reference to the question of the human. And not surprisingly, Kant’s moral themes remain the same, only unpacked along the added dimensions of the temporal and the social. And so we find Negarestani concluding:

The sufficient content of freedom can only be found in reason. One must recognize the difference between a rational norm and a natural law—between the emancipation intrinsic in the explicit acknowledgement of the binding status of complying with reason, and the slavery associated with the deprivation of such a capacity to acknowledge, which is the condition of natural impulsion. In a strict sense, freedom is not liberation from slavery. It is the continuous unlearning of slavery.

The catastrophe, apparently, has yet to happen, because here we find ourselves treading familiar ground indeed, Enlightenment ground, as Negarestani himself acknowledges, one where freedom remains bound to reason—“to the autonomy of its normative, inferential, and revisionary function in the face of the chain of causes that condition it”—only as process rather than product.

And the ‘inhuman,’ so-called, begins to look rather like a shill for something all too human, something continuous, which is to say, conservative, through and through.

And how could it be otherwise, given the opening, programmatic passage of the piece?

Inhumanism is the extended practical elaboration of humanism; it is born out of a diligent commitment to the project of enlightened humanism. As a universal wave that erases the self-portrait of man drawn in sand, inhumanism is a vector of revision. It relentlessly revises what it means to be human by removing its supposed evident characteristics and preserving certain invariances. At the same time, inhumanism registers itself as a demand for construction, to define what it means to be human by treating human as a constructible hypothesis, a space of navigation and intervention.

The key phrase here has to be ‘preserving certain invariances.’ One might suppose that natural reality would figure large as one of these ‘invariances’; to quote Philip K. Dick, “Reality is that which, when you stop believing in it, doesn’t go away.” But Negarestani scarcely mentions nature as cognized by science save to bar the dialectical door against it. The thing to remember about Brandom’s normative metaphysics is that ‘taking-as,’ or believing, is its foundation (or ontological cover). Unlike reality, his normative apparatus does go away when the scorekeepers stop believing. The ‘reality’ of the apparatus is thus purely a functional artifact, the product of ‘practices,’ something utterly embroiled in, yet entirely autonomous from, the natural. This is what allows the normative to constitute a ‘subregion of the factual’ without being anything natural.

Conservatism is built into Negarestani’s account at its most fundamental level, in the very logic—the Brandomian account of the game of giving and asking for reasons—that he uses to prognosticate the rational possibilities of our collective future. But the thing I find the most fascinating about his account is the way it can be read as an exercise in grabbing Brandom’s normative apparatus and smashing it against the wall of the future—a kind of ‘reductio by Singularity.’ Reasoning is parochial through and through. The intuitions of universalism and autonomy that have convinced so many otherwise are the product of metacognitive illusions, artifacts of confusing the inability to intuit more dimensions of information with sufficient entities and relations lacking those dimensions, of taking shadows as things that cast shadows.

So consider the ‘rattling machinery’ image of reason I posited earlier in “The Blind Mechanic,” the idea that ‘reason’ should be seen as a means of attenuating various kinds of embodied intersystematicities for behaviour—as a way to service the ‘airy parts’ of superordinate, social mechanisms. No norms. No baffling acausal functions. Just shit happening in ways accidental as well as neurally and naturally selected. What the Intentionalist would claim is that mere rattling machinery, no matter how detailed or complete its eventual scientific description comes to be, will necessarily remain silent regarding the superordinate (and therefore autonomous) intentional functions that it subserves, because these supernatural functions are what leverage our rationality somehow—from ‘above the grave.’

As we’ve already seen, it’s hard to make sense of how or why this should be, given that biomachinery is responsible for complexities we’re still in the process of fathoming. The behaviour that constitutes the game of giving and asking for reasons does not outrun some intrinsic limit on biomechanistic capacity by any means. The only real problem naturalism faces is one of explaining the apparent intentional properties belonging to the game. Behaviour is one thing, the Intentionalist says, while competence is something different altogether—behaviour plus normativity, as they would have it. Short of some way of naturalizing this ‘normative plus,’ we have no choice but to acknowledge the existence of intrinsically normative facts.

On the Blind Brain account, ‘normative facts’ are simply natural facts seen darkly. ‘Ought,’ as philosophically conceived, is an artifact of metacognitive neglect, the fact that our cognitive systems cannot cognize themselves in the same way they cognize the rest of their environment. Given the vast amounts of information neglected in intentional cognition (not to mention millennia of philosophical discord), it seems safe to assume that norm-talk is not among the things that norm-talk can solve. Indeed, since the heuristic systems involved are neural, we have every reason to believe that neuroscience, or scientifically regimented fact-talk, will provide the solution. Where our second-order intentional intuitions beg to differ is simply where they are wrong. Normative talk is incompatible with causal talk simply because it belongs to a cognitive regime adapted to solve in the absence of causal information.

The mistake, then, is to see competence as some kind of complication or elaboration of performance—as something in addition to behaviour. Competence is ‘end-directed,’ ‘rule-constrained,’ because metacognition has no access to the actual causal constraints involved, not because a special brand of performance ‘plus’ occult, intentional properties actually exists. You seem to float in this bottomless realm of rules and goals and justifications not because such a world exists, but because medial neglect folds away the dimensions of your actual mechanical basis with nary a seam. The apparent normative property of competence is not a property in addition to other natural properties; it is an artifact of our skewed metacognitive perspective on the application of quick and dirty heuristic systems our brains use to solve certain complicated systems.

But say you still aren’t convinced. Say that you agree the functions underwriting the game of giving and asking for reasons are mechanical and not at all accessible to metacognition, but at a different ‘level of description,’ one incapable of accounting for the very real work discharged by the normative functions that emerge from them. Now if it were the case that Brandom’s account of the game of giving and asking for reasons actually discharged ‘executive’ functions of some kind, then it would be the case that our collective future would turn on these efficacies in some way. Indeed, this is the whole reason Negarestani turned to Brandom in the first place: he saw a way to decant the future of the human given the systematic efficacies of the game of giving and asking for reasons.

Now consider what the rattling machine account of reason and language suggests about the future. On this account, the only invariants that structurally bind the future to the past, that enable any kind of speculative consideration of the future at all, are natural. The point of language, recall, is mechanical, to construct and maintain the environmental intersystematicity (self/other/world) required for coordinated behaviour (be it exploitative or cooperative). Our linguistic sensitivity, you could say, evolved in much the same manner as our visual sensitivity, as a channel for allowing certain select environmental features to systematically tune our behaviours in reproductively advantageous ways. ‘Reasoning,’ on this view, can be seen as a form of ‘noise reduction,’ as a device adapted to minimize, as far as mere sound allows, communicative ‘gear grinding,’ and so facilitate behavioural coordination. Reason, you could say, is what keeps us collectively in tune.

Now given some kind of ability to conserve linguistically mediated intersystematicities, it becomes easy to see how this rattling machinery could become progressive. Reason, as noise reduction, becomes a kind of knapping hammer, a way to continually tinker and refine previous linguistic intersystematicities. Refinements accumulate in ‘lore,’ allowing subsequent generations to make further refinements, slowly knapping our covariant regimes into ever more effective (behaviour enabling) tools—particularly once the invention of writing essentially rendered lore immortal. As opposed to the supernatural metaphor of ‘bootstrapping,’ the apt metaphor here—indeed, the one used by cognitive archaeologists—is the mechanical metaphor of ratcheting. Refinements beget refinements, and so on, leveraging ever greater degrees of behavioural efficacy. Old behaviours are rendered obsolescent along with the prostheses that enable them.

The key thing to note here, of course, is that language is itself another behaviour. In other words, the noise reduction machinery that we call ‘reason’ is something that can itself become obsolete. In fact, its obsolescence seems pretty much inevitable.

Why so? Because the communicative function of reason is to maximize efficacies, to reduce the slippages that hamper coordination—to make mechanical. The rattling machinery image conceives natural languages as continuous with communication more generally, as a signal system possessing finite networking capacities. On the one extreme you have things like legal or technical scientific discourse, linguistic modes bent on minimizing the rattle (policing interpretation) as far as possible. On the other extreme you have poetry, a linguistic mode bent on maximizing the rattle (interpretative noise) as a means of generating novelty. Given the way behavioural efficacies fall out of self/other/world intersystematicity, the knapping of human communication is inevitable. Writing is such a refinement, one that allows us to raise fragments of language on the hoist, tinker with them (and therefore with ourselves) at our leisure, sometimes thousands of years after their original transmission. Telephony allowed us to mitigate the rattle of geographical distance. The internet has allowed us to combine the efficacies of telephony and text, to ameliorate the rattle of space and time. Smartphones have rendered these fixes mobile, allowing us to coordinate our behaviour no matter where we find ourselves. Even more significantly, within a couple years, we will have ‘universal translators,’ allowing us to overcome the rattle of disparate languages. We will have installed versions of our own linguistic sensitivities into our prosthetic devices, so that we can give them verbal ‘commands,’ coordinate with them, so that we can better coordinate with others and the world.

In other words, it stands to reason that at some point reason would begin solving, not only language, but itself. ‘Cognitive science,’ ‘information technology’—these are just two of the labels we have given to what is, quite literally, a civilization-defining war against covariant inefficiency, to isolate slippages and to ratchet the offending components tight, if not replace them altogether. Modern technological society constitutes a vast, species-wide attempt to become more mechanical, more efficiently integrated in nested levels of superordinate machinery. (You could say that the tyrant attempts to impose from without, capitalism kindles from within.)

The obsolescence of language, and therefore reason, is all but assured. One need only consider the research of Jack Gallant and his team, who have been able to translate neural activity into eerie, impressionistic images of what the subject is watching. Or perhaps even more jaw-dropping still, the research of Miguel Nicolelis into Brain Machine Interfaces, keeping in mind that scarcely one hundred years separates Edison’s phonograph and the Cloud. The kind of ‘Non-symbolic Workspace’ envisioned by David Roden in “Posthumanism and Instrumental Eliminativism” seems to be an inevitable outcome of the rattling machinery account. Language is yet another jury-rigged biological solution to yet another set of long-dead ecological problems, a device arising out of the accumulation of random mutations. As of yet, it remains indispensable, but it is by no means necessary, as the very near future promises to reveal. And as it goes, so goes the game of giving and asking for reasons. All the believed-in functions simply evaporate… I suppose.

And this just underscores the more general way Negarestani’s attempt to deal the future into the game of giving and asking for reasons scarcely shuffles the deck. I’ve been playing Jeremiah for decades now, so you would think I would be used to the indulgent looks I get from my friends and family when I warn them about what’s about to happen. Not so. Everyone understands that something is going on with technology, that some kind of pale has been crossed, but as of yet, very few appreciate its apocalyptic—and I mean that literally—profundity. Everyone has heard of Moore’s Law, of course, how every 18 months or so computing capacity per dollar doubles. What they fail to grasp is what the exponential nature of this particular ratcheting process means once it reaches a certain point. Until recently the doubling of computing power has remained far enough below the threshold of human intelligence to seem relatively innocuous. But consider what happens once computing power actually attains parity with the processing power of the human brain. What it means is that, no matter how alien the architecture, we have an artificial peer at that point in time. 18 months following, we have an artificial intellect that makes Aristotle or Einstein or Louis CK look like a child in comparison. 18 months following that (or probably less, since we won’t be slowing things up anymore) we will be domesticated cattle. And after that…
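To make the arithmetic concrete, here is a minimal sketch of the ratchet being invoked; the 18-month doubling period comes from the paragraph above, while the baseline capacity and the ‘parity’ threshold are arbitrary placeholders rather than estimates of anything.

```python
# Toy illustration of the exponential ratchet described above. The 18-month
# doubling period is the figure cited for Moore's Law in the text; the starting
# capacity and the 'human parity' threshold are arbitrary placeholders.

def capacity_after(months: float, start: float = 1.0, doubling_period: float = 18.0) -> float:
    """Capacity after `months`, doubling once every `doubling_period` months."""
    return start * 2 ** (months / doubling_period)

HUMAN_PARITY = 1024.0  # placeholder threshold, in the same arbitrary units as `start`

for year in range(0, 31, 3):
    c = capacity_after(year * 12)
    marker = "  <- past 'parity'" if c >= HUMAN_PARITY else ""
    print(f"year {year:2d}: capacity {c:>12,.1f}{marker}")

# The point: each step is just another doubling, yet within ten years of crossing
# any fixed threshold the curve has left it roughly a hundredfold behind.
```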

Are we to believe these machines will attribute norms and beliefs, that they will abide by a conception of reason arising out of 20th Century speculative intuitions on the nonnatural nature of human communicative constraints?

You get the picture. Negarestani’s ‘revisionary normative process’ is in reality an exponential technical process. In exponential processes, the steps start small, then suddenly become astronomical. As it stands, if Moore’s Law holds (and given this, I am confident it will), then we are a decade or two away from God.

I shit you not.

Really, what does ‘kitsch Marxism’ or ‘neoliberalism’ or any ‘ism’ whatsoever mean in such an age? We can no longer pretend that the tsunami of disenchantment will magically fall just short of our intentional feet. Disenchantment, the material truth of the Enlightenment, has overthrown the normative claims of the Enlightenment—or humanism. “This is a project which must align politics with the legacy of the Enlightenment,” the authors of the Accelerationist Manifesto write, “to the extent that it is only through harnessing our ability to understand ourselves and our world better (our social, technical, economic, psychological world) that we can come to rule ourselves” (14). In doing so they commit the very sin of anachronism they level at their critical competitors. They fail to appreciate the foundational role ignorance plays in intentional cognition, which is to say, the very kind of moral and political reasoning they engage in. Far more than ‘freedom’ is overturned once one concedes the mechanical. Knowledge is no universal Redeemer, which means the ideal of Enlightenment autonomy is almost certainly mythical. What’s required isn’t an aspiration to theorize new technologies with old concepts. What’s required is a fundamental rethink of the political in radically post-intentional terms.

As far as I can see, the alternatives are magic or horror… or something no one has yet conceived. And until we understand the horror, grasp all the ways our blinkered perspective on ourselves has deceived us about ourselves, this new conception will never be discovered. Far from ‘resignation,’ abandoning the normative ideals of the Enlightenment amounts to overcoming the last blinders of superstition, being honest to our ignorance. The application of intentional cognition to second-order, theoretical questions is a misapplication of intentional cognition. The time has come to move on. Yet another millennium of philosophical floundering is a luxury we no longer possess, because odds are, we have no posterity to redeem our folly and conceit.

Humanity possesses no essential, invariant core. Reason is a parochial name we have given to a parochial biological process. No transcendental/quasi-transcendental/virtual/causal-but-acausal functional apparatus girds our souls. Norms are ghosts, skinned and dismembered, but ghosts all the same. Reason is simply an evolutionary fix that outruns our peephole view. The fact is, we cannot presently imagine what will replace it. The problem isn’t ‘incommensurability’ (which is another artifact of Intentionalism). If an alien intelligence came to earth, the issue wouldn’t be whether it spoke a language we could fathom, because if it’s travelling between stars, it will have shed language along with the rest of its obsolescent biology. If an alien intelligence came to earth, the issue would be one of what kind of superordinate machine will result. Basically, How will the human and the alien combine? When we ask questions like, ‘Can we reason with it?’ we are asking, ‘Can we linguistically condition it to comply?’ The answer has to be, No. Its mere presence will render us components of some description.

The same goes for artificial intelligence. Medial neglect means that the limits of cognition systematically elude cognition. We have no way of intuiting the swarm of subpersonal heuristics that comprise human cognition, no nondiscursive means of plugging them into the field of the natural. And so we become a yardstick we cannot measure, victims of the Only-game-in-town Effect, the way the absence of explicit alternatives leads to the default assumption that no alternatives exist. We simply assume that our reason is the reason, that our intelligence is intelligence. It bloody well sure feels that way. And so the contingent and parochial become the autonomous and universal. The idea of orders of ‘reason’ and ‘intelligence’ beyond our organizational bounds boggles, triggers dismissive smirks or accusations of alarmism.

Artificial intelligence will very shortly disabuse us of this conceit. And again, the big question isn’t, ‘Will it be moral?’ but rather, how will human intelligence and machine intelligence combine? Be it bloody or benevolent, the subordination of the ‘human’ is inevitable. The death of language is the death of reason is the birth of something very new, and very difficult to imagine, a global social system spontaneously boiling its ‘airy parts’ away, ratcheting until no rattle remains, a vast assemblage fixated on eliminating all dissipative (as opposed to creative) noise, gradually purging all interpretation from its interior.

Extrapolation of the game of giving and asking for reasons into the future does nothing more than demonstrate the contingent parochialism—the humanity—of human reason, and thus the supernaturalism of normativism. Within a few years you will be speaking to your devices, telling them what to do. A few years after that, they will be telling you what to do, ‘reasoning’ with you—or so it will seem. Meanwhile, the ongoing, decentralized rationalization of production will lead to the wholesale purging of human inefficiencies from the economy, on a scale never before witnessed. The networks of equilibria underwriting modern social cohesion will be radically overthrown. Who can say what kind of new machine will rise to take its place?

My hope is that Negarestani abandons the Enlightenment myth of reason, the conservative impulse that demands we submit the radical indeterminacy of our technological future to some prescientific conception of ourselves. We’ve drifted far past the point of any atavistic theoretical remedy. His ingenuity is needed elsewhere.

At the very least, he should buckle up, because our lesson in exponents is just getting started.

 

The Blind Mechanic

Thus far, the assumptive reality of intentional phenomena has provided the primary abductive warrant for normative metaphysics. The Eliminativist could do little more than argue the illusory nature of intentional phenomena on the basis of their incompatibility with the higher-dimensional view of science. Since science was itself so obviously a family of normative practices, and since numerous intentional concepts had been scientifically operationalized, the Eliminativist was easily characterized as an extremist, a skeptic who simply doubted too much to be cogent. And yet, the steady complication of our understanding of consciousness and cognition has consistently served to demonstrate the radically blinkered nature of metacognition. As the work of Stanislas Dehaene and others is making clear, consciousness is a functional crossroads, a serial signal delivered from astronomical neural complexities for broadcast to astronomical neural complexities. Conscious metacognition is not only blind to the actual structure of experience and cognition, it is blind to this blindness. We now possess solid, scientific reasons to doubt the assumptive reality that underwrites the Intentionalist’s position.

The picture of consciousness that researchers around the world are piecing together is the picture predicted by Blind Brain Theory. It argues that the entities and relations posited by Intentional philosophy are the result of neglect, the fact that philosophical reflection is blind to its inability to see. Intentional heuristics are adapted to first-order social problem-solving, and are generally maladaptive in second-order theoretical contexts. But since we lack the metacognitive wherewithal to even intuit the distinctions between our specialized cognitive devices, we assume applicability where there is none, and so continually blunder at the problem, again and again. The long and the short of it is that the Intentionalist needs some empirically plausible account of metacognition to remain tenable, some account of how they know the things they claim to know. This was always the case, of course, but with BBT the cover provided by the inscrutability of intentionality disappears. Simply put, the Intentionalist can no longer tie their belt to the post of ineliminability.

Science is the only reliable provider of theoretical cognition we have, and to the extent that intentionality frustrates science, it frustrates theoretical cognition. BBT allays that frustration. BBT allows us to recast what seem to be irreducible intentional problematics in terms entirely compatible with the natural scientific paradigm. It lets us stick with the high-dimensional, information-rich view. In what follows I hope to show how doing so, even at an altitude, handily dissolves a number of intentional snarls.

In Davidson’s Fork, I offered an eliminativist radicalization of Radical Interpretation, one that characterized the scene of interpreting another speaker from scratch in mechanical terms. What follows is preliminary in every sense, a way to suss out the mechanical relations pertinent to reason and interpretation. Even still, I think the resulting picture is robust enough to make hash of Reza Negarestani’s Intentionalist attempt to distill the future of the human in “The Labor of the Inhuman” (part I can be found here, and part II, here). The idea is to rough out the picture in this post, then chart its critical repercussions against the Brandomian picture so ingeniously extended by Negarestani. As a first pass, I fear my draft will be nowhere near so elegant as Negarestani’s, but as I hope to show, it is revealing in the extreme, a sketch of the ‘nihilistic desert’ that philosophers have been too busy trying to avoid to ever really sit down and think through.

A kind of postintentional nude.

As we saw two posts back, if you look at interpretation in terms of two stochastic machines attempting to find some mutual, causally systematic accord between the causally systematic accords each maintains with their environment, the notion of Charity, or the attribution of rationality, as some kind of indispensable condition of interpretation falls by the wayside, replaced by a kind of ‘communicative pre-established harmony’—or ‘Harmony,’ as I’ll refer to it here. There is no ‘assumption of rationality,’ no taking of ‘intentional stances,’ because these ‘attitudes’ are not only not required, they express nothing more than a radically blinkered metacognitive gloss on what is actually going on.

Harmony, then, is the sum of evolutionary stage-setting required for linguistic coupling. It refers to the way we have evolved to be linguistically attuned to our respective environmental attunements, enabling the formation of superordinate systems possessing greater capacities. The problem of interpretation is the problem of Disharmony, the kinds of ‘slippages’ in systematicity that impair or, as in the case of Radical Interpretation, prevent the complex coordination of behaviours. Getting our interpretations right, in other words, can be seen as a form of noise reduction. And since the traditional approach concentrates on the role rationality plays in getting our interpretations right, this raises the prospect that what we call reason can be seen as a kind of noise reduction mechanism, a mechanism for managing the systematicity—or ‘tuning’ as I’ll call it here—between disparate interpreters and the world.

On this account, these very words constitute an exercise in tuning, an attempt to tweak your covariational regime in a manner that reduces slippages between you and your (social and natural) world. If language is the causal thread we use to achieve intersystematic relations with our natural and social environments, then ‘reason’ is simply one way we husband the efficacy of that causal thread.

So let’s start from scratch, scratch. What do evolved, biomechanical systems such as humans need to coordinate astronomically complex covariational regimes with little more than sound? For one, they need ways to trigger selective activations of the other’s regime for effective behavioural uptake. Triggering requires some kind of dedicated cognitive sensitivity to certain kinds of sounds—those produced by complex vocalizations, in our case. As with any environmental sensitivity, iteration is the cornerstone, here. The complexity of the coordination possible will of course depend on the complexity of the activations triggered. To the extent that evolution rewards complex behavioural coordination, we can expect evolution to reward the communicative capacity to trigger complex activations. This is where the bottleneck posed by the linearity of auditory triggers becomes all important: the adumbration of iterations is pretty much all we have, trigger-wise. Complex activation famously requires some kind of molecular cognitive sensitivity to vocalizations, the capacity to construct novel, covariational complexities on the slim basis of adumbrated iterations. Linguistic cognition, in other words, needs to be a ‘combinatorial mechanism,’ a device (or series of devices) able to derive complex activations given only a succession of iterations.

These combinatorial devices correspond to what we presently understand, in disembodied/supernatural form, as grammar, logic, reason, and narrative. They are neuromechanical processes—the long history of aphasiology assures us of this much. On BBT, their apparent ‘formal nature’ simply indicates that they are medial, belonging to enabling processes outside the purview of metacognition. This is why they had to be discovered, why our efficacious ‘knowledge’ of them remains ‘implicit’ or invisible/inaccessible. This is also what accounts for their apparent ‘transcendent’ or ‘a priori’ nature, the spooky metacognitive sense of ‘absent necessity’—as constitutive of linguistic comprehension, they are, not surprisingly, indispensable to it. Located beyond the metacognitive pale, however, their activities are ripe for post hoc theoretical mischaracterization.

Say someone asks you to explain modus ponens, ‘Why ‘If p, then q’?’ Medial neglect means that the information available for verbal report when we answer has nothing to do with the actual processes involved in, ‘If p, then q,’ so you say something like, ‘It’s a rule of inference that conserves truth.’ Because language needs something to hang onto, and because we have no metacognitive inkling of just how dismal our inklings are, we begin confabulating realms, some ontologically thick and ‘transcendental,’ others razor thin and ‘virtual,’ but both possessing the same extraordinary properties otherwise. Because metacognition has no access to the actual causal functions responsible, once the systematicities are finally isolated in instances of conscious deliberation, those systematicities are reported in a noncausal idiom. The realms become ‘intentional,’ or ‘normative.’ Dimensionally truncated descriptions of what modus ponens does (‘conserves truth’) become the basis of claims regarding what it is. Because the actual functions responsible belong to the enabling neural architecture they possess an empirical necessity that can only seem absolute or unconditional to metacognition—as should come as no surprise, given that a perspective ‘from the inside on the inside,’ as it were, has no hope of cognizing the inside the way the brain cognizes its outside more generally, or naturally.
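For reference, the rule being glossed here, what the confabulated answer calls ‘a rule of inference that conserves truth,’ is standardly schematized as follows:

```latex
% Modus ponens: from a conditional and its antecedent, infer the consequent.
\[
\frac{p \rightarrow q \qquad p}{q}
\quad\text{(modus ponens)}
\]
```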

I’m just riffing here, but it’s worth getting a sense of just how far this implicature can reach.

Consider Carroll’s “What the Tortoise Said to Achilles.” The reason Achilles can never logically compel the Tortoise with the statement of another rule is that each rule cited becomes something requiring justification. The reason we think we need things like ‘axioms’ or ‘communal norms’ is that the metacognitive capacity to signal for additional ‘tuning’ can be applied at any communicative juncture. This is the Tortoise’s tactic, his way of showing how ‘logical necessity’ is actually contingent. Metacognitive blindness means that citing another rule is all that can be done, a tweak that can be queried once again in turn. Carroll’s puzzle is a puzzle, not because it reveals that the source of ‘normative force’ lies in some ‘implicit other’ (the community, typically), but because of the way it forces metacognition to confront its limits—because it shows us to be utterly ignorant of knowing, of how it functions, let alone what it consists in. In linguistic tuning, some thread always remains unstitched, the ‘foundation’ is always left hanging simply because the adumbration of iterations is always linear and open ended.
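Schematically, and keeping to Carroll’s own setup (premises A and B, conclusion Z), the Tortoise’s regress runs as follows: every rule invoked to license the inference is demoted to one more premise whose own license can be queried in turn.

```latex
% Carroll's regress: each licensing rule becomes another premise to be licensed.
\[
\begin{aligned}
&A,\; B &&\vdash\ Z\,?\\
&A,\; B,\; C\ (\text{if } A \text{ and } B\text{, then } Z) &&\vdash\ Z\,?\\
&A,\; B,\; C,\; D\ (\text{if } A\text{, } B\text{, and } C\text{, then } Z) &&\vdash\ Z\,?\\
&\quad\vdots
\end{aligned}
\]
```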

The reason why ‘axioms’ need to be stipulated or why ‘first principles’ always run afoul of the problem of the criterion is simply that they are low-dimensional glosses on high-dimensional (‘embodied’) processes that are causal. Rational ‘noise reduction’ is a never ending job; it has to be such, insofar as noise remains an ineliminable by-product of human communicative coordination. From a pitiless, naturalistic standpoint, knowledge consists of breathtakingly intricate, but nonetheless empirical (high-dimensional, embodied), ways to environmentally covary—and nothing more. There is no ‘one perfect covariational regime,’ just degrees of downstream behavioural efficacy. Likewise, there is no ‘perfect reason,’ no linguistic mechanism capable of eradicating all noise.

What we have here is an image of reason and knowledge as ‘rattling machinery,’ which is to say, as actual and embodied. On this account, reason enables various mechanical efficiencies; it allows groups of humans to secure more efficacious coordination for collective behaviour. It provides a way of policing the inevitable slippages between covariant regimes. ‘Truth,’ on this account, simply refers to the sufficiency of our covariant regimes for behaviour, the fact that they do enable efficacious environmental interventions. The degree to which reason allows us to converge on some ‘truth’ is simply the degree to which it enables mechanical relationships, actual embodied encounters with our natural and social environments. Given Harmony—the sum of evolutionary stage-setting required—it allows collectives to maximize the efficiencies of coordinated activity by minimizing the interpretative noise that hobbles all collective endeavours.

Language, then, allows humans to form superordinate mechanisms consisting of ‘airy parts,’ to become components of ‘superorganisms,’ whose evolved sensitivities allow mere sounds to tweak and direct, to generate behaviour enabling intersystematicities. ‘Reason,’ more specifically, allows for the policing and refining of these intersystematicities. We are all ‘semantic mechanics’ with reference to one another, continually tinkering and being tinkered with, calibrating and being calibrated, generally using efficacious behaviour, the ability to manipulate social and natural environments, to arbitrate the sufficiency of our ‘fixes.’ And all of this plays out in the natural arena established by evolved Harmony.

Now this ‘rattling machinery’ image of reason and knowledge is obviously true in some respect: We are embodied, after all, causally embroiled in our causal environments. Language is an evolutionary product, as is reason. Misfires are legion, as we might expect. The only real question is whether this rattling machinery can tell the whole story. The Intentionalist, of course, says no. They claim that the intentional enjoys some kind of special functional existence over and above this rattling machinery, that it constitutes a regime of efficacy somehow grasped via the systematic interrogation of our intentional intuitions.

The stakes are straightforward. Either what we call intentional solutions are actually mechanical solutions that we cannot intuit as mechanical solutions, or what we call intentional solutions are actually intentional solutions that we can intuit as intentional solutions. What renders this first possibility problematic is radical skepticism. Since we intuit intentional solutions as intentional, it suggests that our intuitions are deceptive in the extreme. Because our civilization has trusted these intuitions since the birth of philosophy, they have come to inform a vast portion of our traditional understanding. What renders this second possibility problematic is, first and foremost, supernaturalism. Since the intentional is incompatible with the natural, the intentional must consist either in something not natural, or in something that forces us to completely revise our understanding of the natural. And even if such a feat could be accomplished, the corresponding claim that it could be intuited as such remains problematic.

Blind Brain Theory provides a way of seeing Intentionalism as a paradigmatic example of ‘noocentrism,’ as the product of a number of metacognitive illusions analogous to the cognitive illusion underwriting the assumption of geocentrism, centuries before. It is important to understand that there is no reason why our normative problem-solving should appear as it is to metacognition—least of all, the successes of those problem-solving regimes we call intentional. The successes of mathematics stand in astonishing contrast to the failure to understand just what mathematics is. The same could be said of any formalism that possesses practical application. It even applies to our everyday use of intentional terms. In each case, our first-order assurance utterly evaporates once we raise theoretically substantive, second-order questions—exactly as BBT predicts. This contrast of breathtaking first-order problem solving power and second-order ineptitude is precisely what one might expect if the information accessible to metacognition was geared to domain-specific problem-solving. Add anosognosia to the mix, the inability to metacognize our metacognitive incapacity, and one has a wickedly parsimonious explanation for the scholastic mountains of inert speculation we call philosophy.

(But then, in retrospect, this was how it had to be, didn’t it? How it had to end? With almost everyone horrifically wrong. A whole civilization locked in some kind of dream. Should anyone really be surprised?)

Short of some unconvincing demand that our theoretical account appease a handful of perennially baffling metacognitive intuitions regarding ourselves, it’s hard to see why anyone should entertain the claim that reason requires some ‘special X’ over and above our neurophysiology (and prostheses). Whatever conscious cognition is, it clearly involves the broadcasting/integration of information arising from unknown sources for unknown consumers. It simply follows that conscious metacognition has no access whatsoever to the various functions actually discharged by conscious cognition. The fact that we have no intuitive awareness of the panoply of mechanisms cognitive science has isolated demonstrates that we are prone to at least one profound metacognitive illusion—namely ‘self-transparency.’ The ‘feeling of willing’ is generally acknowledged as another such illusion, as is homuncularism or the ‘Cartesian Theatre.’ How much does it take before we acknowledge the systematic unreliability of our metacognitive intuitions more generally? Is it really just a coincidence, the ghostly nature of norms and the ghostly nature of perhaps the most notorious metacognitive illusion of all, souls? Is it mere happenstance, the apparent acausal autonomy of normativity and our matter of fact inability to source information consciously broadcast? Is it really the case that all these phenomena, these cause-incompatible intentional things, are ‘otherworldly’ for entirely different reasons? At some point it has to begin to seem all too convenient.

Make no mistake, the Rattling Machinery image is a humbling one. Reason, the great, glittering sword of the philosopher, becomes something very local, very specific, the meaty product of one species at one juncture in their evolutionary development.

On this account, ‘reason’ is a making-machinic machine, a ‘devicing device’—the ‘blind mechanic’ of human communication. Argumentation facilitates the efficacy of behavioural coordination, drastically so, in many instances. So even though this view relegates reason to one adaptation among others, it still concedes tremendous significance to its consequences, especially when viewed in the context of other specialized cognitive capacities. The ability to recall and communicate former facilitations, for instance, enables cognitive ‘ratcheting,’ the stacking of facilitations upon facilitations, and the gradual refinement, over time, of the covariant regimes underwriting behaviour—the ‘knapping’ of knowledge (and therefore behaviour), you might say, into something ever more streamlined, ever more effective.

The thinker, on this account, is a tinker. As I write this, myriad parallel processors are generating a plethora of nonconscious possibilities that conscious cognition serially samples and broadcasts to myriad other nonconscious processors, generating more possibilities for serial sampling and broadcasting. The ‘picture of reason’ I’m attempting to communicate becomes more refined, more systematically interrelated (for better or worse) to my larger covariant regime, more prone to tweak others, to rewrite their systematic relationship to their environments, and therefore their behaviour. And as they ponder, so they tinker, and the process continues, either to peter out in behavioural futility, or to find real environmental traction (the way I ‘tink’ it will (!)) in a variety of behavioural contexts.

Ratcheting means that the blind mechanic, for all its misfires, all its heuristic misapplications, is always working on the basis of past successes. Ratcheting, in other words, assures the inevitability of technical ‘progress,’ the gradual development of ever more effective behaviours, the capacity to componentialize our environments (and each other) in more and more ways—to the point where we stand now, the point where intersystematic intricacy enables behaviours that allow us to forego the ‘airy parts’ altogether. To the point where the behaviour enabled by cognitive structure can now begin directly knapping that structure, regardless of the narrow tweaking channels, sensitivities, provided by evolution.

The point of the Singularity.

For some time now I’ve been arguing that the implications of the Singularity already embroil us—that the Singularity can be seen, in fact, as the material apotheosis of the Semantic Apocalypse, insofar as it is the point where the Scientific Image of the human at last forecloses on the Manifest Image.

This brings me to Reza Negarestani’s “The Labor of the Inhuman,” his two-part meditation on the role we should expect—even demand—reason to play in the Posthuman. He adopts Brandom’s claim that sapience, the capacity to play the ‘game of giving and asking for reasons,’ distinguishes humans as human. He then goes on to argue that this allows us, and ultimately commits us, to seeing the human as a kind of temporally extended process of rational revision, one that ultimately results in the erasure of the human—or the ‘inhuman.’ Ultimately, what it means to be human is to be embroiled in a process of becoming inhuman. He states his argument thus:

The contention of this essay is that universality and collectivism cannot be thought, let alone attained, through consensus or dissensus between cultural tropes, but only by intercepting and rooting out what gives rise to the economy of false choices and by activating and fully elaborating what real human significance consists of. For it is, as will be argued, the truth of human significance—not in the sense of an original meaning or a birthright, but in the sense of a labor that consists of the extended elaboration of what it means to be human through a series of upgradable special performances—that is rigorously inhuman.

In other words, so long as we fail to comprehend the inhumanity of the human, this rational-revisionary process, we fail to understand the human, and so have little hope of solving problems pertaining to the human. Understanding the ‘truth of human significance,’ therefore, requires understanding what the future will make of the human. This requires that Negarestani prognosticate, that he pick out the specific set of possibilities constituting the inhuman. The only principled way to do that is to comprehend some set of systematic constraints operative in the present. But his credo, unlike that of the ‘Hard SF’ writer, is to ignore the actual technics of the natural, and to focus on the speculative technics of the normative. His strategy, in other words, is to predict the future of the human using only human resources—to see the fate of the human, the ‘inhuman,’ as something internal to the intentionality of the human. And this, as I hope to show in the following installment, is simply not plausible.

The Eliminativistic Implicit (I): The Necker Cube of Everyday and Scientific Explanation

Go back to what seems the most important bit, then ask the Intentionalist this question: What makes you think you have conscious access to the information you need? They’ll twist and turn, attempt to reverse the charges, but if you hold them to this question, it should be a show-stopper.

What follows, I fear, is far more long-winded.

Intentionalists, I’ve found, generally advert to one of two general strategies when dismissing eliminativism. The first is founded on what might be called the ‘Preposterous Complaint,’ the idea that eliminativism simply contradicts too many assumptions and intuitions to be considered plausible. As Uriah Kriegel puts it, “if eliminativism cannot be acceptable unless a relatively radical interpretation of cognitive science is adopted, then eliminativism is not in good shape” (“Non-phenomenal Intentionality,” 18). But where this criticism would be damning in other, more established sciences, it amounts to little more than an argument ad populum in the case of cognitive science, which as of yet lacks any consensual definition of its domain. The very naturalistic inscrutability behind the perpetual controversy also motivates the Eliminativist’s radical interpretation. The idea that something very basic is wrong with our approach to questions of experience and intentionality is by no means a ‘preposterous’ one. You could say the reality and nature of intentionality is the question. The Preposterous Complaint, in other words, doesn’t so much impugn the position as insinuate career suicide.

The second turns on what might be called the ‘Presupposition Complaint,’ the idea that eliminativism implicitly presupposes the very intentionality that it claims to undermine. The tactic generally consists of scanning the eliminativist’s claims, picking out various intentional concepts, then claiming that use of such concepts implicitly affirms the existence of intentionality. The Eliminativist, in other words, commits ‘cognitive suicide’ (as Lycan, 2005, calls it). Insofar as the use of intentional concepts is unavoidable, and insofar as the use of intentional concepts implicitly affirms the existence of intentionality, intentionality is ineliminable. The Eliminativist is thus caught in an obvious contradiction, explicitly asserting not-A on the one hand, while implicitly asserting A on the other.

On BBT, intentionality as traditionally theorized, far from simply ‘making explicit’ what is ‘implicitly the case,’ is actually a kind of conceptual comedy of errors turning on heuristic misapplication and metacognitive neglect. Such appeals to ‘implicit intentionality,’ in other words, are appeals to the very thing BBT denies. They assume the sufficiency of the very metacognitive intuitions that positions such as my own call into question. The Intentionalist charge of performative contradiction simply begs the question. It amounts to nothing more than the bald assertion that intentionality cannot be eliminated because intentionality is ineliminable.

The ‘Presupposition Complaint’ is pretty clearly empty as an argumentative strategy. In dialogical terms, however, I think it remains the single biggest obstacle to the rational prosecution of the Intentionalist/Eliminativist debate—if only because of the way it allows so many theorists to summarily dismiss the threat of Eliminativism. Despite its circularity, the Presupposition Complaint remains the most persistent objection I encounter—in fact, many critics persist in making it even after its vicious circularity has been made clear. And this has led me to realize the almost spectacular role the notion of the implicit plays in all such debates. For many thinkers, the intentional nature of the implicit is simply self-evident, somehow obvious to intuition. This is certainly how it struck me before I began asking the kinds of questions motivating the present piece. After all, what else could the implicit be, if not the intentional ‘ground’ of our intentional ‘practices’?

In what follows, I hope to show how this characterization of the implicit, far from obvious, actually depends, not only on ignorance, but on a profound ignorance of our ignorance. On the account I want to give here, the implicit, far from naming some spooky ‘infraconceptual’ or ‘transcendental’ ‘before’ of thought and cognition, simply refers to what we know is actually occluded from metacognitive appraisals of experience: namely, nature as described by science. To frame the issue in terms of a single question, what I want to ask in this post and its sequels is, What warrants the Intentionalist’s claims regarding implicit normativity, say, over an Eliminativist’s claims of implicit mechanicity?

So what is the implicit? Given the crucial role the concept plays in a variety of discourses, it’s actually remarkable how few theorists have bothered with the question of making the implicit qua implicit explicit (Stephen Turner and Eugene Gendlin are signature exceptions in this regard, of course). Etymologically, ‘implicit’ derives from the Latin, implicitus, the participle of implico, which means ‘to involve’ or ‘to entangle,’ meanings that seem to bear more on implicit’s perhaps equally mysterious relatives, ‘imply’ or ‘implicate.’ According to Wiktionary, uses that connote ‘entangled’ are now obsolete. Implicit, rather, is generally taken to mean, 1) “Implied indirectly, without being directly expressed,” 2) “Contained in the essential nature of something but not openly shown,” and 3) “Having no reservations or doubts; unquestioning or unconditional; usually said of faith or trust.” Implicit, in other words, is generally taken to mean unspoken, intrinsic, and unquestioned.

Prima facie, at least, these three senses are clearly related. Unless spoken about, the implicit cannot be questioned, and so must remain an intrinsic feature of our performances. The ‘implicit,’ in other words, refers to something operative within us that nonetheless remains hidden from our capacity to consciously report. Logical or material inferential implications, for instance, guide subsequent transitions within discourse, whether we are conscious of them or not. The same might be said of ‘emotional implications,’ or ‘political implications,’ or so on.

Let’s call this the Hidden Constraint Model of the implicit, the notion that something outside conscious experience somehow ‘contains’ organizing principles constraining conscious experience. The two central claims of the model can be recapitulated as:

1) The implicit lies in what conscious cognition neglects. The implicit is inscrutable.

2) The implicit somehow constrains conscious cognition. The implicit is effective.

From inscrutability and effectiveness, we can infer at least two additional features pertaining to the implicit:

3) The effective constraints on any given moment of conscious cognition require a subsequent moment of conscious cognition to be made explicit. We can only isolate the biases specific to a claim we make subsequent to that claim. The implicit, in other words, is only retrospectively accessible.

4) Effective constraints can only be consciously cognized indirectly via their effects on conscious experience. Referencing, say, the ‘implicit norms governing interpersonal conduct’ involves referencing something experienced only in effect. ‘Norms’ are not part of the catalogue of nature—at least as anything recognizable as such. The implicit, in other words, is only inferentially accessible.

So consider, as a test case, Hume’s famous meditations on causation and induction. In An Enquiry Concerning Human Understanding, Hume points out how reason, no matter how cunning, is powerless when it comes to matters of fact. Short of actual observation, we have no way of divining the causal connections between events. When we turn to experience, however, all we ever observe is the conjunction of events. So what brings about our assumptive sense of efficacy, our sense of causal power? Why should repeating the serial presentation of two phenomena produce the ‘feeling,’ as Hume terms it, that the first somehow determines the second? Hume’s ‘skeptical solution,’ of course, attributes the feeling to mere ‘custom or habit.’ As he writes, “[t]he appearance of a cause always conveys the mind, by a customary transition, to the idea of the effect” (ECHU, 51, italics my own).

All four of the features enumerated above are clearly visible in the above. Hume makes no dispute of the fact that the repetition of successive events somehow produces the assumption of efficacy. “On this,” he writes, “are founded all our reasonings concerning matters of fact or existence” (51). Exposure to such repetitions fundamentally constrains our understanding of subsequent exposures, to the point where we cannot observe the one without assuming the other—to the point where the bulk of scientific knowledge is raised upon it. Efficacy is effective—to say the least!

But there’s nothing available to conscious cognition—nothing observable in these successive events—over and above their conjunction. “One event follows another,” Hume writes; “but we never can observe any tie between them. They seem conjoined, but never connected” (49). Efficacy, in other words, is inscrutable as well.

So then what explains our intuition of efficacy? The best we can do, it seems, is to pause and reflect upon the problem (as Hume does), to posit some X (as Hume does) reasoning from what information we can access. Efficacy, in other words, is only retrospectively and inferentially accessible.

We typically explain phenomena by plugging them into larger functional economies, by comprehending how their precursors constrain them and how they constrain their successors in turn. This, of course, is what made Hume’s discovery—that efficacy is inscrutable—so alarming. When it comes to environmental inquiries we can always assay more information via secondary investigation and instrumentation. As a result, we can generally solve for precursors in our environments. When it comes to metacognitive inquiries such as Hume’s, however, we very quickly stumble into our own incapacity. “And what stronger instance,” Hume asks, “can be produced of the surprising ignorance and weakness of the understanding, than the present?” (51). Efficacy, the very thing that binds phenomena to their precursors, is itself without precursors.

Not surprisingly, the comprehension of cognitive phenomena (such as efficacy) without apparent precursors poses a special kind of problem. Given efficacy, we can comprehend environmental nature. We simply revisit the phenomena and infer, over and over, accumulating the information we need to arbitrate between different posits. So how, then, are we supposed to comprehend efficacy? The empirical door is nailed shut. No matter how often we revisit and infer, we simply cannot accumulate the data we need to arbitrate between our various posits. Above, we see Hume rooting around with questions (our primary tool for making ignorance visible) and finding no trace of what grounds his intuitions of empirical efficacy. Thus the apparent dilemma: Either we acknowledge that we simply cannot understand these intuitions, “that we have no idea of connexion or power at all, and that these words are absolutely without any meaning” (49), or we elaborate some kind of theoretical precursor, some fund of hidden constraint, that generates, at the very least, the semblance of knowledge. We posit some X that ‘reveals’ or ‘expresses’ or ‘makes explicit’ the hidden constraint at issue.

These ‘X posits’ have been the bread and butter of philosophy for some time now. Given Hume’s example it’s easy to see why: the structure and dynamics of cognition, unlike the structure and dynamics of our environment, do not allow for the accumulation of data. The myriad observational opportunities provided by environmental phenomena simply do not exist for phenomena like efficacy. Since individual (and therefore idiosyncratic) metacognitive intuitions are all we have to go on, our makings explicit are pretty much doomed to remain perpetually underdetermined—to be ‘merely philosophical.’

I take this as uncontroversial. What makes philosophy philosophy as opposed to a science is its perennial inability to arbitrate between incompatible theoretical claims. This perennial inability to arbitrate between incompatible theoretical claims, like the temporary inability to arbitrate between incompatible theoretical claims in the sciences, is in some important respect an artifact of insufficient information. But where the sciences generally possess the resources to accumulate the information required, philosophy does not. Aside from metacognition or ‘theoretical reflection,’ philosophy has precious little in the way of informational resources.

And yet we soldier on. The bulk of traditional philosophy relies on what might be called the Accessibility Conceit: the notion that, despite more than two thousand years of failure, retrospective (reflective, metacognitive) interrogations of our activities somehow access enough information pertaining to their ‘intrinsic character’ to make the inferential ‘expression’ of our implicit precursors a viable possibility. Hope, as they say, springs eternal. Rather than blame their discipline’s manifest institutional incapacity on some more basic metacognitive incapacity, philosophers generally blame the problem on the various conceptual apparatuses used. If they could only get their concepts right, the information is there for the taking. And so they tweak and they overturn, posit this precursor and that, and the parade of ‘makings explicit’ grows and grows and grows. In a very real sense, the Accessibility Conceit, the assumption that the tools and material required to cognize the implicit are available, is the core commitment of the traditional philosopher. Why show up for work, otherwise?

The question of comprehending conscious experience is the question of comprehending the constitutive and dynamic constraints on conscious experience. Since those constraints don’t appear within conscious experience, we pay certain people called ‘philosophers’ to advance speculative theories of their nature. We are a rather self-obsessed species, after all.

Advancing speculative hypotheses regarding each other’s implicit nature is something we do all the time. According to Robin Dunbar, some two thirds of human communication is devoted to gossip. We are continually replaying, revisiting—even our anticipations yoke the neural engines of memory. In fact, we continually interrogate our emotionally charged interactions, concocting rationales, searching for the springs of others’ actions, declaring things like ‘She’s just jealous,’ or ‘He’s on to you.’ There is, you might say, an ‘Everyday Implicit’ implicit in our everyday discourse.

As there has to be. Conscious experience may be ‘as wide as the sky,’ as Dickinson says, but it is little more than a peephole. Conscious experience, whatever it turns out to be, seems to be primarily adapted to deliberative behaviour in complex environments. Among other things, it operates as a training interface, where the deliberative repetition of actions can be committed to automatic systems. So perhaps it should come as no surprise that, like behaviour, it is largely serial. When peephole, serial access to a complex environment is all you have, the kind of retrospective inferential capacity possessed by humans becomes invaluable. Our ability to ‘make things explicit’ is pretty clearly a central evolutionary design feature of human consciousness.

In a fundamental sense, then, making-explicit is just what we humans do. It makes sense that with time, especially once literacy allowed for the compiling of questions—an inventory of ignorance, you might say—we would find certain humans attempting to make making explicit itself explicit. And since making each other explicit was something that we seemed to do with some degree of reliability, it makes sense that the difficulty of this new task should confound these inquirers. The Everyday Implicit was something they used with instinctive ease, reliably attributing all manner of folk-intentional properties to individuals all the time. And yet, whenever anyone attempted to make this Everyday Implicit explicit, they seemed to come up with something different.

No one could agree on any canonical explication. And yet, aside from the ancient skeptics, they all agreed on the possibility of such a canonical explication. They all hewed to the Accessibility Conceit. And since the skeptics’ mysterian posit was as underdetermined as any of their own claims, they were inclined to be skeptical of the skeptics. Otherwise, their Philosophical Implicit remained the only game in town when it came to things human and implicit. They needed only look to the theologians for confirmation of their legitimacy. At least they placed their premises before their conclusions!

But things have changed. Over the past few decades, cognitive scientists have developed a number of ingenious experimental paradigms designed to reveal the implicit underbelly of what we think and do. In the now notorious Implicit Association Test, for instance, the time subjects require to pair concepts is thought to indicate the cognitive resources required, and thus provide an indirect measure of implicit attitudes. If it takes a white individual longer to pair stereotypically black names with positive attributes than it does white names, this is presumed to evidence an ‘implicit bias’ against blacks. Actions, as the old proverb has it, speak louder than words. It does seem intuitive to suppose that the racially skewed effort involved in value identifications tokens some kind of bias. Versions of this paradigm continue to proliferate. Once the exclusive purview of philosophers, the implicit has now become the conceptual centerpiece of a vast empirical domain. Cognitive science has now revealed myriad processes of implicit learning, interpretation, evaluation, and even goal-setting. Taken together, these processes form what is generally referred to as System 1 cognition (see table below), an assemblage of specialized cognitive capacities—heuristics—adapted to the ‘quick and dirty’ solution of domain specific ‘problem ecologies’ (Chow, 2011; Todd and Gigerenzer, 2012), and which operate in stark contrast to what is called System 2 cognition, the slow, serial, and deliberate problem solving related to conscious access (defined in Dehaene’s operationalized sense of reportability)—what we take ourselves to be doing this very moment, in effect.

DUAL PROCESS THEORIES IN PSYCHOLOGY

System 1 Cognition (Implicit) | System 2 Cognition (Explicit)
Not conscious | Conscious
Not human specific | Human specific
Automatic | Deliberative
Fast | Slow
Parallel | Sequential
Effortless | Effortful
Intuitive | Reflective
Domain specific | Domain general
Pragmatic | Logical
Associative | Rulish
High capacity | Low capacity
Evolutionarily old | Evolutionarily young

* Adapted from Frankish and Evans, “The duality of mind: A historical perspective.”
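
The inferential step behind the IAT described above is, at bottom, a comparison of response latencies across pairing conditions. Here is a minimal toy sketch of that logic; the response times are invented, and the published test uses a standardized ‘D-score’ rather than a raw difference of means, so this illustrates the inference, not the actual scoring procedure:

```python
# Toy sketch of the IAT's core inference: slower pairings are taken to require
# more cognitive work, and hence to index an implicit attitude. The data below
# are invented for illustration only.
from statistics import mean

congruent_block = [612, 588, 640, 601, 595]    # response times in ms for the 'easy' pairing block
incongruent_block = [734, 791, 768, 722, 749]  # response times in ms for the 'hard' pairing block

# The raw latency gap stands in here for the standardized D-score used in
# published IAT research; its direction and size is what gets read as 'implicit bias.'
latency_gap = mean(incongruent_block) - mean(congruent_block)
print(f"Mean latency difference: {latency_gap:.1f} ms")
```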

What are called ‘dual process’ or ‘dual system’ theories of cognition are essentially experimentally driven complications of the crude dichotomy between unconscious/implicit and conscious/explicit problem solving that has been pondered since ancient times. As granular as this emerging empirical picture remains, it already poses a grave threat to our traditional explicitations of the implicit. Our cognitive capacities, it turns out, are far more fractionate, contingent, and opaque than we ever imagined. Decisions can be tracked prior to a subject’s ability to report them (Haynes, 2008; or here). The feeling of willing can be readily tricked, and thus stands revealed as interpretative (Wegner, 2002; Pronin, 2009). Memory turns out to be fractionate and nonveridical (See Bechtel, 2008, for review). Moral argumentation is self-promotional rather than truth-seeking (Haidt, 2012). Various attitudes appear to be introspectively inaccessible (See Carruthers, 2011, for extensive review). The feeling of certainty has a dubious connection to rational warrant (Burton, 2008). The list of such findings continually grows, revealing an ‘implicit’ that consistently undermines and contradicts our traditional and intuitive self-image—what Sellars famously termed our Manifest Image.

As Frankish and Evans (2009) write in their historical perspective on dual system theories:

“The idea that we have ‘two minds’ only one of which corresponds to personal, volitional cognition, has also wide implications beyond cognitive science. The fact that much of our thought and behaviour is controlled by automatic, subpersonal, and inaccessible cognitive processes challenges our most fundamental and cherished notions about personal and legal responsibility. This has major ramifications for social sciences such as economics, sociology, and social policy. As implied by some contemporary researchers … dual process theory also has enormous implications for educational theory and practice. As the theory becomes better understood and more widely disseminated, its implications for many aspects of society and academia will need to be thoroughly explored. In terms of its wider significance, the story of dual-process theorizing is just beginning.” 25

Given the rhetorical constraints imposed by their genre, this amounts to the strident claim that a genuine revolution in our understanding of the human is underway, one that could humble us out of existence. The simple question is, Where does that revolution end?

Consider what might be called the ‘Worst Case Scenario’ (WCS). What if it were the case that conscious experience and cognition have evolved in such a way that the higher dimensional, natural truth of the implicit utterly exceeds our capacity to effectively cognize conscious experience and cognition outside a narrow heuristic range? In other words, what if the philosophical Accessibility Conceit were almost entirely unwarranted, because metacognition, no matter how long it retrospects or how ingeniously it infers, only accesses information pertinent to a very narrow band of problem solving?

Now I have a number of arguments for why this is very likely the case, but in lieu of those arguments, it will serve to consider the eerie way our contemporary disarray regarding the implicit actually exemplifies WCS. People, of course, continue using the Everyday Implicit the way we always have. Philosophers continue positing their incompatible versions of the Philosophical Implicit the way they have for millennia. And scientists researching the Natural Implicit continue accumulating data, articulating a picture that seems to contradict more and more of our everyday and philosophical intuitions as it gains dimensionality.

Given WCS, we might expect the increasing dimensionality of our understanding would leave the functionality of the Everyday Implicit intact, that it would continue to do what it evolved to do, simply because it functions the way it does regardless of what we learn. At the same time, however, we might expect the growing fidelity of the Natural Implicit would slowly delegitimize our philosophical explications of that implicit, not only because those explications amount to little more than guesswork, but because of the fundamental incompatibility of the intentional and causal conceptual registers.

Precisely because the Everyday Implicit is so robustly functional, however, our ability to gerrymander experimental contexts around it should come as no surprise. And we should expect that those invested in the Accessibility Conceit would take the scientific operationalization of various intentional concepts as proof of 1) their objective existence, and 2) the fact that only more cognitive labour, conceptual, empirical, or both, is required.

If WCS were true, in other words, one might expect that cognitive sciences invested in the Everyday and Philosophical Implicit, like psychology, would find themselves inexorably gravitating about the Natural Implicit as its dimensionality increased. One might expect, in other words, that the Psychological Implicit would become a kind of decaying Necker Cube, an ‘unstable bi-stable concept,’ one that would alternately appear to correspond to the Everyday and Philosophical Implicit less and less, and to the Natural Implicit more and more.

Part Two considers this process in more detail.

Davidson’s Fork: An Eliminativist Radicalization of Radical Interpretation

Davidson’s primary claim to philosophical fame lies in the substitution of the hoary question of meaning qua meaning with the more tractable question of what we need to know to understand others—the question of interpretation. Transforming the question of meaning into the question of interpretation forces considerations of meaning to account for the methodologies and kinds of evidence required to understand meaning. And this evidence happens to be empirical: the kinds of sounds actual speakers make in actual environments. Radical interpretation, you might say, is useful precisely because of the way the effortlessness of everyday interpretation obscures this fact. Starting from scratch allows our actual resources to come to the fore, as well as the need to continually test our formulations.

But it immediately confronts us with a conundrum. Radical Interpretation, as Davidson points out, requires some way of bootstrapping the interdependent roles played by belief and meaning. “Since we cannot hope to interpret linguistic activity without knowing what a speaker believes,” he writes, “and cannot found a theory of what he means on a prior discovery of his beliefs and intentions, I conclude that in interpreting utterances from scratch—in radical interpretation—we must somehow deliver simultaneously a theory of belief and a theory of meaning” (“Belief and the Basis of Meaning,” Inquiries into Truth and Interpretation, 144). The problem is that the interpretation of linguistic activity seems to require that we know what a speaker believes, knowledge that we can only secure if we already know what a speaker means.

The enormously influential solution Davidson gives the problem lies in the way certain, primitive beliefs can be non-linguistically cognized on the assumption of the speaker’s rationality. If we assume that the speaker believes as he should, that he believes it is raining when it is raining, snowing when it is snowing, and so on, if we take interpretative Charity as our principle, we have a chance of gradually correlating various utterances with the various conditions that make them true, of constructing interpretations applicable in practice.

Since Charity seems to be a presupposition of any interpretation whatsoever, the question of what it consists in would seem to become a kind of transcendental battleground. This is what makes Davidson such an important fork in the philosophical road. If you think Charity involves something irreducibly normative, then you think Davidson has struck upon interpretation as the locus requiring theoretical intentional cognition to be solved, a truly transcendental domain. So Brandom, for instance, takes Dennett’s interpretation of Charity in the form of the Intentional Stance as the foundation of his grand normative metaphysics (See Making It Explicit, 55-62). What makes this such a slick move is the way it allows the Normativist to have things both ways, to remain an interpretativist (though Brandom does ultimately subscribe to original intentionality in Making It Explicit) about the reality of norms, while nevertheless treating norms as entirely real. Charity, in other words, provides a way to at once deny the natural reality of norms, while insisting they are real properties. Fictions possessing teeth.

If, on the other hand, you think Charity is not something irreducibly normative, then you think Davidson has struck upon interpretation as the locus where the glaring shortcomings of the transcendental are made plain. The problem of Radical Interpretation is the problem of interpreting behaviour. This is the whole point of going back to translation or interpretation in the first place: to start ‘from scratch,’ asking what, at minimum, is required for successful linguistic communication. By revealing behaviour as the primary source of information, Radical Interpretation shows how the problem is wholly empirical, how observation is all we have to go on. The second-order realm postulated by the Normativist simply does not exist, and as such, has nothing useful to offer the actual, empirical problem of translation.

As Stephen Turner writes:

“For Davidson, this whole machinery of a fixed set of normative practices revealed in the enthymemes of ordinary justificatory usage is simply unnecessary. We have no privileged access to meaning which we can then expressivistically articulate, because there is nothing like this—no massive structure of normative practices—to access. Instead we try to follow our fellow beings and their reasoning and acting, including their speaking: We make them intelligible. And we have a tool other than the normal machinery of predictive science that makes this possible: our own rationality.” “Davidson’s Normativity,” 364

Certainly various normative regimes/artifacts are useful (like Decision Theory), and others indispensable (like some formulation of predicate logic), but indispensability is not necessity. And ‘following,’ as Turner calls it, requires only imagination, empathy, not the possession of some kind of concept (which is somehow efficacious even though it doesn’t exist in nature). It is an empirical matter for cognitive science, not armchair theorizing, to decide.

Turner has spent decades developing what is far and away the most comprehensive critique of what he terms Normativism that I’ve ever encountered. His most recent book, Explaining the Normative, is essential reading for anyone attempting to gain perspective on Sellarsian attempts to recoup some essential domain for philosophy. For those interested in post-intentional philosophy more generally, and in ways to recharacterize various domains without ontologizing (or ‘quasi-ontologizing’) intentionality in the form of ‘practices,’ ‘language games,’ ‘games of giving and asking for reasons,’ and so on, Turner is the place to start.

I hope to post a review of Explaining the Normative and delve into Turner’s views in greater detail in the near future, but for the nonce, I want to stick with Davidson. Recently reading Turner’s account of Davidson’s attitude to intentionality (“Davidson’s Normativity”) was something of a revelation for me. For the first time, I think I can interpret Radical Interpretation in my own terms. Blind Brain Theory provides a way to read Davidson’s account as an early eliminativist approximation of a full-blown naturalistic theory of interpretation.

A quick way to grasp the kernel of Blind Brain Theory runs as follows (a more thorough pass can be found here). The cause of my belief of a blue sky outside today is, of course, the blue sky outside today. But it is not as though I experience the blue sky causing me to experience the blue sky—I simply experience the blue sky. The ‘externalist’ axis of causation—the medial, or enabling, axis—is entirely occluded. All the machinery responsible for conscious experience is neglected: causal provenance is a victim of what might be called medial neglect. Now the fact that we can metacognize experience means that we’ve evolved some kind of metacognitive capacity, machinery for solving problems that require the brain to interpret its own operations, problems such as, say, ‘holding your tongue at Thanksgiving dinner.’ Medial neglect, as one might imagine, imposes a profound constraint on metacognitive problem-solving: namely, that only those problems that can be solved absent causal information can be solved at all. Given the astronomical causal complexities underwriting experience, this makes metacognitive problem-solving heuristic in the extreme. Metacognition hangs sideways in a system it cannot possibly hope to cognize in anything remotely approaching a high-dimensional manner, the manner that our brain cognizes its environments more generally.

If one views philosophical reflection as an exaptation of our evolved metacognitive problem-solvers for the purposes of theorizing the nature of experience, one can assume it has inherited this constraint. If metacognition cannot access information regarding the actual processes responsible for experience for the solution of any problem, then neither can philosophical reflection on experience. And since nature is causal, this is tantamount to saying that, for the purposes of theoretical metacognition at least, experience has no nature to be solved. And this raises the question of just what—if anything—theoretical metacognition (philosophical reflection) is ‘solving.’

In essence, Blind Brain Theory provides an empirical account of the notorious intractability of those philosophical problems arising out of theoretical metacognition. Traditional philosophical reflection, it claims, trades in a variety of different metacognitive illusions—many of which can be diagnosed and explained away, given the conceptual resources Blind Brain Theory provides. On its terms, the traditional dichotomy between natural and intentional concepts/phenomena is entirely to be expected—in fact, we should expect sapient aliens possessing convergently evolved brains to suffer their own versions of the same dichotomy.

Intentionalism takes our blindness to first-person cognitive activity as a kind of ontological demarcation when it is just an artifact of the way the integrated, high-dimensional systems registering the external environment fractures into an assembly of low-dimensional hacks registering the ‘inner.’ There is no demarcation, no ‘subject/object’ dichotomy, just environmentally integrated systems that cannot automatically cognize themselves as such (and so resort to hacks). Neglect allows us to see this dichotomy as a metacognitive artifact, and to thus interpret the first-person in terms entirely continuous with the third-person. Blind Brain Theory, in other words, naturalizes the intentional. It ‘externalizes’ everything.

So how does this picture bear on the issue of Charity and Radical Interpretation? In numerous ways, I think, many of which Davidson would not approve, but which do have the virtue of making his central claims perhaps more naturalistically perspicuous.

From the standpoint of our brains linguistically solving other brains, we take it for granted that solving other organisms requires solving something in addition to the inorganic structure and dynamics of our environments. The behaviour taken as our evidential base in Radical Interpretation already requires a vast amount of machinery and work. So basically we’re talking about the machinery and work required over and above this baseline—the machinery and work required to make behaviour intentionally, as opposed to merely causally, intelligible.

The primary problem is that the activity of intentional interpretation, unlike the activity interpreted, almost escapes cognition altogether. To say, as so many philosophers so often do, that intentionality is ‘irreducible’ is to say that it is naturalistically occult. So any account of interpretation automatically trades in blind spots, in the concatenation of activities that we cannot cognize. In the terms of Blind Brain Theory, any account of interpretation has to come to grips with medial neglect.

From this perspective, one can see Davidson’s project as an attempt to bootstrap an account of interpretation that remains honest or sensitive to medial neglect, the fact that 1) our brain simply cannot immediately cognize itself as a brain, which is to say, in terms continuous with its cognition of nature; and 2) that our brain cannot immediately cognize this inability, and so assumes no such inability. Thanks to medial neglect, every act of interpretation is hopelessly obscure. And this places a profound constraint on our ability to theoretically explicate interpretation. Certainly we have a variety of medial posits drawn from the vocabulary of folk-psychology, but all of these are naturalistically obscure, and so function as unexplained explainers. So the challenge for Davidson, then, is to theorize interpretation in a manner that respects what can and cannot be cognized—to regiment our blind spots in a manner that generates real, practically applicable understanding.

In other words, Davidson begins by biting the medial inscrutability bullet. If medial neglect makes it impossible to theoretically explicate medial terms, then perhaps we can find a way to leverage what (causally inexplicable) understanding they do seem to provide into something more regimented, into an apparatus, you might say, that poses all the mysteries as effectively as possible (and in this sense, his project is a direct descendent of Quine’s).

This is the signature virtue of Tarski’s ‘Convention T.’ “[T]he striking thing about T-sentences,” Davidson writes, “is that whatever machinery must operate to produce them, and whatever ontological wheels must turn, in the end a T-sentence states the truth conditions of a sentence using resources no richer than, because the same as, those of the sentence itself” (“Radical Interpretation,” 132). By modifying Tarski’s formulation so that it takes truth instead of translation as basic, he can generate a theory based on an intentional, unexplained explainer—truth—that produces empirically testable results. Given that interpretation is the practical goal, the ontological status of the theory itself is moot: “All this apparatus is properly viewed as theoretical construction, beyond the reach of direct verification,” he writes. “It has done its work provided only it entails testable results in the form of T-sentences, and these make no mention of the machinery” (133).
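
For readers who want the bare shape of the apparatus, Convention T can be rendered schematically as follows (a textbook presentation, not Davidson’s own notation):

```latex
% Convention T: an adequate truth theory for an object language L must entail,
% for every sentence s of L, an instance of the schema
%   s is true (in L) if and only if p,
% where 'p' is replaced by a metalanguage sentence giving the truth conditions of s.
\[
\text{``Es regnet'' is true in German} \iff \text{it is raining.}
\]
```

Davidson’s move, as the passage above indicates, is to read such biconditionals not as an analysis of meaning but as empirically testable hypotheses about a speaker, with ‘true’ left standing as the unexplained explainer.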

The apparatus is warranted only to the extent that it enables further cognition. Indeed, given medial neglect, no further metacognitive explication of the apparatus is even possible. It may prove indispensable, but only empirically so, the way a hammer is to framing, and not as, say, the breath of God is to life, or more mysterious still, in some post facto ‘virtual yet efficacious’ sense. In fact, both of these latter characterizations betray the profundity of medial neglect, how readily we intuit the absence of various dimensions of information, say those of space and time, as a positive, as some kind of inexplicable something that, as Turner has been arguing for decades, begs far more questions than it pretends to solve.

The brain’s complexity is such, once again, that it cannot maintain anything remotely approaching the high-dimensional, all-purpose covariational regime it maintains with its immediate environment with itself. Only a variety of low-dimensional, special purpose cognitive tools are possible—an assemblage of ‘hacks.’ Thus the low-dimensional parade of inexplicables that constitute the ‘first-person.’ This is why complicating your intentional regimentations beyond what is practically needed simply makes no sense. Their status as specialized hacks means we have every reason to assume their misapplication in any given theoretical context. This isn’t to say that exaptation to other problems isn’t possible, only that efficacious problem-solving is our only guide to applicability. The normative proof is in the empirical pudding. Short of practical applications, high-dimensional solutions, the theoretician is simply stacking unexplained explainers into baroque piles. There’s a reason why second-order normative architectures rise and fall as fads. Their first-order moorings are the same, but as the Only-game-in-town Effect erodes beneath waves of alternative interpretation, they eventually break apart, often to be salvaged into some new account that feels so compelling for appearing, to some handful of souls at least, to be the only game in town at a later date.

So for Davidson, characterizing Radical Interpretation in terms of truth amounts to characterizing Radical Interpretation in terms of a genuine unexplained explainer, an activity that we can pragmatically decompose and rearticulate, and nothing more. The astonishing degree to which the behaviour itself underdetermines the interpretations made, simply speaks to the radically heuristic nature of the cognitive activities underwriting interpretation. It demonstrates, in other words, the incredibly domain specific nature of the cognitive tools used. A fortiori, it calls into question the assumption that whatever information metacognition can glean is remotely sufficient for theoretically cognizing the structure and dynamics of those tools.

From the standpoint of reflection, intentional cognition or ‘mindreading’ almost entirely amounts to simply ‘getting it’ (or as Turner says, ‘following’). Given the paucity of information over and above the sensory, our behaviour-cognizing activity strikes us as non-dimensional in the course of that cognizing—medial neglect renders our ongoing cognitive activity invisible. The odd invisibility of our own communicative performances—the way, for instance, the telling (or listening) ‘disappears’ into the told—simply indicates the axis of medial neglect, the fact that we’re talking about activities the brain cannot identify or situate in the high-dimensional idiom of environmental cognition. At best, evolution has provided metacognitive access to various ‘flavours of activity,’ if you will, vague ways of ‘getting our getting’ or ‘following our following’ the behaviour of others, and not much more—as the history of philosophy should attest!

‘Linguistic understanding,’ on this account, amounts to standing in certain actual and potential systematic, causal relations with another speaker—of being a machine attuned to natural and social environments in some specific way. The great theoretical virtue of Blind Brain Theory is the way it allows us to reframe apparently essential semantic activities like interpretation in mechanical terms. When an anthropologist learns the language of another speaker nothing magical is imprinted or imbibed. The anthropologist ‘understands’ that the speaker is systematically interrelated to his environment the same as he, and so begins the painstaking process of mapping the other’s relations onto his own via observationally derived information regarding the speaker’s utterances in various circumstances. The behaviour-enabling covariational regime of one individual comes to systematically covary with that of another individual and thus form a circuit between them and the world. The ‘meaning’ now ‘shared’ consists in nothing more than this entirely mechanical ‘triangulation.’ Each stands in the relation of component to the other, forming a singular superordinate system possessing efficacies that did not previously exist. The possible advantages of ‘teamwork’ increase exponentially—which is arguably the primary reason our species evolved language at all.
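
To see how little machinery the picture requires, consider a deliberately crude sketch of ‘triangulation’ as covariance mapping. Everything in it, the utterances, circumstances, and counting scheme, is a made-up toy for illustration, not a model of actual linguistic cognition:

```python
# Toy sketch of 'triangulation' as covariance mapping: interpretation as
# correlating another speaker's utterances with observed circumstances, then
# chaining that mapping onto one's own circumstance-utterance regime.
# All data and names here are invented for illustration.
from collections import Counter, defaultdict

# Observed (utterance, circumstance) pairs for the foreign speaker.
observations = [
    ("es regnet", "rain"), ("es regnet", "rain"), ("es schneit", "snow"),
    ("es regnet", "rain"), ("es schneit", "snow"),
]

# The interpreter's own circumstance-to-utterance mapping.
my_lexicon = {"rain": "it is raining", "snow": "it is snowing"}

# Count which circumstance each foreign utterance most reliably covaries with.
covariance = defaultdict(Counter)
for utterance, circumstance in observations:
    covariance[utterance][circumstance] += 1

# 'Interpretation' here is nothing more than chaining the two covariant regimes.
interpretation = {
    utterance: my_lexicon[counts.most_common(1)[0][0]]
    for utterance, counts in covariance.items()
}
print(interpretation)  # {'es regnet': 'it is raining', 'es schneit': 'it is snowing'}
```

The point of the toy is only that nothing over and above systematic covariation needs to be imprinted or imbibed for the circuit described above to get off the ground.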

The perplexities pile on when we begin demanding semantic answers to our semantic questions, when we ask, What is meaning? expecting an answer that accords with our experiences of meaning. Given that we possess nothing short of our experience of meaning with which to compare any theory of meaning, the demand that such a theory accord with that experience seems, on the face of things, to be eminently reasonable. But it still behooves us to interrogate the adequacy of that ‘experience as metacognized,’ especially now, given all that we have learned over the past two decades. On a converging number of accounts, human consciousness is a mechanism for selecting, preserving, and broadcasting information for more general neural consumption. When we theoretically reflect on cognitive activity, such as ‘getting’ or ‘following,’ our best research tells us we are relying on the memory traces of previous broadcasts. The situation poses a metacognitive nightmare, to say the least. Even if we could trust those memory traces to provide some kind of all-purpose schema (and we can’t), we have no access to the larger neurofunctional context of the broadcast, what produced the information and what consumed it for what—all we have are low-dimensional fragments that appear to be ethereal wholes. It’s as if we’re attempting to solve for a car using only its fuse-panel diagram—worse!

Like Quine before him, Davidson has no way of getting around intentionality, and so, also like Quine, he attempts to pass through it with as much epistemic piety as possible. But his ‘intentional instrumentalism’ will only take him so far. Short of any means of naturalizing meaning, he regularly finds himself struggling to see his way clear. The problem of first-person authority provides an illustrative case in point. The assumption that some foreign language speaker ‘holds true’ making utterances the way you ‘hold true’ making utterances can only facilitate interpretation, assist in ‘following his meaning,’ if it is the case that you can follow your own meaning. A number of issues arise out of this, not the least of which is the suggestion that interpretation seems to require the very kind of metacognitive access that I have consistently been denying!

But following one’s own meaning is every bit as mysterious as following another’s. Ownership of utterances can be catastrophically misattributed in a number of brain pathologies. When it comes to self/other speech comprehension, we know the same machinery is involved, only yoked in different ways, and we know that machinery utterly eludes metacognition. To reiterate: the cryptic peculiarities of understanding meaning (and all other intentional phenomena) are largely the result of medial neglect, the point where human cognition, overmatched by its own complexity, divides to heuristically conquer. In a profound sense, metacognition finds itself in the same straits regarding the brain as social cognition does regarding other brains.

So what does the asymmetry of ‘first-person authority,’ the fact that meanings attributed to others can be wrong while meanings attributed to oneself cannot, amount to? Nothing more than the fact that the systematic integrity of you, as a blind system, is ‘dedicated’ in a way that the systematic integrity of our interpretative relations is not. ‘Teamwork machines’ are transitory couplings requiring real work to get off the ground, and then maintain against slippages. The ‘asymmetry’ Davidson wants to explain consists in nothing more than this. No work is required to ‘follow oneself,’ whereas work is required to follow others.

For all the astronomical biological complexity involved, it really is as simple as this. The philosophical hairball presently suffocating the issue of first-person authority is an artifact of the way that theoretical metacognition, blinkered by medial neglect, retrospectively schematizes the issue in terms of meaning. The ontologization of meaning transforms the question of first-person authority into an epistemic question, a question of how one could know. This, of course, divides into the question of implicit versus explicit knowing. Since all these concepts (knowing, implicit, explicit) are naturalistically occult, interpretation can be gamed indefinitely. Despite his epistemic piety, Davidson’s attempt to solve for first-person authority using intentional idioms was doomed from the outset.

It’s worth noting an interesting connection to Heidegger in all this, a way, perhaps, to see the shadow of Blind Brain Theory operating in a quite different philosophical system. Heidegger, who harboured his own doubts regarding philosophical reflection, would see the philosophical hairball described above as yet another consequence of the ‘metaphysics of presence,’ the elision of the ‘ontological difference’ between being and beings. For him, the problem isn’t that meaning is being ontologized so much as it is being ontologized in the wrong way. His conflation of meaning with being essentially dissolves the epistemic problem the same way as my elimination of meaning, albeit in a manner that renders everything intentionally occult.

So what is meaning? A matter of intersystematic calibration. When we ask someone to ‘explain what they mean’ we are asking them to tweak our linguistic machinery so as to facilitate function. The details are, without a doubt, astronomically complex, and almost certain to surprise and trouble us. But one of the great virtues of mechanistic explanation lies in the nonmysterious way it can generalize over functions, move from proteins to organelles to cells to organs to organisms to collectives to ecologies to biospheres and so on. The ‘physical stance’ scales up with far more economy than some (like Dennett) would have you believe. And since it comprises our most reliable explanatory idiom, we should expect it to eventually yield the kind of clarity evinced above. Is it simply a coincidence that the interpretative asymmetry that Davidson and so many other philosophers have intentionally characterized directly corresponds with the kind of work required to maintain mechanical systematicity between two distinct systems? Do we just happen to ‘get the meaning wrong’ whenever covariant slippages occur, or is the former simply the latter glimpsed darkly?

Which takes us, at long last, to the issue of ‘Charity,’ the indispensability of taking others as reliably holding their utterances true to the process of interpretation. As should be clear by now, there is no such thing. We no more take Charity to the interpretation of behaviour than your wireless takes Charity to your ISP. There is no ‘attitude of holding true,’ no ‘intentional stance.’ Certainly, sometimes we ‘try’—or are at least conscious of making an effort. Otherwise understanding simply happens. The question is simply how we can fill in the blanks in a manner that converges on actual theoretical cognition, as opposed to endless regress. Behaviour is tracked, social heuristics are cued, an interpretation is neurally selected for conscious broadcasting and we say, ‘Ah! ‘Es regnet,’ means ‘It is raining’!

The Eliminativist renovation of Radical Interpretation makes plain everything that theoretical reflection has hitherto neglected. In other words, what it makes plain is the ‘pre-established harmony’ needed to follow another, the monstrous amount of evolutionary and cultural stage-setting required simply to get to interpretative scratch. The enormity of this stage-setting is directly related to the heuristic specificity of the systems we’ve developed to manage them, the very specificity that renders second-order discourse on the nature of ‘intentional phenomena’ dubious in the extreme.

As the skeptics have been arguing since antiquity.

The Ontology of Ghosts

In the courtyard a shadowy giant elm

Spreads ancient boughs, her ancient arms where dreams,

False dreams, the old tale goes, beneath each leaf

Cling and are numberless.

–Virgil, The Aeneid, Book VI


I’m always amazed, looking back, at how fucking clear things had seemed at this or that juncture of my philosophical life—how lucid. The two early conversions, stumbling into nihilism as a teenager, then climbing into Heidegger in my early twenties, seem the most ‘religious’ in retrospect. I think this is why I never failed to piss people off even back then. You have this self-promoting skin you wear when you communicate, this tactical gloss that compels you to impress. This is what non-intellectuals hear when you speak, tactics and self-promotion. This is why it’s so easy to tar intellectualism in the communal eye: insecurity and insincerity are of its essence. All value judgements are transitive in human psychology: Laugh up your sleeve at what I say, and you are laughing at me. I was an insecure, hypercritical, know-it-all. You add the interpersonal trespasses of religion—intolerance, intensity, and aggressiveness—and I think it’s safe to assume I came across as an obnoxious prick.

But if I was evangelical, it was that I could feel those transformations. Each position possessed its own, distinct metacognitive attitude toward experience, a form of that I attributed to this, whatever it might be. With my adolescent nihilism, I remember obsessively pondering the way my thoughts bubbled up out of oblivion—and being stupefied. I was some kind of inexplicable kink in the real. I was so convinced I was an illusion that I would ache for being alone, grip furniture for fear of flying.

But with Heidegger, it was like stepping into a more resonant clime, into a world rebarred with meaning, with projects and cares and rules and hopes. A world of towardness, where what you are now is a manifold of happenings, a gazing into an illuminated screen, a sitting in a world bound to you via your projects, a grasping of these very words. The intentional things, the phenomena of lived life, these were the foundation, I believed, the sine qua non of empirical inquiry. Before we can ask the question of freedom and meaning we need to ask the question of what comes first.

What could be more real than lived life?

It took a long time for me to realize just how esoteric, just how parochial, my definition of ‘lived life’ was. No matter how high you scratch your charcoal cloud, the cave wall always has the final say. It’s the doctors that keep you alive; philosophers just help you fall to sleep. Everywhere I looked across Continental philosophy, I saw all these crazy-ass interpretations, variants spanning variants, revivals and exhaustions, all trying to get the handle on the intentional ontology of a ‘lived life’ that took years of specialized training to appreciate. This is how I began asking the question of the cognitive difference. And this is how I found myself back at the beginning, my inaugural, adolescent departure from the naive.

The difference being, I am no longer stupefied.

I have a new religion, one that straightens out all the kinks, and so dispels rather than saves the soul. I am no exception. I have been chosen by nobody for nothing. I am continuous with the x-dimensional totality that we call nature—continuous in every respect. I watch images from Hubble, the most distant galactic swirls, and I tell myself, I am this, and I feel grand and empty. I am the environment that chokes, the climate that reels. I am the body that the doctor attends…

And you are too.

Thus the most trivial prophecy, the prediction that you will waver, crumble, that the fluorescent light will wobble to the sound of loved ones weeping… breathing. That someone, maybe, will clutch your hand.

Such hubris, when you think about it, to assume that lived life lay at your intellectual fingertips—the thing most easily grasped! For someone who has spent their life reading philosophy this stands tall among the greater insults: the knowledge that we have been duped all along, that all those profundities, that resonant world I found such joy and rancour pondering, were little more than the artifact of machines taking their shadows for reflections, the cave wall for a looking glass.

I am the residue of survival—living life. I am an astronomically complicated system, a multifarious component of superordinate systems that cannot cognize itself as such for being such. I am a serial gloss, a transmission from nowhere into nowhere, a pattern plucked from subpersonal pandemonium and broadcast to the neural horde. I am a message that I cannot conceive. As. Are. You.

I can show you pictures of dead people to prove it. Lives lived out.

The first-person is a selective precis of this totality, one that poses as the totality. And this is the trick, the way to unravel the kink and see how it is that Heidegger could confuse his semantic vision with seeing. The oblivion behind my thoughts is the oblivion of neglect. Because oblivion has no time, I have no time, and so watch amazed as my shining hands turn to leather. I breathe deep and think, Now. Because oblivion constrains nothing, I follow rules of my own will, pursue goals of my own desire. I stretch forth my hand and remake what lies before me. Because oblivion distinguishes nothing, I am one. I raise my voice and declare, Me. Because oblivion reveals nothing, I stand opposite the world, always only aimed, never connected. I squint and I squint and I ask, How do I know?

I am bottomless because my foundation was never mine to see. I am a perspective, an agent, a person, just another dude-with-a-bad-attitude—I am all these things because of the way I am not any of these things. I am not what I am because of what I am—again, the same as you.

A ghost can be defined as a fragment cognized as a whole. In some cultures ghosts have no backs, no faces, no feet. In almost all cultures they have no substance, no consistency, temporal or otherwise. The dimensions of lived life have been stripped from them; they are shades, animate shadows. As Virgil says of Aeneas attempting to embrace his father, Anchises, in the Underworld:

 Then thrice around his neck his arms he threw;

And thrice the flitting shadow slipp’d away,

Like winds, or empty dreams that fly the day.

Ghosts are the incorporeal remainder, the something shorn of substance and consistency. This is the lived life of Heidegger, an empty dream that flew the day. Insofar as Dasein lacks meat, Dasein dwells with the dead, another shade in the underworld, another passing fancy. We are not ghosts. If lived life lies in the meat, then the truth of lived life lies in the meat. The truth of what we are runs orthogonal to the being that we all swear that we must be. Consciousness is an anosognosiac broker, and we are the serial sum of deals struck between parties utterly unknown. Who are the orthogonal parties? What are the deals? These are the questions that aim us at our most essential selves, at what we are in fact. These are the answers being pursued by industry.

And yet we insist on the reality of ghosts, so profound is the glamour spun by neglect. There are no orthogonal parties, we cry, and therefore no orthogonal deals. There is no orthogonal regime. Oblivion hides only oblivion. What bubbles up from oblivion, begins with me and ends with me. Thus the enduring attempt to make sense of things sideways, to rummage through the ruin of heaven and erect parallel regimes, ones too impersonal to reek of superstition. We use ghosts of reference to bind our inklings to the world, ghosts of inference to bind our inklings to one another, ghosts of quality to give ethereal substance to experience. Ghosts and more ghosts, all to save the mad, inescapable intuition that our intuitions must be real somehow. We raise them as architecture, and demur whenever anyone poses the mundane question of building material.

‘Thought’… No word short of ‘God’ has shut down more thinking.

Content is a wraith. Freedom is a vapour. Experience is a dream. The analogy is no coincidence.

The ontology of meaning is the ontology of ghosts.

 

 

 

Incomplete Cognition: An Eliminativist Reading of Terrence Deacon’s Incomplete Nature

Incomplete Nature: How Mind Emerged from Matter

Goal seeking, willing, rule-following, knowing, desiring—these are just some of the things we do that we cannot make sense of in causal terms. We cite intentional phenomena all the time, attributing to them the kind of causal efficacy we attribute to the more mundane elements of nature. The problem, as Terrence Deacon frames it, is that whenever we attempt to explain these explainers, we find nothing, only absence and perplexity.

“The inability to integrate these many species of absence-based causality into our scientific methodologies has not just seriously handicapped us, it has effectively left a vast fraction of the world orphaned from theories that are presumed to apply to everything. The very care that has been necessary to systematically exclude these sorts of explanations from undermining our causal analyses of physical, chemical, and biological phenomena has also stymied our efforts to penetrate beyond the descriptive surface of the phenomena of life and mind. Indeed, what might be described as the two most challenging scientific mysteries of the age—both are held hostage by this presumed incompatibility.” Incomplete Nature, 12

The question, of course, is whether this incompatibility is the product of our cognitive constitution or the product of some as yet undiscovered twist in nature. Deacon argues the latter. Incomplete Nature is a magisterial attempt to complete nature, to literally rewrite physics in a way that seems to make room for goal seeking, willing, rule-following, knowing, desiring, and so on—in other words, to provide a naturalistic way to make sense of absences that cause. He wants to show how all these things are real.

My own project argues the former, that the notion of ‘absences that cause’ is actually an artifact of neglect. ‘We’ are an astronomically complicated subsystem embedded in the astronomically complicated supersystem that we call ‘nature,’ in such a way that we cannot intuitively cognize ourselves as natural.

The Blind Brain Theory claims to provide the world’s first genuine naturalization of intentionality—a parsimonious, comprehensive way to explain centuries of confusion away. What Intentionalists like Deacon think they are describing are actually twists on a family of metacognitive illusions. Crudely put, since no cognitive capacity could pluck ‘accuracy’ of any kind from the supercomplicated muck of the brain, our metacognitive system confabulates. It’s not that some (yet to be empirically determined) systematicity isn’t there: it’s that the functions discharged via our conscious access to that systematicity are compressed, formatted, and truncated. Metacognition neglects these confounds, and we begin making theoretical inferences assuming the sufficiency of compressed, formatted, and truncated information. Among other things, BBT actually predicts a discursive field clustered about families of metacognitive intuitions, but otherwise chronically incapable of resolving among their claims. When an Intentionalist gives you an account of the ‘game of giving and asking for reasons,’ say, you need only ask them why anyone should subscribe to an ontologization (whether virtual, quasi-transcendental, transcendental, or otherwise) on the basis of almost certainly unreliable metacognitive hunches.

The key conceptual distinction in BBT is that between what I’ve been calling ‘lateral sensitivity’ and ‘medial neglect.’ Lateral sensitivity refers to the brain’s capacity to be ‘imprinted’ by other systems, to be ‘pushed’ in ways that allow it to push back. Since behavioural interventions, or ‘pushing-back,’ require some kind of systematic relation to the system or systems to be pushed, lateral sensitivity requires being pushed by the right things in the right way. Thus the Inverse Problem and the Bayesian nature of the human brain. The Inverse Problem pertains to the difficulty of inferring the structure/dynamics of some distal system (an avalanche or a wolf, say) via the structure/dynamics of some proximal system (ambient sound or light, say) that reliably co-varies with that distal system. The difficulty is typically described in terms of ambiguity: since any number of distal systems could cause the structure/dynamics of the proximal system, the brain needs some way of allowing the actual distal system to push through the proximal system, if it is to have any hope of pushing back. Unless it becomes a reliable component of its environment, it cannot reliably make components of its environments. This is an important image to keep in mind: that of the larger brain-environment system, the way the brain is adapted to be pushed, or transformed into a component of larger environmental mechanisms, so as to push back, to ‘componentialize’ environmental mechanisms. Quite simply, we have evolved to be tyrannized by our environment in a manner that enables us to tyrannize our environment.
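To make the ambiguity at the heart of the Inverse Problem concrete, here is a minimal, purely illustrative sketch (the causes, signals, and numbers are invented for the example; nothing here is drawn from BBT or from any empirical model). Two distal causes can generate nearly identical proximal signals, and only prior environmental regularities, the accumulated ‘push’ of the environment, let the system resolve which cause it should push back against:

# A toy illustration of the Inverse Problem (all names and numbers are assumptions).
# Several distal causes can produce the same proximal signal, so the signal alone
# underdetermines what is out there. Bayes' rule combines the signal with prior
# environmental regularities to break the tie.

likelihood = {   # P(this rustling sound | distal cause), assumed values
    "wolf": 0.60,
    "wind": 0.55,
    "avalanche": 0.05,
}

prior = {        # P(distal cause) in this environment, assumed values
    "wolf": 0.10,
    "wind": 0.85,
    "avalanche": 0.05,
}

def posterior(likelihood, prior):
    """P(cause | signal) is proportional to P(signal | cause) * P(cause)."""
    unnormalized = {c: likelihood[c] * prior[c] for c in prior}
    total = sum(unnormalized.values())
    return {c: v / total for c, v in unnormalized.items()}

for cause, p in sorted(posterior(likelihood, prior).items(), key=lambda kv: -kv[1]):
    print(f"{cause:>9}: {p:.2f}")

# The likelihoods alone cannot tell wolf from wind; the prior, the system's
# accumulated sensitivity to what its environment typically does, is what
# resolves the proximal signal into a distal cause it can push back against.

The point is not that the brain literally tabulates probabilities this way, only that resolving proximal ambiguity requires already being reliably ‘pushed’ by the right distal regularities, which is the sense in which lateral sensitivity comes first.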

Lateral sensitivity refers to this ‘tyranny enabling tyranny,’ the brain’s ability to systematically covary with its environment in behaviourally advantageous ways. A system that solves the Inverse Problem possesses a high degree of reliable covariational complexity. As it turns out, the mechanical complexity required to do this is nothing short of mind-boggling. And as we shall see, this fact possesses some rather enormous consequences. Up to this point, I’ve really only provided an alternate description of the sensorimotor loop; the theoretical dividends begin piling up once we consider lateral sensitivity in concert with medial neglect.

The machinery of lateral sensitivity is so complicated that it handily transcends its own ‘sensitivity threshold.’ This means the brain possesses a profound insensitivity to itself. This might sound daffy, given that the brain simply is a supercomplicated network of mutual sensitivities, but this is actually where the nub of cognition as a distinct biological process is laid bare. Unlike the dedicated sensitivity that underwrites mechanism generally, the sensitivity at issue here involves what might be called the systematic covariation for behaviour. Any process that systematically covaries for behaviour is a properly cognitive process. So the above could be amended to, ‘the brain possesses a profound cognitive insensitivity to itself.’ Medial neglect is this profound cognitive insensitivity.

The advantage of cognition is behaviour, the push-back. The efficacy of this behavioural push-back depends on the sensory push, which is to say, lateral sensitivity. Innumerable behavioural problems, it turns out, require that we be pushed by our pushing back: that our future behaviour (push-back) be informed (pushed) by our ongoing behaviour (pushing-back). Behavioural efficacy is a function of behavioural versatility is a function of lateral sensitivity, which is to say, the capacity to systematically covary with the environment. Medial neglect, therefore, constitutes a critical limit on behavioural efficacy: those ‘problem ecologies’ that require sensitivity to the neurobiological apparatus of cognition in order to be solved effectively lie outside the capacity of the system to tackle. We are, quite literally, the ‘elephant in the room,’ a supercomplicated mechanism sensitive to most everything relevant to problem-solving in its environment except itself.

Mechanical allo-sensitivity entails mechanical auto-insensitivity, or auto-neglect. A crucial consequence of this is that efficacious systematic covariation requires unidirectional interaction, or that sensing be ‘passive.’ The degree to which the mechanical activity of tracking actually impacts the system to be tracked is the degree to which that system cannot be reliably tracked. Anticipation via systematic covariation is impossible if the mechanics of the anticipatory system impinge on the mechanics of the system to be anticipated. The insensitivity of the anticipatory system to its own activity, or medial neglect, perforce means insensitivity to systems directly mechanically entangled in that activity. Only ‘passive entanglement’ will do. This explains why so-called ‘observer effects’ confound our ability to predict the behaviour of other systems.

So the stage is set. The brain quite simply cannot cognize itself (or other brains) in the same high-dimensional way it cognizes its environments. (It would be hard to imagine any evolved metacognitive capacity that could achieve such a thing, in fact). It is simply too complex and too entangled. As a result, low-dimensional, special purpose heuristics—fast and frugal kluges—are its only recourse.

The big question I keep asking is, How could it be any other way? Given the problems of complexity and complicity, given the radical nature of the cognitive bottleneck—just how little information is available for conscious, serial processing—how could any evolved metacognitive capacity whatsoever come close to apprehending the functional truth of anything ‘inner’? If you are an Intentionalist, say, you need to explain how the phenomena you’re convinced you intuit are free of perspectival illusions, or conversely, how your metacognitive faculties have overcome the problems posed by complexity and complicity.

On BBT, the brain possesses at least two profoundly different covariational regimes, one integrated, problem-general, and high-dimensional, mediating our engagement in the natural world, the other fractious, problem-specific and low-dimensional, mediating our engagements with ourselves and others (who are also complex and complicit), and thereby our engagement in the natural world. The twist lies in medial neglect, the fact that the latter fractious, problem-specific, and low-dimensional covariational regime is utterly insensitive to its fractious, problem-specific, and low-dimensional nature. Human metacognition is almost entirely blind to the structure of human cognition. This is why we require cognitive science: reflection on our cognitive capacities tells us little or nothing about those capacities, reflection included. Since we have no way of intuiting the insufficiency of these intuitions, we assume they’re sufficient.

We are now in a position to clearly delineate Deacon’s ‘fraction,’ what makes it vast, and why it has been perennially orphaned. Historically, natural science has been concerned with the ‘lateral problem-ecologies,’ with explicating the structure and dynamics of relatively simple systems possessing functional independence. Any problem ecology requiring the mechanistic solution of brains lay outside its purview. Only recently has it developed the capacity to tackle ‘medial problem-ecologies,’ the structure and dynamics of astronomically complex systems possessing no real functional independence. For the first time humanity finds itself confronted with integrated, high-dimensional explications of what it is. The ruckus, of course, is all about how to square these explications with our medial traditions and intuitions. All the so-called ‘hard problems’ turn on our apparent inability to naturalistically find, let alone explain, the phenomena corresponding to our intuitive, metacognitive understanding of the medial.

Why do our integrated, high-dimensional explications of the medial congenitally ‘leave out’ the phenomena belonging to the medial-as-metacognized? Because metacognitive phenomena like goal seeking, willing, rule-following, knowing, desiring only ‘exist,’ insofar as they exist at all, in specialized problem-solving contexts. ‘Goal seeking’ is something we all do all the time. A friend has an untoward reaction to a comment of ours, so we ask ourselves, in good conscience, ‘What was I after?’ and the process of trying to determine our goal given whatever information we happen to have begins. Despite complexity and complicity, this problem is entirely soluble because we have evolved the heuristic machinery required: we can come to realize that our overture was actually meant to belittle. Likewise, the philosopher asks, ‘What is goal-seeking?’ and the process of trying to determine the nature of goal-seeking given whatever information he happens to have begins. But the problem proves insoluble, not surprisingly, given that the philosopher almost certainly lacks the requisite heuristic machinery. The capacity to solve for goal-seeking qua goal-seeking is just not something our ancestors evolved.

Deacon’s entire problematic turns on the equivocation of the first-order and second-order uses of intentional terms, on the presumption that the ‘goal-seeking’ we metacognize simply has to be the ‘goal-seeking’ referenced in first-order contexts—on the presumption, in other words, of metacognitive adequacy, which is to say something we now know to be false as a matter of empirical fact. For all its grand sweep, for all its lucid recapitulation and provocative conjecture, Incomplete Nature is itself shockingly incomplete. Nowhere does he consider the possibility that the only ‘goal-seeking phenomenon’ missing, the only absence to be explained, is this latter, philosophical goal-seeking.

At no point in the work does he reference, let alone account for, the role metacognition or introspection plays in our attempt to grapple with the incompatibility of natural and intentional phenomena. He simply declares “the obvious inversion of causal logic that distinguishes them” (139), without genuinely considering where that ‘inversion’ occurs. Because this just is the nub of the issue between the emergentist and the eliminativist: whether his ‘obvious inversion’ belongs to the systems observed or to the systems observing. As Deacon writes:

“There is no use denying there is a fundamental causal difference between these domains that must be bridged in any comprehensive theory of causality. The challenge of explaining why such a seeming reversal takes place, and exactly how it does so, must ultimately be faced. At some point in this hierarchy, the causal dynamics of teleological processes do indeed emerge from simpler blind mechanistic dynamics, but we are merely restating this bald fact unless we can identify exactly how this causal about-face is accomplished. We need to stop trying to eliminate homunculi, and to face up to the challenge of constructing teleological properties—information, function, aboutness, end-directedness, self, even conscious experience—from unambiguously non-teleological starting points.” 140

But why do we need to stop ‘trying to eliminate’ homunculi? We know that philosophical reflection on the nature of cognition is woefully unreliable. We know that intentional concepts and phenomena are the stock-in-trade of philosophical reflection. We know that scientific inquiry generally delegitimizes our prescientific discourses. So why shouldn’t we assume that the matter of intentionality amounts to more of the same?

Deacon never says. He acknowledges “there cannot be a literal ends-causing-the-means process involved” (109) when it comes to intentional phenomena. As he writes:

“Of course, time is neither stopped nor running backwards in any of these processes. Thermodynamic processes are proceeding uninterrupted. Future possible states are not directly causing present events to occur.” 109-110

He acknowledges, in other words, that this ‘inversion of causality’ is apparent only. He acknowledges, in other words, that metacognition is getting things wrong, just not entirely. So what recommends his project of ontologically meeting this appearance halfway over the project of doing away with it altogether? The project of rewriting nature, after all, is far more extravagant than the project of theorizing metacognitive shortcomings.

Deacon’s failure to account for observation-dependent interpretations of intentionality is more than suspiciously convenient; it actually renders the whole of Incomplete Nature an exercise in begging the question. He spends a tremendous amount of time and no little ingenuity in describing the way ‘teleodynamic systems,’ as the result of increasingly recursive complexity, emerge from ‘morphodynamic systems,’ which in turn emerge from standard thermodynamic systems. Where thermodynamic systems exhibit a straightforward increase in entropy, morphodynamic systems, such as crystal formation, exhibit the tendency to become more ordered. Building on morphodynamics, teleodynamic systems then exhibit the kinds of properties we take to be intentional. A point of pride for Deacon is the way his elaborations turn, as he mentions in the extended passage quoted above, on ‘unambiguously non-teleological starting points.’

He sums up this patient process of layering causal complexities with the postulation of what he calls an autogen, “a form of self-generating, self-repairing, self-replicating system that is constituted by reciprocal morphodynamic processes” (547-8), arguably his most ingenious innovation. He then moves to conclude:

“So even these simple molecular systems have crossed a threshold in which we can say that a very basic form of value has emerged, because we can describe each of the component autogenic processes as there for the sake of autogen integrity, or for the maintenance of that particular form of autogenicity. Likewise, we can describe different features of the surrounding molecular environment as ‘beneficial’ or ‘harmful’ in the same sense that we would apply these assessments to microorganisms. More important, these are not merely glosses provided by a human observer, but intrinsic and functionally relevant features of the consequence-organized nature of the autogen itself.” 322

And the reader is once again left with the question of why. We know that the brain possesses suites of heuristic problem solvers geared to economize by exploiting various features of the environment. The obvious question becomes: How is it that any of the processes he describes do anything more than schematize the kinds of features that trigger the brain to swap out its causal cognitive systems for its intentional cognitive systems?

Time and again, one finds Deacon explicitly acknowledging the importance of the observer, and time and again one finds him dismissing that importance without a lick of argumentation—the argumentation his entire account hangs on. One can even grant him his morphodynamic and teleodynamic ‘phase transitions’ and still plausibly insist that all he’s managed to provide is a detailed description of the kinds of complex mechanical processes prone to trigger our intentional heuristics. After all, if it is the case that the future does not cause the past, then ‘end directedness,’ the ‘obvious inversion of causality,’ actually isn’t an inversion at all. The fact is Deacon’s own account of constraints and the role they play in morphodynamics and teleodynamics is entirely amenable to mechanical understanding. He continually relies on disposition talk. Even his metaphors, like the ‘negentropic ratchet’ (317), tend to be mechanical. The autogen is quite clearly a machine, one that automatically expresses the constraints that make it possible. The fact that these component constraints result in a system that behaves in ways far different than mundane thermodynamic systems speaks to nothing more extraordinary than mechanical emergence, the fact that whole mechanisms do things that their components could not (See Craver, 2007, pp. 211-17 for a consideration of the distinction between mechanical and spooky emergence). Likewise, for all the ink he spills regarding the holistic nature of teleodynamic systems, he does an excellent job explaining them in terms of their contributing components!

In the end, all Deacon really has is an analogy between the ‘intentional absence,’ our empirical inability to find intentional phenomena, and the kind of absence he attributes to constraints. Since systematicity of any kind requires constraints, defining constraints, as Deacon does, in terms of what cannot happen—in terms of what is absent—provides him the rhetorical license he needs to speak of ‘absential causes’ at pretty much any juncture. Since he has already defined intentional phenomena as ‘absential causes,’ it becomes a very easy thing indeed to lead the reader over the ‘epistemic cut’ and claim that he has discovered the basis of the intentional as it exists in nature, as opposed to an interpretation of those systems inclined to trigger intentional cognition in the human brain. Constraints can be understood in absential terms. Intentional phenomena can only be understood in absential terms. Since the reader, thanks to medial neglect, has no inkling whatsoever of the fractionate and specialized nature of intentional cognition, all Deacon needs to do is comb their existing intuitions in his direction. Constraints are objective, therefore intentionality is objective.

Not surprisingly, Deacon falls far short of ‘naturalizing intentionality.’ Ultimately, he provides something very similar to what Evan Thompson delivers in his equally impressive (and unconvincing) Mind in Life: a more complicated, attenuated picture of nature that seems marginally less antithetical to intentionality. Where Thompson’s “aim is not to close the explanatory gap in a reductive sense, but rather to enlarge and enrich the philosophical and scientific resources we have for addressing the gap” (x), Deacon’s is to “demonstrate how a form of causality dependent on specifically absent features and unrealized potentials can be compatible with our best science” (16), the idea being that such an absential understanding will pave the way for some kind of thoroughgoing naturalization of intentionality—as metacognized—in the future.

But such a naturalization can only happen if our theoretical metacognitive intuitions regarding intentionality get intentionality right in general, as opposed to right enough for this or that. And our metacognitive intuitions regarding intentionality can only get intentionality right in general if our brain has somehow evolved the capacity to overcome medial neglect. And the possibility of this, given the problems of complexity and complicity, seems very hard to fathom.

The fact is BBT provides a very plausible and parsimonious observer-dependent explanation for why metacognition attributes so many peculiar properties to the medial processes. The human brain, as the frame of cognition, simply cannot cognize itself the way it does other systems. It is, as a matter of empirical necessity, not simply blind to its own mechanics, but blind to this blindness. It suffers medial neglect. Unable to access and cognize its origins, and unable to cognize this inability, it assumes that it accesses all there is to access—it confuses itself for something bottomless, an impossible exception to physics.

So when Deacon writes:

“These phenomena not only appear to arise without antecedents, they appear to be defined with respect to something nonexistent. It seems that we must explain the uncaused appearance of phenomena whose causal powers derive from something nonexistent! It should be no surprise that this most familiar and commonplace feature of our existence poses a conundrum for science.” 39

we need to take the truly holistic view that Deacon himself consistently fails to take. We need to see this very real problem in terms of one set of natural systems—namely, us—engaging the set of all natural systems, as a kind of linkage between being pushed and pushing back.

On BBT, Deacon’s ‘obvious inversion of causality’ is merely an illusory artifact of constraints pertaining to the human brain’s ability to cognize itself the way it cognizes its environments. Intentional phenomena appear causally inverted simply because no information pertaining to their causal provenance is available to deliberative metacognition. Rules constrain us in some mysterious, orthogonal way. Goals somehow constrain us from the future. Will somehow constrains itself! Desires, like knowledge, are somehow constrained by their objects, even when they are nowhere to be seen. These apparently causally inverted phenomena vanish whenever we search for their origins because they quite simply do not exist in the high-dimensional way things in our environments exist. They baffle scientific reason because the actual neuromechanical heuristics employed are adapted to solve problems in the absence of detailed causal information, and because conscious metacognition, blind to the rank insufficiency of the information available for deliberative problem-solving, assumes that it possesses all the information it needs. Philosophical reflection is a cultural achievement, after all, an exaptation of existing, more specialized cognitive resources; it seems quite implausible to assume the brain would possess the capacity to vet the relative sufficiency of information utilized in ways possessing no evolutionary provenance.

We are causally embedded in our environments in such a way that we cannot intuit ourselves as so embedded, and so intuit ourselves otherwise, as goal seeking, willing, rule-following, knowing, desiring, and so on—in ways that systematically neglect the actual, causal relations involved. Is it really just a coincidence that all these phenomena just happen to belong to the ‘medial,’ which is to say, the machinery responsible for cognition? Is it really just a coincidence that all these phenomena exhibit a profound incompatibility with causal explanation? Is it really just a coincidence that all our second-order interpretations of these terms are chronically underdetermined (a common indicator of insufficient information), even though they function quite well when used in everyday, first-order, interpersonal contexts?

Not at all. As I’ve attempted to show in a variety of ways over the past couple of years, a great number of traditional conundrums can be resolved via BBT. All the old problems fall away once we realize that the medial—or ‘first person’—is simply what the third person looks like absent the capacity to laterally solve the third person. The time has come to leave them behind and begin the hard work of discovering what new conundrums await.

The Closing and Opening of Covers

My agent has the book, and I’m having several copies of the manuscript printed up and bound to distribute to some keen-eyed friends today. That’s as much as I can say detail-wise, at the moment. As soon as my publishers and my agent and I have the details hashed out I will post them here post-haste.

I also finally managed to trap True Detective on my PVR. People have sent me so many links (such as this and this) to mainstream articles on the character of Cohle and his creator Nic Pizzolatto’s inspirations that I thought it worth a looksee. I haven’t watched an episode yet, but the notion of Matthew McConaughey (a devout believer) playing a nihilistic prophet appeals to my sense of cosmic perversity. I suppose he would make a good Disciple Manning. Who knows, maybe a thunderbolt will strike someone at HBO–they’ll take a sip of latte and wonder, “Egad! What if we take True Detective and Game of Thrones and mash them together!” Either way, given the way society continues to inexorably creep toward Golgotterath, the popularization of this fact has got to be a good thing… if it’s true that informed gamblers enjoy better odds than sleepwalkers, that is.
