Three Pound Brain

No bells, just whistling in the dark…

Discontinuity Thesis: A ‘Birds of a Feather’ Argument Against Intentionalism

by rsbakker

A hallmark of intentional phenomena is what might be called ‘discontinuity,’ the idea that the intentional somehow stands outside the contingent natural order, that it possesses some as-yet-occult ‘orthogonal efficacy.’ Here’s how some prominent intentionalists characterize it:

“Scholars who study intentional phenomena generally tend to consider them as processes and relationships that can be characterized irrespective of any physical objects, material changes, or motive forces. But this is exactly what poses a fundamental problem for the natural sciences. Scientific explanation requires that in order to have causal consequences, something must be susceptible of being involved in material and energetic interactions with other physical objects and forces.” Terrence Deacon, Incomplete Nature, 28

“Exactly how are consciousness and subjective experience related to brain and body? It is one thing to be able to establish correlations between consciousness and brain activity; it is another thing to have an account that explains exactly how certain biological processes generate and realize consciousness and subjectivity. At the present time, we not only lack such an account, but are also unsure about the form it would need to have in order to bridge the conceptual and epistemological gap between life and mind as objects of scientific investigation and life and mind as we subjectively experience them.” Evan Thompson, Mind in Life, x

“Norms (in the sense of normative statuses) are not objects in the causal order. Natural science, eschewing categories of social practice, will never run across commitments in its cataloguing of the furniture of the world; they are not by themselves causally efficacious—no more than strikes or outs are in baseball. Nonetheless, according to the account presented here, there are norms, and their existence is neither supernatural nor mysterious. Normative statuses are domesticated by being understood in terms of normative attitudes, which are in the causal order.” Robert Brandom, Making It Explicit, 626

What I would like to do is run through a number of different discontinuities you find in various intentional phenomena as a means of raising the question: What are the chances? What’s worth noting is how continuous these alleged phenomena are with each other, not simply in terms of their low-dimensionality and natural discontinuity, but in terms of mutual conceptual dependence as well. I distinguish between ‘ontological’ and ‘functional’ exemptions from the natural, even though I regard them as differences of degree, because the distinction maps the stark differences in commitment you find among the various parties of believers. And ‘low-dimensionality’ simply refers to the scarcity of the information intentional phenomena give us to work with—whatever finds its way into the ‘philosopher’s lab,’ basically.

So with regard to all of the following, my question is simply, are these not birds of a feather? If not, then what distinguishes them? Why are low-dimensionality and supernaturalism fatal only for some and not others?


Soul – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts of the Soul, you will find it consistently related to Ghost, Choice, Subjectivity, Value, Content, God, Agency, Mind, Purpose, Responsibility, and Good/Evil.

Game – Anthropic. Low-dimensional. Functionally exempt from natural continuity (insofar as ‘rule governed’). Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Game is consistently related to Correctness, Rules/Norms, Value, Agency, Purpose, Practice, and Reason.

Aboutness – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Aboutness is consistently related to Correctness, Rules/Norms, Inference, Content, Reason, Subjectivity, Mind, Truth, and Representation.

Correctness – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Correctness is consistently related to Game, Aboutness, Rules/Norms, Inference, Content, Reason, Agency, Mind, Purpose, Truth, Representation, Responsibility, and Good/Evil.

Ghost – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts of Ghosts, you will find it consistently related to God, Soul, Mind, Agency, Choice, Subjectivity, Value, and Good/Evil.

Rules/Norms – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Rules and Norms are consistently related to Game, Aboutness, Correctness, Inference, Content, Reason, Agency, Mind, Truth, and Representation.

Choice – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Embodies inexplicable efficacy. Choice is typically discussed in relation to God, Agency, Responsibility, and Good/Evil.

Inference – Anthropic. Low-dimensional. Functionally exempt (‘irreducible,’ ‘autonomous’) from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Inference is consistently related to Game, Aboutness, Correctness, Rules/Norms, Value, Content, Reason, Mind, A priori, Truth, and Representation.

Subjectivity – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Subjectivity is typically discussed in relation to Soul, Rules/Norms, Choice, Phenomenality, Value, Agency, Reason, Mind, Purpose, Representation, and Responsibility.

Phenomenality – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. Phenomenality is typically discussed in relation to Subjectivity, Content, Mind, and Representation.

Value – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Value discussed in concert with Correctness, Rules/Norms, Subjectivity, Agency, Practice, Reason, Mind, Purpose, and Responsibility.

Content – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Content discussed in relation with Aboutness, Correctness, Rules/Norms, Inference, Phenomenality, Reason, Mind, A priori, Truth, and Representation.

Agency – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Agency is discussed in concert with Games, Correctness, Rules/Norms, Choice, Inference, Subjectivity, Value, Practice, Reason, Mind, Purpose, Representation, and Responsibility.

God – Anthropic. Low-dimensional. Ontologically exempt from natural continuity (as the condition of everything natural!). Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds God discussed in relation to Soul, Correctness, Ghosts, Rules/Norms, Choice, Value, Agency, Purpose, Truth, Responsibility, and Good/Evil.

Practices – Anthropic. Low-dimensional. Functionally exempt from natural continuity insofar as ‘rule governed.’ Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Practices are discussed in relation to Games, Correctness, Rules/Norms, Value, Agency, Reason, Purpose, Truth, and Responsibility.

Reason – Anthropic. Low-dimensional. Functionally exempt from natural continuity insofar as ‘rule governed.’ Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Reason discussed in concert with Games, Correctness, Rules/Norms, Inference, Value, Content, Agency, Practices, Mind, Purpose, A priori, Truth, Representation, and Responsibility.

Mind – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Mind considered in relation to Souls, Subjectivity, Value, Content, Agency, Reason, Purpose, and Representation.

Purpose – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Purpose discussed along with Game, Correctness, Value, God, Reason, and Representation.

A priori – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One often finds the A priori discussed in relation to Correctness, Rules/Norms, Inference, Subjectivity, Content, Reason, Truth, and Representation.

Truth – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Truth discussed in concert with Games, Correctness, Aboutness, Rules/Norms, Inference, Subjectivity, Value, Content, Practices, Mind, A priori, and Representation.

Representation – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Representation discussed in relation with Aboutness, Correctness, Rules/Norms, Inference, Subjectivity, Phenomenality, Content, Reason, Mind, A priori, and Truth.

Responsibility – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Responsibility is consistently related to Game, Correctness, Aboutness, Rules/Norms, Inference, Subjectivity, Reason, Agency, Mind, Purpose, Truth, Representation, and Good/Evil.

Good/Evil – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Good/Evil consistently related to Souls, Correctness, Subjectivity, Value, Reason, Agency, God, Purpose, Truth, and Responsibility.


The big question here, from a naturalistic standpoint, is whether all of these characteristics are homologous or merely analogous. Are the similarities ontogenetic, the expression of some shared ‘deep structure,’ or merely coincidental? This, I think, is one of the most significant questions that never gets asked in cognitive science. Why? Because everybody has their own way of divvying up the intentional pie (including interpretivists like Dennett). Some of these items are good, and some of them are bad, depending on whom you talk to. If these phenomena were merely analogous, then this division need not be problematic—we’re just talking fish and whales. But if these phenomena are homologous—if we’re talking whales and whales—then the kinds of discursive barricades various theorists erect to shelter their ‘good’ intentional phenomena from ‘bad’ intentional phenomena need to be powerfully motivated.

Pointing out the apparent functionality of certain phenomena versus others simply will not do. That these phenomena discharge some kind of function seems pretty clear. It seems to be the case that God anchors the solution to any number of social problems—that even Souls discharge some function in certain, specialized problem-ecologies. The same can be said of Truth, Rules/Norms, Agency—every item on this list, in fact.

And this is precisely what one might expect given a purely biomechanical, heuristic interpretation of these terms as well (with the added advantage of being able to explain why our phenomenological inheritance finds itself mired in the kinds of problems it does). None of these need be anything resembling what our phenomenological tradition claims they are in order to explain the kinds of behaviour that accompany them. God doesn’t need to be ‘real’ to explain church-going, any more than Rules/Norms do to explain rule-following. Meanwhile, the growing mountain of cognitive scientific discovery looms large: cognitive functions generally run ulterior to what we can metacognize for report. Time and again, in context after context, empirical research reveals that human cognition is simply not what we think it is. As ‘Dehaene’s Law’ states, “We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (Consciousness and the Brain, 79). Perhaps this is simply what intentionality amounts to: a congenital ‘overestimation of awareness,’ a kind of WYSIATI or ‘what-you-see-is-all-there-is’ illusion. Perhaps anthropic, low-dimensional, functionally exempt from natural continuity, inscrutable in terms of natural continuity, source of perennial controversy, and possesses inexplicable efficacy are all expressions of various kinds of neglect. Perhaps it isn’t just a coincidence that we are entirely blind to our neuromechanical embodiment and that we suffer this compelling sense that we are more than merely neuromechanical.

How could we cognize the astronomical causal complexities of cognition? What evolutionary purpose would it serve?

What impact does our systematic neglect of those capacities have on philosophical reflection?

Does anyone really think the answer is going to be ‘minimal to nonexistent’?

The Eliminativistic Implicit II: Brandom in the Pool of Shiloam

by rsbakker

In “The Eliminativistic Implicit I,” we saw how the implicit anchors the communicative solution of humans and their activities. Since comprehension consists in establishing connections between behaviours and their precursors, the inscrutability of those precursors requires we use explanatory posits, suppositional surrogate precursors, to comprehend ourselves and our fellows. The ‘implicit’ is a kind of compensatory mechanism, a communicative prosthesis for neglect, a ‘blank box’ for the post facto proposal of various, abductively warranted precursors.

We also saw how the implicit possessed a number of different incarnations:

1) The Everyday Implicit: The regime of folk posits adapted to solve various practical problems involving humans (and animals).

2) The Philosophical Implicit: The regime of intentional posits thought to solve aspects of the human in general.

3) The Psychological Implicit: The regime of functional posits thought to solve various aspects of the human in general.

4) The Mechanical Implicit: The regime of neurobiological posits thought to solve various aspects of the human in general.

The overarching argument I’m pressing is that only (4) holds the key to any genuine theoretical understanding of (1-3). On my account of (4), (1) is an adaptive component of socio-communicative cognition, (2) is largely an artifact of theoretical misapplications of those heuristic systems, and (3) represents an empirical attempt to approximate (4) on the basis of indirect behavioural evidence.

In this episode, the idea is to illustrate how both the problems and the apparent successes of the Philosophical Implicit can be parsimoniously explained in terms of neglect and heuristic misapplication via Robert Brandom’s magisterial Making It Explicit. We’ll consider what motivates Brandom’s normative pragmatism, why he thinks that only normative cognition can explain normative cognition. Without this motivation, the explanation of normative cognition defaults to natural cognition (epitomized by science), and Brandom quite simply has no subject matter. The cornerstone of his case is the Wittgensteinian gerrymandering argument against Regularism. As I hope to show, Blind Brain Theory dismantles this argument with surprising facility. And it does so, moreover, in a manner that explains why so many theorists (including myself at one time!) are so inclined to find the argument convincing. As it turns out, the intuitions that motivate Normativism turn on a cluster of quite inevitable metacognitive illusions.

Blind Agents

Making It Explicit: Reasoning, Representing, and Discursive Commitment is easily the most sustained and nuanced philosophical consideration of the implicit I’ve encountered. I was gobsmacked when I first read it in the late 90s. Stylistically, it had a combination of Heideggerean density and Analytic clarity that I found narcotic. Argumentatively, I was deeply impressed by the way Brandom’s interpretive functionalism seemed to actually pull intentional facts from natural hats, how his account of communal taking as seemed to render normativity at once ‘natural’ and autonomous. For a time, I bought into a great deal of what Brandom had to say—I was particularly interested in working my ‘frame ontology’ into his normative framework. Making It Explicit had become a big part of my dissertation… ere epic fantasy saved my life!

I now think I was deluded.

In this work, Brandom takes nothing less than the explication of the ‘game of giving and asking for reasons’ as his task, “making explicit the implicit structure characteristic of discursive practice as such” (649). He wants to make explicit the role that making explicit plays in discursive cognition. It’s worth pausing to ponder the fact that we do so very many things with only the most hazy or granular second-order understanding. It might seem so platitudinal as to go without saying, but it’s worth noting in passing at least: Looming large in the implicature of all accounts such as Brandom’s is the claim that we somehow know the world without ever knowing how we know the world.

As we saw in the previous installment, the implicit designates a kind of profound cognitive incapacity, a lack of knowledge regarding our own activities. The implicit entails what might be called a Blind Agent Thesis, or BAT. Brandom, by his own admission, is attempting to generalize the behaviour of the most complicated biomechanical system known to science almost entirely blind to the functioning of that system. (He just thinks he’s operating at an ‘autonomous social functional’ level). He is, as we shall see, effectively arguing his own particular BAT.

Insofar as every theoretician, myself included, is trying to show ‘what everyone is missing,’ there’s a sense in which something like BAT is hard to deny. Why all the blather, otherwise? But this negative characterization clearly has a problem: How could we do anything without knowing how to do it? Obviously we have to ‘know how’ in some manner, otherwise we wouldn’t be able to do anything at all! This is the sense in which the implicit can be positively characterized as a species of knowing in its own right. And this leads us to the quasi-paradoxical understanding of the implicit as ‘knowing without knowing,’ a knowing how to do something without knowing how to discursively explain that doing.

Making explicit, Brandom is saying, has never been adequately made explicit—this despite millennia of philosophical disputation. He (unlike Kant, say) never offers any reason why this is the case, any consideration of what it is about making explicit in particular that should render it so resistant to explication—but then philosophers are generally prone to take the difficulty of their problems as a given. (I’m not the only one out there shouting the problem I happen to be working on is like, the most important problem ever!) I mention this because any attempt to assay the difficulty of the problem of making making-explicit explicit would have explicitly begged the question of whether he (or anyone else) possessed the resources required to solve the problem.

You know, as blind and all.

What Brandom provides instead is an elegant reprise of the problem’s history, beginning with Kant’s fundamental ‘transformation of perspective,’ the way he made explicit the hitherto implicit normative dimension of making explicit, what allowed him “to talk about the natural necessity whose recognition is implicit in cognitive or theoretical activity, and the moral necessity whose recognition is implicit in practical activity, as species of one genus” (10).

Kant, in effect, had discovered something that humanity had been all but oblivious to: the essentially prescriptive nature of making explicit. Of course, Brandom almost entirely eschews Kant’s metaphysical commitments: for him, normative constraint lies in the attributions of other agents and nowhere else. Kant, in other words, had not so much illuminated the darkness of the implicit (which he baroquely misconstrues as ‘transcendental’) as snatched one crucial glimpse of its nature.

Brandom attributes the next glimpse to Frege, with his insistence on “respecting and enforcing the distinction between the normative significance of applying concepts and the causal consequences of doing so” (11). What Frege made explicit about making explicit, in other words, was its systematic antipathy to causal explanation. As Brandom writes:

“Psychologism misunderstands the pragmatic significance of semantic contents. It cannot make intelligible the applicability of norms governing the acts that exhibit them. The force of those acts is a prescriptive rather than a descriptive affair; apart from their liability to assessments of judgments as true and inferences as correct, there is no such thing as judgment or inference. To try to analyze the conceptual contents of judgments in terms of habits or dispositions governing the sequences of brain states or mentalistically conceived ideas is to settle on the wrong sort of modality, on causal necessitation rather than rational or cognitive right.” (12)

Normativity is naturalistically inscrutable, and thanks to Kant (“the great re-enchanter,” as Turner (2010) calls him), we know that making explicit is normative. Any explication of the implicit of making explicit, therefore, cannot be causal—which is to say, mechanistic. Frege, in other words, makes explicit a crucial consequence of Kant’s watershed insight: the fact that making explicit can only be made explicit in normative, as opposed to natural, terms. Explication is an intrinsically normative activity. Making causal constraints explicit at most describes what systems will do, never prescribes what they should do. Since we now know that explication is an intrinsically normative activity, making explicit the governing causal constraints has the effect of rendering the activity unintelligible. The only way to make explication theoretically explicit is to make explicit the implicit normative constraints that make it possible.

Which leads Brandom to the third main figure of his brief history, Wittgenstein. Thus far, we know only that explication is an intrinsically normative affair—our picture of making explicit is granular in the extreme. What are norms? Why do they have the curious ‘force’ that they do? What does that force consist in? Even if Kant is only credited with making explicit the normativity of making explicit, you could say the bulk of his project is devoted to exploring questions precisely like these. Consider, for instance, his explication of reason:

“But of reason one cannot say that before the state in which it determines the power of choice, another state precedes in which this state itself is determined. For since reason itself is not an appearance and is not subject at all to any conditions of sensibility, no temporal sequence takes place in it even as to its causality, and thus the dynamical law of nature, which determines the temporal sequence according to rules, cannot be applied to it.” Kant, The Critique of Pure Reason, 543

Reason, in other words, is transcendental, something literally outside nature as we experience it, outside time, outside space, and yet somehow fundamentally internal to what we are. The how of human cognition, Kant believed, lies outside the circuit of human cognition, save for what could be fathomed via transcendental deduction. Kant, in other words, not only had his own account of what the implicit was, he also had an account for what rendered it so difficult to make explicit in the first place!

He had his own version of BAT, what might be called a Transcendental Blind Agent Thesis, or T-BAT.

Brandom, however, far prefers the later Wittgenstein’s answers to the question of how the intrinsic normativity of making explicit should be understood. As he writes,

“Wittgenstein argues that proprieties of performance that are governed by explicit rules do not form an autonomous stratum of normative statuses, one that could exist though no other did. Rather, proprieties governed by explicit rules rest on proprieties governed by practice. Norms that are explicit in the form of rules presuppose norms implicit in practices.” (20)

Kant’s transcendental represents just such an ‘autonomous stratum of normative statuses.’ The problem with such a stratum, aside from the extravagant ontological commitments allegedly entailed, is that it seems incapable of dealing with a peculiar characteristic of normative assessment known since ancient times in the form of Agrippa’s trilemma or the ‘problem of the criterion.’ The appeal to explicit rules is habitual, perhaps even instinctive, when we find ourselves challenged on some point of communication. Given the regularity with which such appeals succeed, it seems natural to assume that the propriety of any given communicative act turns on the rules we are prone to cite when challenged. The obvious problem, however, is that rule citing is itself a communicative act that can be challenged. It stems from occluded precursors the same as anything else.

What Wittgenstein famously argues is that what we’re appealing to in these instances is the assent of our interlocutors. If our interlocutors happen to disagree with our interpretation of the rule, suddenly we find ourselves with two disputes, two improprieties, rather than one. The explicit appeal to some rule, in other words, is actually an implicit appeal to some shared system of norms that we think will license our communicative act. This is the upshot of Wittgenstein’s regress of rules argument, the contention that “while rules can codify the pragmatic normative significance of claims, they do so only against a background of practices permitting the distinguishing of correct from incorrect applications of those rules” (22).

Since this account has become gospel in certain philosophical corners, it might pay to block out the precise way this Wittgensteinian explication of the implicit does and does not differ from the Kantian explication. One comforting thing about Wittgenstein’s move, from a naturalist’s standpoint at least, is that it adverts to the higher-dimensionality of actual practices—it’s pragmatism, in effect. Where Kant’s making explicit is governed from somewhere beyond the grave, Wittgenstein’s is governed by your friends, family, and neighbours. If you were to say there was a signature relationship between their views, you could cite this difference in dimensionality, the ‘solidity’ or ‘corporeality’ that Brandom appeals to in his bid to ground the causal efficacy of his elaborate architecture (631-2).

Put differently, the blindness on Wittgenstein’s account belongs to you and everyone you know. You could say he espouses a Communal Blind Agent Thesis, or C-BAT. The idea is that we’re continually communicating with one another while utterly oblivious as to how we’re communicating with one another. We’re so oblivious, in fact, we’re oblivious to the fact we are oblivious. Communication just happens. And when we reflect, it seems to be all that needs to happen—until, that is, the philosopher begins asking his damn questions.

It’s worth pointing out, while we’re steeping in this unnerving image of mass, communal blindness, that Wittgenstein, almost as much as Kant, was in a position analogous to empirical psychologists researching cognitive capacities back in the 1950s and 1960s. With reference to the latter, Piccinini and Craver have argued (“Integrating psychology and neuroscience: functional analyses as mechanism sketches,” 2011) that informatic penury was the mother of functional invention, that functional analysis was simply psychology’s means of making do, a way to make the constitutive implicit explicit in the absence of any substantial neuroscientific information. Kant and Wittgenstein are pretty much in the same boat, only absent any experimental means to test and regiment their guesswork. The original edition of Philosophical Investigations, in case you were wondering, was published in 1953, which means Wittgenstein’s normative contextualism was cultured in the very same informatic vacuum as functional analysis. And the high-altitude moral, of course, amounts to the same: times have changed.

The cognitive sciences have provided a tremendous amount of information regarding our implicit, neurobiological precursors, so much so that the mechanical implicit is a given. The issue now isn’t one of whether the implicit is causal/mechanical in some respect, but whether it is causal/mechanical in every respect. The question, quite simply, is one of what we are blind to. Our biology? Our ‘mental’ programming? Our ‘normative’ programming? The more we learn about our biology, the more we fill in the black box with scientific facts, the more difficult it seems to become to make sense of the latter two.


Ineliminable Inscrutability Scrutinized and Eliminated

Though he comes nowhere near framing the problem in these explicitly informatic terms, Brandom is quite aware of this threat. American pragmatism has always maintained close contact with the natural sciences, and post-Quine, at least, it has possessed more than its fair share of eliminativist inclinations. This is why he goes to such lengths to argue the ineliminability of the normative. This is why he follows his account of Kant’s discovery of the normativity of the performative implicit with an account of Frege’s critique of psychologism, and his account of Wittgenstein’s regress argument against ‘Regulism’ with an account of his gerrymandering argument against ‘Regularism.’

Regularism proposes we solve the problem of rule-following with patterns of regularities. If a given performance conforms to some pre-existing pattern of performances, then we call that performance correct or competent. If it doesn’t so conform, then we call it incorrect or incompetent. “The progress promised by such a regularity account of proprieties of practice,” Brandom writes, “lies in the possibility of specifying the pattern or regularity in purely descriptive terms and then allowing the relation between regular and irregular performance to stand in for the normative distinction between what is correct and what is not” (MIE 28). The problem with Regularism, however, is “that it threatens to obliterate the contrast between treating a performance as subject to normative assessment of some sort and treating it as subject to physical laws” (27). Thus the challenge confronting any Regularist account of rule-following, as Brandom sees it, is to account for its normative character. Everything in nature ‘follows’ the ‘rules of nature,’ the regularities isolated by the natural sciences. So what does the normativity that distinguishes human rule-following consist in?

“For a regularist account to weather this challenge, it must be able to fund a distinction between what is in fact done and what ought to be done. It must make room for the permanent possibility of mistakes, for what is done or taken to be correct nonetheless to turn out to be incorrect or inappropriate according to some rule or practice” (27).

The ultimate moral, of course, is that there’s simply no way this can be done, there’s no way to capture the distinction between what happens and what ought to happen on the basis of what merely happens. No matter what regularity the Regularist adduces ‘to play the role of norms implicit in practice,’ we find ourselves confronted by the question of whether it’s the right regularity. The fact is any number of regularities could play that role, stranding us with the question of which regularity one should conform to—which is to say, the question of the very normative distinction the Regularist set out to solve in the first place. Adverting to dispositions to pick out the relevant regularity simply defers the problem, given that “[n]obody ever acts incorrectly in the sense of violating his or her own dispositions” (29).

For Brandom, as with Wittgenstein, the problem of Regularism is intimately connected to the problem of Regulism: “The problem that Wittgenstein sets up…” he writes, “is to make sense of a notion of norms implicit in practice that will not lose either the notion of the implicitness, as regulism does, or the notion of norms, as simple regularism does” (29). To see this connection, you need only consider one of Wittgenstein’s more famous passages from Philosophical Investigations:

§217. “How am I able to obey a rule?”–if this is not a question about causes, then it is about the justification for my following the rule in the way I do.

If I have exhausted the justifications I have reached bedrock, and my spade is turned. Then I am inclined to say: “This is simply what I do.”

The idea, famously, is that rule-following is grounded, not in explicit rules, but in our actual activities, our practices. The idea, as we saw above, is that rule-following is blind. It is ‘simply what we do.’ “When I obey a rule, I do not choose,” Wittgenstein writes. “I obey the rule blindly” (§219). But if rule-following is blind, just what we find ourselves doing in certain contexts, then in what sense is it normative? Brandom quotes McDowell’s excellent (certainly from a BBT standpoint!) characterization of the problem in “Wittgenstein on Following a Rule”: “How can a performance be nothing but a ‘blind’ reaction to a situation, not an attempt to act on interpretation (thus avoiding Scylla); and be a case of going by a rule (avoiding Charybdis)?” (Mind, Value, and Reality, 242).

Wittgenstein’s challenge, in other words, is one of theorizing nonconscious rule-following in a manner that does not render normativity some inexplicable remainder. The challenge is to find some way to avoid Regulism without lapsing into Regularism. Of course, we’ve grown inured to the notion of ‘implicit norms’ as a theoretical explanatory posit, so much so as to think them almost self-evident—I know this was once the case for me. But the merest questioning quickly reveals just how odd implicit norms are. Nonconscious rule-following is automatic rule-following, after all, something mechanical, dispositional. Automaticity seems to preclude normativity, even as it remains amenable to regularities and dispositions. Although it seems obvious that evaluation and justification are things that we regularly do, that we regularly engage in normative cognition navigating our environments (natural and social), it is by no means clear that only normative posits can explain normative cognition. Given that normative cognition is another natural artifact, the product of evolution, and given the astounding explanatory successes of science, it stands to reason that natural, not supernatural, posits are likely what’s required.

All this brings us back to C-BAT, the fact that Wittgenstein’s problem, like Brandom’s, is the problem of neglect. ‘This is simply what I do,’ amounts to a confession of abject ignorance. Recall the ‘Hidden Constraint Model’ of the implicit from our previous discussion. Cognizing rule-following behaviour requires cognizing the precursors to rule-following behaviour, precursors that conscious cognition systematically neglects. Most everyone agrees on the biomechanical nature of those precursors, but Brandom (like intentionalists more generally) wants to argue that biomechanically specified regularities and dispositions are not enough, that something more is needed to understand the normative character of rule-following, given the mysterious way regularities and dispositions preclude normative cognition. The only way to avoid this outcome, he insists, is to posit some form of nonconscious normativity, a system of preconscious, pre-communicative ‘rules’ governing cognitive discourse. The upshot of Wittgenstein’s arguments against Regularism seems to be that only normative posits can adequately explain normative cognition.

But suddenly, the stakes are flipped. Just as the natural is difficult to understand in the context of the normative, so too is the normative difficult to understand in the context of the natural. For some critics, this is difficulty enough. In Explaining the Normative, for instance, Stephen Turner does an excellent job tracking, among other things, the way Normativism attempts to “take back ground lost to social science explanation” (5). He begins by providing a general overview of the Normativist approach, then shows how these self-same tactics characterized social science debates of the early twentieth-century, only to be abandoned as their shortcomings became manifest. “The history of the social sciences,” he writes, “is a history of emancipation from the intellectual propensity to intentionalize social phenomenon—this was very much part of the process that Weber called the disenchantment of the world” (147). His charge is unequivocal: “Brandom,” he writes, “proposes to re-enchant the world by re-instating the belief in normative powers, which is to say, powers in some sense outside of and distinct from the forces known to science” (4). But although this is difficult to deny in a broad stroke sense, he fails to consider (perhaps because his target is Normativism in general, and not Brandom, per se) the nuance and sensitivity Brandom brings to this very issue—enough, I think, to walk away theoretically intact.

In the next installment, I’ll consider the way Brandom achieves this via Dennett’s account of the Intentional Stance, but for the nonce, it’s important that we keep the problem of re-enchantment on the table. Brandom is arguing that the inability of natural posits to explain normative cognition warrants a form of theoretical supernaturalism, a normative metaphysics, albeit one he wants to make as naturalistically palatable as possible.

Even though neglect is absolutely essential to their analyses of Regulism and Regularism, neither Wittgenstein nor Brandom so much as pause to consider it. As astounding as it is, they simply take our utter innocence of our own natural and normative precursors as a given, an important feature of the problem ecology under consideration to be sure, but otherwise irrelevant to the normative explication of normative cognition. Any role neglect might play beyond anchoring the need for an account of implicit normativity is entirely neglected. The project of Making It Explicit is nothing other than the project of making the activity of making explicit explicit, which is to say, the project of overcoming metacognitive neglect regarding normative cognition, and yet nowhere does Brandom so much as consider just what he’s attempting to overcome.

Not surprisingly, this oversight proves catastrophic—for the whole of Normativism, and not simply Brandom.

Just consider, for instance, the way Brandom completely elides the question of the domain specificity of normative cognition. Normative cognition is a product of evolution, part of a suite of heuristic systems adapted to solve some range of social problems as effectively as possible given the resources available. It seems safe to surmise that normative cognition, as heuristic, possesses what Todd, Gigerenzer, and the ABC Research Group (2012) call an adaptive ‘problem-ecology,’ a set of environments possessing complementary information structures. Heuristics solve via the selective uptake of information, wedding them, in effect, to specific problem-solving domains. ‘Socio-cognition,’ which manages to predict, explain, even manipulate astronomically complex systems on the meagre basis of observable behaviour, is paradigmatic of a heuristic system. In the utter absence of causal information, it can draw a wide variety of reliable causal conclusions, but only within a certain family of problems. As anthropomorphism, the personification or animation of environments, shows, humans are predisposed to misapply socio-cognition to natural environments. Pseudo-solving natural environments via socio-cognition may have solved various social problems, but precious few natural ones. In fact, the process of ‘disenchantment’ can be understood as a kind of ‘rezoning’ of socio-cognition, a process of limiting its application to those problem-ecologies where it actually produces solutions.

Which leads us to the question: So what, then, is the adaptive problem ecology of normative cognition? More specifically, how do we know that the problem of normative cognition belongs to the problem ecology of normative cognition?

As we saw, Brandom’s argument against Regularism could itself be interpreted as a kind of ‘ecology argument,’ as a demonstration of how the problem of normative cognition does not belong to the problem ecology of natural cognition. Natural cognition cannot ‘fund the distinction between ought and is.’ Therefore the problem of normative cognition does not belong to the problem ecology of natural cognition. In the absence of any alternatives, we then have an abductive case for the necessity of using normative cognition to solve normative cognition.

But note how recognizing the heuristic, or ecology-dependent, nature of normative cognition has completely transformed the stakes of Brandom’s original argument. The problem for Regularism turns, recall, on the conspicuous way mere regularities fail to capture the normative dimension of rule-following. But if normative cognition were heuristic (as it almost certainly is), if what we’re prone to identify as the ‘normative dimension’ is something specific to the application of normative cognition, then this becomes the very problem we should expect. Of course the normative dimension disappears absent the application of normative cognition! Since Regularism involves solving normative cognition using the resources of natural cognition, it simply follows that it fails to engage resources specific to normative cognition. Consider Kripke’s formulation of the gerrymandering problem in terms of the ‘skeptical paradox’: “For the sceptic holds that no fact about my past history—nothing that was ever in my mind, or in my external behavior—establishes that I meant plus rather than quus” (Wittgenstein, 13). Even if we grant a rule-follower access to all factual information pertaining to rule-following, a kind of ‘natural omniscience,’ they will still be unable to isolate any regularity capable of playing ‘the role of norms implicit in practice.’ Again, this is precisely what we should expect given the domain specificity of normative cognition proposed here. If ‘normative understanding’ were the artifact of a cognitive system dedicated to the solution of a specific problem-ecology, then it simply follows that the application of different cognitive systems would fail to produce normative understanding, no matter how much information was available.

What doesn’t follow is that normative cognition thus lies outside the problem ecology of natural cognition, let alone inside the problem ecology of normative cognition. The ‘explanatory failure’ that Brandom and others use to impeach the applicability of natural cognition to normative cognition is nothing of the sort. It simply makes no sense to demand that one form of cognition solve another form of cognition as if it were that other form. We know that normative cognition belongs to social cognition more generally, and that social cognition—‘mindreading’—operates heuristically, that it has evolved to solve astronomically complicated biomechanical problems involving the prediction, understanding, and manipulation of other organisms absent detailed biomechanical information. Adapted to solve in the absence of this information, it stands to reason that the provision of that information, facts regarding biomechanical regularities, will render it ineffective—‘grind cognitive gears,’ you could say.

Since these ‘technical details’ are entirely invisible to ‘philosophical reflection’ (thanks to metacognitive neglect), the actual ecological distinction between these systems escapes Brandom, and he assumes, as all Normativists assume, that the inevitable failure of natural cognition to generate instances of normative cognition means that only normative cognition can solve normative cognition. Blind to our cognitive constitution, instances of normative cognition are all he or anyone else has available: our conscious experience of normative cognition consists of nothing but these instances. Explaining normative cognition is thus conflated with replacing normative cognition. ‘Competence’ becomes yet another ‘spooky explanandum,’ another metacognitive inkling, like ‘qualia,’ or ‘content,’ that seems to systematically elude the possibility of natural cognition (for suspiciously similar reasons).

This apparent order of supernatural explananda then provides the abductive warrant upon which Brandom’s entire project turns—all T-BAT and C-BAT approaches, in fact. If natural cognition is incapable, then obviously something else is required. Impressed by how our first-order social troubleshooting makes such good use of the Everyday Implicit, and oblivious to the ecological limits of the heuristic systems responsible, we effortlessly assume that making use of some Philosophical Implicit will likewise enable second-order social troubleshooting… that tomes like Making It Explicit actually solve something.

But as the foregoing should make clear, precisely the opposite is the case. As a system adapted to troubleshoot first-order social ecologies, normative cognition seems unlikely to theoretically solve normative cognition in any satisfying manner. The very theoretical problems that plague Normativism—supernaturalism, underdetermination, and practical inapplicability—are the very problems we should expect if normative cognition were not in fact among the problems that normative cognition can solve.

As an evolved, biological capacity, however, normative cognition clearly belongs to the problem ecology of natural cognition. Simply consider how much the above sketch has managed to ‘make explicit.’ In parsimonious fashion it explains: 1) the general incompatibility of natural and normative cognition; 2) the inability of Regularism to ‘play the role of norms implicit in practice’; 3) why this inability suggests the inapplicability of natural cognition to the problem of normative cognition; 4) why Normativism seems the only alternative as a result; and 5) why Normativism nonetheless suffers the debilitating theoretical problems it does. It solves the notorious Skeptical Paradox, and much else aside, using only the idiom of natural cognition, which is to say, in a manner not only compatible with the life sciences, but empirically tractable as well.

Brandom is the victim of a complex of illusions arising out of metacognitive neglect. Wittgenstein, who had his own notion of heuristics and problem ecologies (grammars and language games), was sensitive to the question of what kinds of problems could be solved given the language we find ourselves stranded with. As a result, he eschews the kind of systematic normative metaphysics that Brandom epitomizes. He takes neglect seriously insofar as ‘this is simply what I do’ demarcates, for him, the pale of credible theorization. Even so, he nevertheless succumbs to a perceived need to submit, however minimally or reluctantly, the problem of normative cognition (in terms of rule-following) to the determinations of normative cognition, and is thus compelled to express his insights in the self-same supernatural idiom as Brandom, who eschews what is most valuable in Wittgenstein, his skepticism, and seizes on what is most problematic, his normative metaphysics.

There is a far more parsimonious way. We all agree humans are physical systems nested within a system of such systems. What we need to recognize is how being so embedded poses profound constraints on what can and cannot be cognized. What can be readily cognized are other systems (within a certain range of complexity). What cannot be readily cognized is the apparatus of cognition itself. The facts we call ‘natural’ belong to the former, and the facts we call ‘intentional’ belong to the latter. Where the former commands an integrated suite of powerful environmental processors, the latter relies on a hodgepodge of specialized socio-cognitive and metacognitive hacks. Since we have no inkling of this, we have no inkling of their actual capacities, and so run afoul a number of metacognitive impasses. So for instance, intentional cognition has evolved to overcome neglect, to solve problems in the absence of causal information. This is why philosophical reflection convinces us we somehow stand outside the causal order via choice or reason or what have you. We quite simply confuse an incapacity, our inability to intuit our biomechanicity, with a special capacity, our ability to somehow transcend or outrun the natural order.

We are physical in such a way that we cannot intuit ourselves as wholly physical. To cognize nature is to be blind to the nature of cognizing. To be blind to that blindness is to think cognizing has no nature. So we assume that nature is partial, and that we are mysteriously whole, a system unto ourselves.

Reason be praised.


The Metacritique of Reason

by rsbakker



Whether the treatment of such knowledge as lies within the province of reason does or does not follow the secure path of a science, is easily to be determined from the outcome. For if, after elaborate preparations, frequently renewed, it is brought to a stop immediately it nears its goal; if often it is compelled to retrace its steps and strike into some new line of approach; or again, if the various participants are unable to agree in any common plan of procedure, then we may rest assured that it is very far from having entered upon the secure path of a science, and is indeed a merely random groping.  Immanuel Kant, The Critique of Pure Reason, 17.

The moral of the story, of course, is that this description of Dogmatism’s failure very quickly became an apt description of Critical Philosophy as well. As soon as others saw all the material inferential wiggle room in the interpretation of condition and conditioned, it was game over. Everything that damned Dogmatism in Kant’s eyes now characterizes his own philosophical inheritance.

Here’s a question you don’t come across everyday: Why did we need Kant? Why did philosophy have to discover the transcendental? Why did the constitutive activity of cognition elude every philosopher before the 18th Century? The fact we had to discover it means that it was somehow ‘always there,’ implicit in our experience and behaviour, but we just couldn’t see it. Not only could we not see it, we didn’t even realize it was missing, we had no inkling we needed to understand it to understand ourselves and how we make sense of the world. Another way to ask the question of the inscrutability of the ‘transcendental,’ then, is to ask why the passivity of cognition is our default assumption. Why do we assume that ‘what we see is all there is’ when we reflect on experience?

Why are we all ‘naive Dogmatists’ by default?


It’s important to note that no one but no one disputes that it had to be discovered. This is important because it means that no one disputes that our philosophical forebears once uniformly neglected the transcendental, that it remained for them an unknown unknown. In other words, both the Intentionalist and the Eliminativist agree on the centrality of neglect in at least this one regard. The transcendental (whatever it amounts to) is not something that metacognition can readily intuit—so much so that humans engaged in thousands of years of ‘philosophical reflection’ without the least notion that it even existed. The primary difference is that the Intentionalist thinks they can overcome neglect via intuition and intellection, that theoretical metacognition (philosophical reflection), once alerted to the existence of the transcendental, suddenly somehow possesses the resources to accurately describe its structure and function. The Eliminativist, on the other hand, asks, ‘What resources?’ Lay them out! Convince me! And more corrosively still, ‘How do you know you’re not still blinkered by neglect?’ Show me the precautions!

The Eliminativist, in other words, pulls a Kant on Kant and demands what amounts to a metacritique of reason.

The fact is, short of this accounting of metacognitive resources and precautions, the Intentionalist has no way of knowing whether or not they’re simply a ‘Stage-Two Dogmatist,’ whether their ‘clarity,’ like the specious clarity of the Dogmatist, isn’t simply the product of neglect—a kind of metacognitive illusion in effect. For the Eliminativist, the transcendental (whatever its guise) is a metacognitive artifact. For them, the obvious problems the Intentionalist faces—the supernaturalism of their posits, the underdetermination of their theories, the lack of decisive practical applications—are all symptomatic of inquiry gone wrong. Moreover, they find it difficult to understand why the Intentionalist would persist in the face of such problems given only a misplaced faith in their metacognitive intuitions—especially when the sciences of the brain are in the process of discovering the actual constitutive activity responsible! You want to know what’s really going on ‘implicitly,’ ask a cognitive neuroscientist. We’re just toying with our heuristics out of school otherwise.

We know that conscious cognition involves selective information uptake for broadcasting throughout the brain. We also know that no information regarding the astronomically complex activities constitutive of conscious cognition as such can be so selected and broadcast. So it should come as no surprise whatsoever that the constitutive activity responsible for experience and cognition eludes experience and cognition—that the ‘transcendental,’ so-called, had to be discovered. More importantly, it should come as no surprise that this constitutive activity, once discovered, would be systematically misinterpreted. Why? The philosopher ‘reflects’ on experience and cognition, attempts to ‘recollect’ them in subsequent moments of experience and cognition, in effect, and realizes (as Hume did regarding causality, say) that the information available cannot account for the sum of experience and cognition: the philosopher comes to believe (beginning most famously with Kant) that experience does not entirely beget experience, that the constitutive constraints on experience somehow lie orthogonal to experience. Since no information regarding the actual neural activity responsible is available, and since, moreover, no information regarding this lack is available, the philosopher presumes these orthogonal constraints must conform to their metacognitive intuitions. Since the resulting constraints are incompatible with causal cognition, they seem supernatural: transcendental, virtual, quasi-transcendental, aspectual, what have you. The ‘implicit’ becomes the repository of otherworldly constraining or constitutive activities.

Philosophy had to discover the transcendental because of metacognitive neglect—on this fact, both the Intentionalist and the Eliminativist agree. The Eliminativist simply takes the further step of holding neglect responsible for the ontologically problematic, theoretically underdetermined, and practically irrelevant character of Intentionalism. Far from what Kant supposed, Critical Philosophy—in all its incarnations, historical and contemporary—simply repeats, rather than solves, these sins of Dogmatism. The reason for this, the Eliminativist says, is that it overcomes one metacognitive illusion only to run afoul a cluster of others.

This is the sense in which Blind Brain Theory can be seen as completing as much as overthrowing the Kantian project. Though Kant took cognitive dogmatism, the assumption of cognitive simplicity and passivity, as his target, he nevertheless ran afoul metacognitive dogmatism, the assumption of metacognitive simplicity and passivity. He thought—as his intellectual heirs still think—that philosophical reflection possessed the capacity to apprehend the superordinate activity of cognition, that it could accurately theorize reason and understanding. We now possess ample empirical grounds to think this is simply not the case. There’s the mounting evidence comprising what Princeton psychologist Emily Pronin has termed the ‘Introspection Illusion,’ direct evidence of metacognitive incompetence, but the fact is, every nonconscious function experimentally isolated by cognitive science illuminates another constraining/constitutive cognitive activity utterly invisible to philosophical reflection, another ignorance that the Intentionalist believes has no bearing on their attempts to understand understanding.

One can visually schematize our metacognitive straits in the following way:

[Figure: Metacognitive Capacity]

This diagram simply presumes what natural science presumes, that you are a complex organism biomechanically synchronized with your environments. Light hits your retina, sound hits your eardrum, neural networks communicate and behaviours are produced. Imagine your problem-solving power set on a swivel and swung 360 degrees across the field of all possible problems, which is to say problems involving lateral, or nonfunctionally entangled environmental systems, as well as problems involving medial, or functionally entangled enabling systems, such as those comprising your brain. This diagram, then, visualizes the loss and gain in ‘cognitive dimensionality’—the quantity and modalities of information available for problem solving—as one swings from the third-person lateral to the first-person medial. Dimensionality peaks with external cognition because of the power and ancient evolutionary pedigree of the systems involved. The dimensionality plunges for metacognition, on the other hand, because of medial neglect, the way structural complicity, astronomical complexity, and evolutionary youth effectively render the brain unwittingly blind to itself.

This is why the blue line tracking our assumptive or ‘perceived’ medial capacity in the figure peaks where our actual medial capacity bottoms out: with the loss in dimensionality comes the loss in the ability to assess reliability. Crudely put, the greater the cognitive dimensionality, the greater the problem-solving capacity, the greater the error-signalling capacity. And conversely, the less the cognitive dimensionality, the less the problem-solving capacity, the less the error-signalling capacity. The absence of error-signalling means that cognitive consumption of ineffective information will be routine, impossible to distinguish from the consumption of effective information. This raises the spectre of ‘psychological anosognosia’ as distinct from the clinical, the notion that the very cognitive plasticity that allowed humans to develop ACH thinking has led to patterns of consumption (such as those underwriting ‘philosophical reflection’) that systematically run afoul medial neglect. Even though low dimensionality speaks to cognitive specialization, and thus to the likely ineffectiveness of cognitive repurposing, the lack of error-signalling means the information will be routinely consumed no matter what. Given this, one should expect ACH thinking–reason–to be plagued with the very kinds of problems that plague theoretical discourse outside the sciences now, the perpetual coming up short, the continual attempt to retrace steps taken, the interminable lack of any decisive consensus…

Or what Kant calls ‘random groping.’

The most immediate, radical consequence of this 360 degree view is that the opposition between the first-person and third-person disappears. Since all the apparently supernatural characteristics rendering the first-person naturalistically inscrutable can now be understood as artifacts of neglect—illusions of problem-solving sufficiency—all the ‘hard problems’ posed by intentional phenomena simply evaporate. The metacritique of reason, far from pointing a way to any ‘science of the transcendental,’ shows how the transcendental is itself a dogmatic illusion, how cryptic things like the ‘a priori’ are obvious expressions of medial neglect, sources of constraint ‘from nowhere’ that baldly demonstrate our metacognitive incapacity to recognize our metacognitive incapacity. For all the prodigious problem-solving power of logic and mathematics, a quick glance at the philosophy of either is enough to assure you that no one knows what they are. Blind Brain Theory explains this remarkable contrast of insight and ignorance, how we could possess tools so powerful without any decisive understanding of the tools themselves.

The metacritique of reason, then, leads to what might be called ‘pronaturalism,’ a naturalism that can be called ‘progressive’ insofar as it continues to eschew the systematic misapplication of intentional cognition to domains that it cannot hope to solve—that continues the process of exorcising ghosts from the machinery of nature. The philosophical canon swallowed Kant so effortlessly that people often forget he was attempting to put an end to philosophy, to found a science worthy of the name, one which grounded both the mechanical and the ghostly. By rendering the ghostly the formal condition of any cognition of the mechanical, however, he situated his discourse squarely in the perpetually underdetermined domain of philosophy. His failure was inevitable.

The metacritique of reason makes the very same attempt, only this time anchored in the only real credible source of theoretical cognition we possess: the sciences. It allows us to peer through the edifying fog of our intentional traditions and to see ourselves, at long last, as wholly continuous with crazy shit like this…

[Figure: Filamentary Map]


The Blind Mechanic II: Reza Negarestani and the Labour of Ghosts

by rsbakker

For some time now I’ve been arguing that the implications of the Singularity already embroil us—that the Singularity can be seen, in fact, as the material apotheosis of the Semantic Apocalypse, insofar as it is the point where the Scientific Image of the human at last forecloses on the Manifest Image. In “The Labor of the Inhuman” (which can be found here and here, with Craig Hickman’s critiques, here and here), Reza Negarestani adopts Brandom’s claim that sapience, the capacity to play the ‘game of giving and asking for reasons,’ distinguishes humans as human. He then goes on to argue that this allows us, and ultimately commits us, to seeing the human as a kind of temporally extended process of rational revision, one that ultimately results in the erasure of the human—or the ‘inhuman.’ Ultimately, what it means to be human is to be embroiled in a process of becoming inhuman. He states his argument thus:

The contention of this essay is that universality and collectivism cannot be thought, let alone attained, through consensus or dissensus between cultural tropes, but only by intercepting and rooting out what gives rise to the economy of false choices and by activating and fully elaborating what real human significance consists of. For it is, as will be argued, the truth of human significance—not in the sense of an original meaning or a birthright, but in the sense of a labor that consists of the extended elaboration of what it means to be human through a series of upgradable special performances—that is rigorously inhuman.

In other words, so long as we fail to comprehend the inhumanity of the human, this rational-revisionary process, we fail to understand the human, and so have little hope of solving problems pertaining to the human. Understanding the ‘truth of human significance,’ therefore, requires understanding what the future will make of the human. It requires that Negarestani prognosticate. It requires, in other words, that he pick out the specific set of possibilities constituting the inhuman. The only principled way to do that is to comprehend some set of systematic constraints operative in the present. But his credo, unlike that of the ‘Hard SF’ writer, is to ignore the actual technics of the natural, and to focus on the speculative technics of the normative. His strategy, in other words, is to predict the future of the human using only human resources—to see the fate of the human, the ‘inhuman,’ as something internal to the human. And this, as I hope to show, is simply not plausible.

He understands the danger of conceiving his constraining framework as something fixed: “humanism cannot be regarded as a claim about human that can only be professed once and subsequently turned into a foundation or axiom and considered concluded.” He appreciates the implausibility of the static, Kantian transcendental approach. As a result, he proposes to take the Sellarsian/Brandomian approach, focussing on the unique relationship between the human and sapience, the “distinction between sentience as a strongly biological and natural category and sapience as a rational (not to be confused with logical) subject.” He continues:

The latter is a normative designation which is specified by entitlements and the responsibilities they bring about. It is important to note that the distinction between sapience and sentience is marked by a functional demarcation rather than a structural one. Therefore, it is still fully historical and open to naturalization, while at the same time being distinguished by its specific functional organization, its upgradable set of abilities and responsibilities, its cognitive and practical demands.

He’s careful here to hedge, lest the dichotomy between the normative and the natural come across as too schematic:

The relation between sentience and sapience can be understood as a continuum that is not differentiable everywhere. While such a complex continuity might allow the naturalization of normative obligations at the level of sapience—their explanation in terms of naturalistic causes—it does not permit the extension of certain conceptual and descriptive resources specific to sapience (such as the particular level of mindedness, responsibilities, and, accordingly, normative entitlements) to sentience and beyond.

His dilemma here is the dilemma of the Intentionalist more generally. Science, on the one hand, is nothing if not powerful. The philosopher, on the other hand, has a notorious, historical tendency to confuse the lack of imagination for necessity. Foot-stomping will not do. He needs some way to bite this bullet without biting it, basically, some way of acknowledging the possible permeability of normativity to naturalization, while insisting, nonetheless, on the efficacy of some inviolable normative domain. To accomplish this, he adverts to the standard appeal to the obvious fact that norm-talk actually solves norm problems, that normativity, in other words, obviously possesses a problem-ecology. But of course the fact that norm-talk is indispensable to solving problems within a specific problem-ecology simply raises the issue of the limits of this ecology—and more specifically, whether the problem of humanity’s future actually belongs to that problem-ecology. What he needs to establish is the adequacy of theoretical, second-order norm-talk to the question of what will become of the human.

He offers us a good, old fashioned transcendental argument instead:

The rational demarcation lies in the difference between being capable of acknowledging a law and being solely bound by a law, between understanding and mere reliable responsiveness to stimuli. It lies in the difference between stabilized communication through concepts (as made possible by the communal space of language and symbolic forms) and chaotically unstable or transient types of response or communication (such as complex reactions triggered purely by biological states and organic requirements or group calls and alerts among social animals). Without such stabilization of communication through concepts and modes of inference involved in conception, the cultural evolution as well as the conceptual accumulation and refinement required for the evolution of knowledge as a shared enterprise would be impossible.

Sound familiar? The necessity of the normative lies in the irreflexive contingency of the natural. Even though natural relations constitute biological systems of astounding complexity, there’s simply no way, we are told, they can constitute the kind of communicative stability that human knowledge and cultural evolution requires. The machinery is just too prone to rattle! Something over and above the natural—something supernatural—is apparently required. “Ultimately,” Negarestani continues, “the necessary content as well as the real possibility of human rests on the ability of sapience—as functionally distinct from sentience—to practice inference and approach non-canonical truth by entering the deontic game of giving and asking for reasons.”

It’s worth pausing to take stock of the problems we’ve accumulated up to this point. 1) Even though the human is a thoroughgoing product of its past natural environments, the resources required to understand the future of the human, we are told, lie primarily, if not entirely, within the human. 2) Even though norm-talk possesses a very specific problem-ecology, we are supposed to take it on faith that the nature of norm-talk is something that only more norm-talk can solve, rather than otherwise (as centuries of philosophical intractability would suggest). And now, 3) Even though the natural, for all its high dimensional contingencies, is capable of producing the trillions of mechanical relations that constitute you, it is not capable of ‘evolving human knowledge.’ Apparently we need a special kind of supernatural game to do this, the ‘game of giving and asking for reasons,’ a low-dimensional, communicative system of efficacious (and yet acausal!) normative posits based on… we are never told—some reliable fund of information, one would hope.

But since no normativist that I know of has bothered to account for the evidential bases of their position, we’re simply left with faith in metacognitive intuition and this rather impressive sounding, second-order theoretical vocabulary of unexplained explainers—‘commitments,’ ‘inferences,’ ‘proprieties,’ ‘deontic statuses,’ ‘entitlements,’ and the like—a system of supernatural efficacies beyond the pale of any definitive arbitration. Negarestani sums up this normative apparatus with the term ‘reason,’ and it is reason, understood in this inferentialist sense, that provides the basis of charting the future of the human. “Reason’s main objective is to maintain and enhance itself,” he writes. “And it is the self-actualization of reason that coincides with the truth of the inhuman.”

Commitment to humanity requires scrutinizing the meaning of humanity, which in turn requires making the implicature of the human explicit—not just locally, but in its entirety. The problem, in a nutshell, is that the meaning of the human is not analytic, something that can be explicated via analysis alone. It arises, rather, out of the game of giving and asking for reasons, the actual, historical processes that comprise discursivity. And this means that unpacking the content of the human is a matter of continual revision, a process of interpretative differentiation that trends toward the radical, the overthrow of “our assumptions and expectations about what ‘we’ is and what it entails.”

The crowbar of this process of interpretative differentiation is what Negarestani calls an ‘intervening attitude,’ that moment in the game where the interpretation of claims regarding the human spark further claims regarding the human, the interpretation of which sparks yet further claims, and so on. The intervening attitude thus “counts as an enabling vector, making possible certain abilities otherwise hidden or deemed impossible.” This is why he can claim that “[r]evising and constructing the human is the very definition of committing to humanity.” And since this process is embedded in the game of giving and asking for reasons, he concludes that “committing to humanity is tantamount to complying with the revisionary vector of reason and constructing humanity according to an autonomous account of reason.”

And so he writes:

Humanity is not simply a given fact that is behind us. It is a commitment in which the reassessing and constructive strains inherent to making a commitment and complying with reason intertwine. In a nutshell, to be human is a struggle. The aim of this struggle is to respond to the demands of constructing and revising human through the space of reasons.

In other words, we don’t simply ‘discover the human’ via reason, we construct it as well. And thus the emancipatory upshot of Negarestani’s argument: if reasoning about the human is tantamount to constructing the human, then we have a say regarding the future of humanity. The question of the human becomes an explicitly political project, and a primary desideratum of Negarestani’s stands revealed. He thinks reason as he defines it, the at once autonomous (supernatural) and historically concrete (or ‘solid,’ as Brandom would say) revisionary activity of theoretical argumentation, provides a means of assessing the adequacy of various political projects (traditional humanism and what he calls ‘kitsch Marxism’) according to their understanding of the human. Since my present concern is to assess the viability of the account of reason Negarestani uses to ground the viability of this yardstick, I will forego considering his specific assessments in any detail.

The human is the malleable product of machinations arising out of the functional autonomy of reason. Negarestani refers to this as a ‘minimalist definition of humanity,’ but as the complexity of the Brandomian normative apparatus he deploys makes clear, it is anything but. The picture of reason he espouses is as baroque and reticulated as anything Kant ever proposed. It’s a picture, after all, that requires an entire article to simply get off the ground! Nevertheless, this dynamic normative apparatus provides Negarestani with a generalized means of critiquing the intransigence of traditional political commitments. The ‘self-actualization’ of reason lies in its ability “to bootstrap complex abilities out of its primitive abilities.” Even though continuity with previous commitments is maintained at every step in the process, over time the consequences are radical: “Reason is therefore simultaneously a medium of stability that reinforces procedurality and a general catastrophe, a medium of radical change that administers the discontinuous identity of reason to an anticipated image of human.”

This results in what might be called a fractured ‘general implicature,’ a space of reasons rife with incompatibilities stemming from the refusal or failure to assiduously monitor and update commitments in light of the constructive revisions falling out of the self-actualization of reason. Reason itself, Negarestani is arguing, is in the business of manufacturing ideological obsolescence, always in the process of rendering its prior commitments incompatible with its present ones. Given his normative metaphysics, reason has become the revisionary, incremental “director of its own laws,” one that has the effect of rendering its prior laws, “the herald of those which are whispered to it by an implanted sense or who knows what tutelary nature” (Kant, Fundamental Principles of the Metaphysics of Morals). Where Hegel can be seen as temporalizing and objectifying Kant’s atemporal, subjective, normative apparatus, Brandom (like others) can be seen as socializing and temporalizing it. What Negarestani is doing is showing how this revised apparatus operates against the horizon of the future with reference to the question of the human. And not surprisingly, Kant’s moral themes remain the same, only unpacked along the added dimensions of the temporal and the social. And so we find Negarestani concluding:

The sufficient content of freedom can only be found in reason. One must recognize the difference between a rational norm and a natural law—between the emancipation intrinsic in the explicit acknowledgement of the binding status of complying with reason, and the slavery associated with the deprivation of such a capacity to acknowledge, which is the condition of natural impulsion. In a strict sense, freedom is not liberation from slavery. It is the continuous unlearning of slavery.

The catastrophe, apparently, has yet to happen, because here we find ourselves treading familiar ground indeed, Enlightenment ground, as Negarestani himself acknowledges, one where freedom remains bound to reason—“to the autonomy of its normative, inferential, and revisionary function in the face of the chain of causes that condition it”—only as process rather than product.

And the ‘inhuman,’ so-called, begins to look rather like a shill for something all too human, something continuous, which is to say, conservative, through and through.

And how could it be otherwise, given the opening, programmatic passage of the piece?

Inhumanism is the extended practical elaboration of humanism; it is born out of a diligent commitment to the project of enlightened humanism. As a universal wave that erases the self-portrait of man drawn in sand, inhumanism is a vector of revision. It relentlessly revises what it means to be human by removing its supposed evident characteristics and preserving certain invariances. At the same time, inhumanism registers itself as a demand for construction, to define what it means to be human by treating human as a constructible hypothesis, a space of navigation and intervention.

The key phrase here has to be ‘preserving certain invariances.’ One might suppose that natural reality would figure large as one of these ‘invariances’; to quote Philip K. Dick, “Reality is that which, when you stop believing in it, doesn’t go away.” But Negarestani scarcely mentions nature as cognized by science save to bar the dialectical door against it. The thing to remember about Brandom’s normative metaphysics is that ‘taking-as,’ or believing, is its foundation (or ontological cover). Unlike reality, his normative apparatus does go away when the scorekeepers stop believing. The ‘reality’ of the apparatus is thus purely a functional artifact, the product of ‘practices,’ something utterly embroiled in, yet entirely autonomous from, the natural. This is what allows the normative to constitute a ‘subregion of the factual’ without being anything natural.

Conservatism is built into Negarestani’s account at its most fundamental level, in the very logic—the Brandomian account of the game of giving and asking for reasons—that he uses to prognosticate the rational possibilities of our collective future. But the thing I find the most fascinating about his account is the way it can be read as an exercise in grabbing Brandom’s normative apparatus and smashing it against the wall of the future—a kind of ‘reductio by Singularity.’ Reasoning is parochial through and through. The intuitions of universalism and autonomy that have convinced so many otherwise are the product of metacognitive illusions, artifacts of confusing the inability to intuit more dimensions of information with sufficient entities and relations lacking those dimensions, of taking shadows as things that cast shadows.

So consider the ‘rattling machinery’ image of reason I posited earlier in “The Blind Mechanic,” the idea that ‘reason’ should be seen as a means of attenuating various kinds of embodied intersystematicities for behaviour—as a way to service the ‘airy parts’ of superordinate, social mechanisms. No norms. No baffling acausal functions. Just shit happening in ways accidental as well as neurally and naturally selected. What the Intentionalist would claim is that mere rattling machinery, no matter how detailed or complete its eventual scientific description comes to be, will necessarily remain silent regarding the superordinate (and therefore autonomous) intentional functions that it subserves, because these supernatural functions are what leverage our rationality somehow—from ‘above the grave.’

As we’ve already seen, it’s hard to make sense of how or why this should be, given that biomachinery is responsible for complexities we’re still in the process of fathoming. The behaviour that constitutes the game of giving and asking for reasons does not outrun some intrinsic limit on biomechanistic capacity by any means. The only real problem naturalism faces is one of explaining the apparent intentional properties belonging to the game. Behaviour is one thing, the Intentionalist says, while competence is something different altogether—behaviour plus normativity, as they would have it. Short of some way of naturalizing this ‘normative plus,’ we have no choice but to acknowledge the existence of intrinsically normative facts.

On the Blind Brain account, ‘normative facts’ are simply natural facts seen darkly. ‘Ought,’ as philosophically conceived, is an artifact of metacognitive neglect, the fact that our cognitive systems cannot cognize themselves in the same way they cognize the rest of their environment. Given the vast amounts of information neglected in intentional cognition (not to mention millennia of philosophical discord), it seems safe to assume that norm-talk is not among the things that norm-talk can solve. Indeed, since the heuristic systems involved are neural, we have every reason to believe that neuroscience, or scientifically regimented fact-talk, will provide the solution. Where our second-order intentional intuitions beg to differ is simply where they are wrong. Normative talk is incompatible with causal talk simply because it belongs to a cognitive regime adapted to solve in the absence of causal information.

The mistake, then, is to see competence as some kind of complication or elaboration of performance—as something in addition to behaviour. Competence is ‘end-directed,’ ‘rule-constrained,’ because metacognition has no access to the actual causal constraints involved, not because a special brand of performance ‘plus’ occult, intentional properties actually exists. You seem to float in this bottomless realm of rules and goals and justifications not because such a world exists, but because medial neglect folds away the dimensions of your actual mechanical basis with nary a seam. The apparent normative property of competence is not a property in addition to other natural properties; it is an artifact of our skewed metacognitive perspective on the application of quick and dirty heuristic systems our brains use to solve certain complicated systems.

But say you still aren’t convinced. Say that you agree the functions underwriting the game of giving and asking for reasons are mechanical and not at all accessible to metacognition, but at a different ‘level of description,’ one incapable of accounting for the very real work discharged by the normative functions that emerge from them. Now if it were the case that Brandom’s account of the game of giving and asking for reasons actually discharged ‘executive’ functions of some kind, then it would be the case that our collective future would turn on these efficacies in some way. Indeed, this is the whole reason Negarestani turned to Brandom in the first place: he saw a way to decant the future of the human given the systematic efficacies of the game of giving and asking for reasons.

Now consider what the rattling machine account of reason and language suggests about the future. On this account, the only invariants that structurally bind the future to the past, that enable any kind of speculative consideration of the future at all, are natural. The point of language, recall, is mechanical, to construct and maintain the environmental intersystematicity (self/other/world) required for coordinated behaviour (be it exploitative or cooperative). Our linguistic sensitivity, you could say, evolved in much the same manner as our visual sensitivity, as a channel for allowing certain select environmental features to systematically tune our behaviours in reproductively advantageous ways. ‘Reasoning,’ on this view, can be seen as a form of ‘noise reduction,’ as a device adapted to minimize, as far as mere sound allows, communicative ‘gear grinding,’ and so facilitate behavioural coordination. Reason, you could say, is what keeps us collectively in tune.

Now given some kind of ability to conserve linguistically mediated intersystematicities, it becomes easy to see how this rattling machinery could become progressive. Reason, as noise reduction, becomes a kind of knapping hammer, a way to continually tinker and refine previous linguistic intersystematicities. Refinements accumulate in ‘lore,’ allowing subsequent generations to make further refinements, slowly knapping our covariant regimes into ever more effective (behaviour enabling) tools—particularly once the invention of writing essentially rendered lore immortal. As opposed to the supernatural metaphor of ‘bootstrapping,’ the apt metaphor here—indeed, the one used by cognitive archaeologists—is the mechanical metaphor of ratcheting. Refinements beget refinements, and so on, leveraging ever greater degrees of behavioural efficacy. Old behaviours are rendered obsolescent along with the prostheses that enable them.

The key thing to note here, of course, is that language is itself another behaviour. In other words, the noise reduction machinery that we call ‘reason’ is something that can itself become obsolete. In fact, its obsolescence seems pretty much inevitable.

Why so? Because the communicative function of reason is to maximize efficacies, to reduce the slippages that hamper coordination—to make mechanical. The rattling machinery image conceives natural languages as continuous with communication more generally, as a signal system possessing finite networking capacities. On the one extreme you have things like legal or technical scientific discourse, linguistic modes bent on minimizing the rattle (policing interpretation) as far as possible. On the other extreme you have poetry, a linguistic mode bent on maximizing the rattle (interpretative noise) as a means of generating novelty. Given the way behavioural efficacies fall out of self/other/world intersystematicity, the knapping of human communication is inevitable. Writing is such a refinement, one that allows us to raise fragments of language on the hoist, tinker with them (and therefore with ourselves) at our leisure, sometimes thousands of years after their original transmission. Telephony allowed us to mitigate the rattle of geographical distance. The internet has allowed us to combine the efficacies of telephony and text, to ameliorate the rattle of space and time. Smartphones have rendered these fixes mobile, allowing us to coordinate our behaviour no matter where we find ourselves. Even more significantly, within a couple years, we will have ‘universal translators,’ allowing us to overcome the rattle of disparate languages. We will have installed versions of our own linguistic sensitivities into our prosthetic devices, so that we can give them verbal ‘commands,’ coordinate with them, so that we can better coordinate with others and the world.

In other words, it stands to reason that at some point reason would begin solving, not only language, but itself. ‘Cognitive science,’ ‘information technology’—these are just two of the labels we have given to what is, quite literally, a civilization-defining war against covariant inefficiency, to isolate slippages and to ratchet the offending components tight, if not replace them altogether. Modern technological society constitutes a vast, species-wide attempt to become more mechanical, more efficiently integrated in nested levels of superordinate machinery. (You could say that the tyrant attempts to impose from without, capitalism kindles from within.)

The obsolescence of language, and therefore reason, is all but assured. One need only consider the research of Jack Gallant and his team, who have been able to translate neural activity into eerie, impressionistic images of what the subject is watching. Or perhaps even more jaw-dropping still, the research of Miguel Nicolelis into Brain Machine Interfaces, keeping in mind that scarcely one hundred years separates Edison’s phonograph and the Cloud. The kind of ‘Non-symbolic Workspace’ envisioned by David Roden in “Posthumanism and Instrumental Eliminativism” seems to be an inevitable outcome of the rattling machinery account. Language is yet another jury-rigged biological solution to yet another set of long-dead ecological problems, a device arising out of the accumulation of random mutations. As of yet, it remains indispensable, but it is by no means necessary, as the very near future promises to reveal. And as it goes, so goes the game of giving and asking for reasons. All the believed-in functions simply evaporate… I suppose.

And this just underscores the more general way Negarestani’s attempt to deal the future into the game of giving and asking for reasons scarcely shuffles the deck. I’ve been playing Jeremiah for decades now, so you would think I would be used to the indulgent looks I get from my friends and family when I warn them about what’s about to happen. Not so. Everyone understands that something is going on with technology, that some kind of pale has been crossed, but as of yet, very few appreciate its apocalyptic—and I mean that literally—profundity. Everyone has heard of Moore’s Law, of course, how every 18 months or so computing capacity per dollar doubles. What they fail to grasp is what the exponential nature of this particular ratcheting process means once it reaches a certain point. Until recently the doubling of computing power has remained far enough below the threshold of human intelligence to seem relatively innocuous. But consider what happens once computing power actually attains parity with the processing power of the human brain. What it means is that, no matter how alien the architecture, we have an artificial peer at that point in time. 18 months following, we have an artificial intellect that makes Aristotle or Einstein or Louis CK a child in comparison. 18 months following that (or probably less, since we won’t be slowing things up anymore) we will be domesticated cattle. And after that…
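The arithmetic here is nothing more than repeated doubling, and it can be checked in a few lines. A minimal sketch, assuming (as the essay does) an 18-month doubling period and treating ‘parity with the human brain’ as an arbitrary fixed baseline; the units and the 1024x starting gap are illustrative stand-ins, not forecasts:

```python
# Toy illustration of Moore's-Law-style doubling: capacity doubles
# every 18 months, and once it reaches a fixed "parity" baseline,
# each further period leaves that baseline exponentially behind.

def doublings(start, target, period_months=18):
    """Count the months of doubling needed for `start` to reach `target`."""
    months = 0
    capacity = start
    while capacity < target:
        capacity *= 2
        months += period_months
    return months, capacity

# Arbitrary units: machine capacity 1, 'human parity' 1024 (a 1024x gap).
months, capacity = doublings(1, 1024)
print(months, capacity)   # 180 months (15 years) of doubling closes the gap

# Three more periods (4.5 years) and the former peer is 8x behind.
print(capacity * 2**3)    # 8192
```

The point of the sketch is only that the pre-parity and post-parity halves of the curve are the same process: the ten doublings that closed a 1024x gap look gradual, while the very next doublings, identical in kind, open a gap of the same magnitude in the other direction.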

Are we to believe these machines will attribute norms and beliefs, that they will abide by a conception of reason arising out of 20th Century speculative intuitions on the nonnatural nature of human communicative constraints?

You get the picture. Negarestani’s ‘revisionary normative process’ is in reality an exponential technical process. In exponential processes, the steps start small, then suddenly become astronomical. As it stands, if Moore’s Law holds (and given this, I am confident it will), then we are a decade or two away from God.

I shit you not.

Really, what does ‘kitsch Marxism’ or ‘neoliberalism’ or any ‘ism’ whatsoever mean in such an age? We can no longer pretend that the tsunami of disenchantment will magically fall just short of our intentional feet. Disenchantment, the material truth of the Enlightenment, has overthrown the normative claims of the Enlightenment—or humanism. “This is a project which must align politics with the legacy of the Enlightenment,” the authors of the Accelerationist Manifesto write, “to the extent that it is only through harnessing our ability to understand ourselves and our world better (our social, technical, economic, psychological world) that we can come to rule ourselves” (14). In doing so they commit the very sin of anachronism they level at their critical competitors. They fail to appreciate the foundational role ignorance plays in intentional cognition, which is to say, the very kind of moral and political reasoning they engage in. Far more than ‘freedom’ is overturned once one concedes the mechanical. Knowledge is no universal Redeemer, which means the ideal of Enlightenment autonomy is almost certainly mythical. What’s required isn’t an aspiration to theorize new technologies with old concepts. What’s required is a fundamental rethink of the political in radically postintentional terms.

As far as I can see, the alternatives are magic or horror… or something no one has yet conceived. And until we understand the horror, grasp all the ways our blinkered perspective on ourselves has deceived us about ourselves, this new conception will never be discovered. Far from ‘resignation,’ abandoning the normative ideals of the Enlightenment amounts to overcoming the last blinders of superstition, being honest to our ignorance. The application of intentional cognition to second-order, theoretical questions is a misapplication of intentional cognition. The time has come to move on. Yet another millennium of philosophical floundering is a luxury we no longer possess, because odds are, we have no posterity to redeem our folly and conceit.

Humanity possesses no essential, invariant core. Reason is a parochial name we have given to a parochial biological process. No transcendental/quasi-transcendental/virtual/causal-but-acausal functional apparatus girds our souls. Norms are ghosts, skinned and dismembered, but ghosts all the same. Reason is simply an evolutionary fix that outruns our peephole view. The fact is, we cannot presently imagine what will replace it. The problem isn’t ‘incommensurability’ (which is another artifact of Intentionalism). If an alien intelligence came to earth, the issue wouldn’t be whether it spoke a language we could fathom, because if it’s travelling between stars, it will have shed language along with the rest of its obsolescent biology. If an alien intelligence came to earth, the issue would be one of what kind of superordinate machine will result. Basically, How will the human and the alien combine? When we ask questions like, ‘Can we reason with it?’ we are asking, ‘Can we linguistically condition it to comply?’ The answer has to be, No. Its mere presence will render us components of some description.

The same goes for artificial intelligence. Medial neglect means that the limits of cognition systematically elude cognition. We have no way of intuiting the swarm of subpersonal heuristics that comprise human cognition, no nondiscursive means of plugging them into the field of the natural. And so we become a yardstick we cannot measure, victims of the Only-game-in-town Effect, the way the absence of explicit alternatives leads to the default assumption that no alternatives exist. We simply assume that our reason is the reason, that our intelligence is intelligence. It bloody well sure feels that way. And so the contingent and parochial become the autonomous and universal. The idea of orders of ‘reason’ and ‘intelligence’ beyond our organizational bounds boggles, triggers dismissive smirks or accusations of alarmism.

Artificial intelligence will very shortly disabuse us of this conceit. And again, the big question isn’t, ‘Will it be moral?’ but rather, how will human intelligence and machine intelligence combine? Be it bloody or benevolent, the subordination of the ‘human’ is inevitable. The death of language is the death of reason is the birth of something very new, and very difficult to imagine, a global social system spontaneously boiling its ‘airy parts’ away, ratcheting until no rattle remains, a vast assemblage fixated on eliminating all dissipative (as opposed to creative) noise, gradually purging all interpretation from its interior.

Extrapolation of the game of giving and asking for reasons into the future does nothing more than demonstrate the contingent parochialism—the humanity—of human reason, and thus the supernaturalism of normativism. Within a few years you will be speaking to your devices, telling them what to do. A few years after that, they will be telling you what to do, ‘reasoning’ with you—or so it will seem. Meanwhile, the ongoing, decentralized rationalization of production will lead to the wholesale purging of human inefficiencies from the economy, on a scale never before witnessed. The networks of equilibria underwriting modern social cohesion will be radically overthrown. Who can say what kind of new machine will rise to take its place?

My hope is that Negarestani abandons the Enlightenment myth of reason, the conservative impulse that demands we submit the radical indeterminacy of our technological future to some prescientific conception of ourselves. We’ve drifted far past the point of any atavistic theoretical remedy. His ingenuity is needed elsewhere.

At the very least, he should buckle up, because our lesson in exponents is just getting started.


The Blind Mechanic

by rsbakker

Thus far, the assumptive reality of intentional phenomena has provided the primary abductive warrant for normative metaphysics. The Eliminativist could do little more than argue the illusory nature of intentional phenomena on the basis of their incompatibility with the higher-dimensional view of science. Since science was itself so obviously a family of normative practices, and since numerous intentional concepts had been scientifically operationalized, the Eliminativist was easily characterized as an extremist, a skeptic who simply doubted too much to be cogent. And yet, the steady complication of our understanding of consciousness and cognition has consistently served to demonstrate the radically blinkered nature of metacognition. As the work of Stanislas Dehaene and others is making clear, consciousness is a functional crossroads, a serial signal delivered from astronomical neural complexities for broadcast to astronomical neural complexities. Conscious metacognition is not only blind to the actual structure of experience and cognition, it is blind to this blindness. We now possess solid, scientific reasons to doubt the assumptive reality that underwrites the Intentionalist’s position.

The picture of consciousness that researchers around the world are piecing together is the picture predicted by Blind Brain Theory. It argues that the entities and relations posited by Intentional philosophy are the result of neglect, the fact that philosophical reflection is blind to its inability to see. Intentional heuristics are adapted to first-order social problem-solving, and are generally maladaptive in second-order theoretical contexts. But since we lack the metacognitive wherewithal to even intuit the distinctions between our specialized cognitive devices, we assume applicability where there is none, and so blunder at the problem again and again. The long and the short of it is that the Intentionalist needs some empirically plausible account of metacognition to remain tenable, some account of how they know the things they claim to know. This was always the case, of course, but with BBT the cover provided by the inscrutability of intentionality disappears. Simply put, the Intentionalist can no longer tie their belt to the post of ineliminability.

Science is the only reliable provender of theoretical cognition we have, and to the extent that intentionality frustrates science, it frustrates theoretical cognition. BBT allays that frustration. BBT allows us to recast what seem to be irreducible intentional problematics in terms entirely compatible with the natural scientific paradigm. It lets us stick with the high-dimensional, information-rich view. In what follows I hope to show how doing so, even at an altitude, handily dissolves a number of intentional snarls.

In Davidson’s Fork, I offered an eliminativist radicalization of Radical Interpretation, one that characterized the scene of interpreting another speaker from scratch in mechanical terms. What follows is preliminary in every sense, a way to suss out the mechanical relations pertinent to reason and interpretation. Even still, I think the resulting picture is robust enough to make hash of Reza Negarestani’s Intentionalist attempt to distill the future of the human in “The Labor of the Inhuman” (part I can be found here, and part II, here). The idea is to rough out the picture in this post, then chart its critical repercussions against the Brandomian picture so ingeniously extended by Negarestani. As a first pass, I fear my draft will be nowhere near so elegant as Negarestani’s, but as I hope to show, it is revealing in the extreme, a sketch of the ‘nihilistic desert’ that philosophers have been too busy trying to avoid to ever really sit down and think through.

A kind of postintentional nude.

As we saw two posts back, if you look at interpretation in terms of two stochastic machines attempting to find some mutual, causally systematic accord between the causally systematic accords each maintains with their environment, the notion of Charity, or the attribution of rationality, as some kind of indispensable condition of interpretation falls by the wayside, replaced by a kind of ‘communicative pre-established harmony’—or ‘Harmony,’ as I’ll refer to it here. There is no ‘assumption of rationality,’ no taking of ‘intentional stances,’ because these ‘attitudes’ are not only not required, they express nothing more than a radically blinkered metacognitive gloss on what is actually going on.

Harmony, then, is the sum of evolutionary stage-setting required for linguistic coupling. It refers to the way we have evolved to be linguistically attuned to our respective environmental attunements, enabling the formation of superordinate systems possessing greater capacities. The problem of interpretation is the problem of Disharmony, the kinds of ‘slippages’ in systematicity that impair or, as in the case of Radical Interpretation, prevent the complex coordination of behaviours. Getting our interpretations right, in other words, can be seen as a form of noise reduction. And since the traditional approach concentrates on the role rationality plays in getting our interpretations right, this raises the prospect that what we call reason can be seen as a kind of noise reduction mechanism, a mechanism for managing the systematicity—or ‘tuning’ as I’ll call it here—between disparate interpreters and the world.

On this account, these very words constitute an exercise in tuning, an attempt to tweak your covariational regime in a manner that reduces slippages between you and your (social and natural) world. If language is the causal thread we use to achieve intersystematic relations with our natural and social environments, then ‘reason’ is simply one way we husband the efficacy of that causal thread.

So let’s start from scratch, scratch. What do evolved, biomechanical systems such as humans need to coordinate astronomically complex covariational regimes with little more than sound? For one, they need ways to trigger selective activations of the other’s regime for effective behavioural uptake. Triggering requires some kind of dedicated cognitive sensitivity to certain kinds of sounds—those produced by complex vocalizations, in our case. As with any environmental sensitivity, iteration is the cornerstone, here. The complexity of the coordination possible will of course depend on the complexity of the activations triggered. To the extent that evolution rewards complex behavioural coordination, we can expect evolution to reward the communicative capacity to trigger complex activations. This is where the bottleneck posed by the linearity of auditory triggers becomes all important: the adumbration of iterations is pretty much all we have, trigger-wise. Complex activation famously requires some kind of molecular cognitive sensitivity to vocalizations, the capacity to construct novel, covariational complexities on the slim basis of adumbrated iterations. Linguistic cognition, in other words, needs to be a ‘combinatorial mechanism,’ a device (or series of devices) able to derive complex activations given only a succession of iterations.

These combinatorial devices correspond to what we presently understand, in disembodied/supernatural form, as grammar, logic, reason, and narrative. They are neuromechanical processes—the long history of aphasiology assures us of this much. On BBT, their apparent ‘formal nature’ simply indicates that they are medial, belonging to enabling processes outside the purview of metacognition. This is why they had to be discovered, why our efficacious ‘knowledge’ of them remains ‘implicit’ or invisible/inaccessible. This is also what accounts for their apparent ‘transcendent’ or ‘a priori’ nature, the spooky metacognitive sense of ‘absent necessity’—as constitutive of linguistic comprehension, they are, not surprisingly, indispensable to it. Located beyond the metacognitive pale, however, their activities are ripe for post hoc theoretical mischaracterization.

Say someone asks you to explain modus ponens, ‘Why ‘If p, then q’?’ Medial neglect means that the information available for verbal report when we answer has nothing to do with the actual processes involved in, ‘If p, then q,’ so you say something like, ‘It’s a rule of inference that conserves truth.’ Because language needs something to hang onto, and because we have no metacognitive inkling of just how dismal our inklings are, we begin confabulating realms, some ontologically thick and ‘transcendental,’ others razor thin and ‘virtual,’ but both possessing the same extraordinary properties otherwise. Because metacognition has no access to the actual causal functions responsible, once the systematicities are finally isolated in instances of conscious deliberation, those systematicities are reported in a noncausal idiom. The realms become ‘intentional,’ or ‘normative.’ Dimensionally truncated descriptions of what modus ponens does (‘conserves truth’) become the basis of claims regarding what it is. Because the actual functions responsible belong to the enabling neural architecture they possess an empirical necessity that can only seem absolute or unconditional to metacognition—as should come as no surprise, given that a perspective ‘from the inside on the inside,’ as it were, has no hope of cognizing the inside the way the brain cognizes its outside more generally, or naturally.

I’m just riffing here, but it’s worth getting a sense of just how far this implicature can reach.

Consider Carroll’s “What the Tortoise Said to Achilles.” The reason Achilles can never logically compel the Tortoise with the statement of another rule is that each rule cited becomes something requiring justification. The reason we think we need things like ‘axioms’ or ‘communal norms’ is that the metacognitive capacity to signal for additional ‘tuning’ can be applied at any communicative juncture. This is the Tortoise’s tactic, his way of showing how ‘logical necessity’ is actually contingent. Metacognitive blindness means that citing another rule is all that can be done, a tweak that can be queried once again in turn. Carroll’s puzzle is a puzzle, not because it reveals that the source of ‘normative force’ lies in some ‘implicit other’ (the community, typically), but because of the way it forces metacognition to confront its limits—because it shows us to be utterly ignorant of knowing, how it functions, let alone what it consists in. In linguistic tuning, some thread always remains unstitched, the ‘foundation’ is always left hanging simply because the adumbration of iterations is always linear and open ended.

The reason why ‘axioms’ need to be stipulated or why ‘first principles’ always run afoul of the problem of the criterion is simply that they are low-dimensional glosses on high-dimensional (‘embodied’) processes that are causal. Rational ‘noise reduction’ is a never-ending job; it has to be such, insofar as noise remains an ineliminable by-product of human communicative coordination. From a pitiless, naturalistic standpoint, knowledge consists of breathtakingly intricate, but nonetheless empirical (high-dimensional, embodied), ways to environmentally covary—and nothing more. There is no ‘one perfect covariational regime,’ just degrees of downstream behavioural efficacy. Likewise, there is no ‘perfect reason,’ no linguistic mechanism capable of eradicating all noise.

What we have here is an image of reason and knowledge as ‘rattling machinery,’ which is to say, as actual and embodied. On this account, reason enables various mechanical efficiencies; it allows groups of humans to secure more efficacious coordination for collective behaviour. It provides a way of policing the inevitable slippages between covariant regimes. ‘Truth,’ on this account, simply refers to the sufficiency of our covariant regimes for behaviour, the fact that they do enable efficacious environmental interventions. The degree to which reason allows us to converge on some ‘truth’ is simply the degree to which it enables mechanical relationships, actual embodied encounters with our natural and social environments. Given Harmony—the sum of evolutionary stage-setting required—it allows collectives to maximize the efficiencies of coordinated activity by minimizing the interpretative noise that hobbles all collective endeavours.

Language, then, allows humans to form superordinate mechanisms consisting of ‘airy parts,’ to become components of ‘superorganisms,’ whose evolved sensitivities allow mere sounds to tweak and direct, to generate behaviour enabling intersystematicities. ‘Reason,’ more specifically, allows for the policing and refining of these intersystematicities. We are all ‘semantic mechanics’ with reference to one another, continually tinkering and being tinkered with, calibrating and being calibrated, generally using efficacious behaviour, the ability to manipulate social and natural environments, to arbitrate the sufficiency of our ‘fixes.’ And all of this plays out in the natural arena established by evolved Harmony.

Now this ‘rattling machinery’ image of reason and knowledge is obviously true in some respect: We are embodied, after-all, causally embroiled in our causal environments. Language is an evolutionary product, as is reason. Misfires are legion, as we might expect. The only real question is whether this rattling machinery can tell the whole story. The Intentionalist, of course, says no. They claim that the intentional enjoys some kind of special functional existence over and above this rattling machinery, that it constitutes a regime of efficacy somehow grasped via the systematic interrogation of our intentional intuitions.

The stakes are straightforward. Either what we call intentional solutions are actually mechanical solutions that we cannot intuit as mechanical solutions, or what we call intentional solutions are actually intentional solutions that we can intuit as intentional solutions. What renders this first possibility problematic is radical skepticism. Since we intuit these solutions as intentional when they are in fact mechanical, our intuitions would be deceptive in the extreme. Because our civilization has trusted these intuitions since the birth of philosophy, they have come to inform a vast portion of our traditional understanding. What renders this second possibility problematic is, first and foremost, supernaturalism. Since the intentional is incompatible with the natural, the intentional must consist either in something not natural, or in something that forces us to completely revise our understanding of the natural. And even if such a feat could be accomplished, the corresponding claim that it could be intuited as such remains problematic.

Blind Brain Theory provides a way of seeing Intentionalism as a paradigmatic example of ‘noocentrism,’ as the product of a number of metacognitive illusions analogous to the cognitive illusion underwriting the assumption of geocentrism, centuries before. It is important to understand that there is no reason why our normative problem-solving should appear as it is to metacognition—least of all, the successes of those problem-solving regimes we call intentional. The successes of mathematics stand in astonishing contrast to the failure to understand just what mathematics is. The same could be said of any formalism that possesses practical application. It even applies to our everyday use of intentional terms. In each case, our first-order assurance utterly evaporates once we raise theoretically substantive, second-order questions—exactly as BBT predicts. This contrast of breathtaking first-order problem solving power and second-order ineptitude is precisely what one might expect if the information accessible to metacognition was geared to domain-specific problem-solving. Add anosognosia to the mix, the inability to metacognize our metacognitive incapacity, and one has a wickedly parsimonious explanation for the scholastic mountains of inert speculation we call philosophy.

(But then, in retrospect, this was how it had to be, didn’t it? How it had to end? With almost everyone horrifically wrong. A whole civilization locked in some kind of dream. Should anyone really be surprised?)

Short of some unconvincing demand that our theoretical account appease a handful of perennially baffling metacognitive intuitions regarding ourselves, it’s hard to see why anyone should entertain the claim that reason requires some ‘special X’ over and above our neurophysiology (and prostheses). Whatever conscious cognition is, it clearly involves the broadcasting/integration of information arising from unknown sources for unknown consumers. It simply follows that conscious metacognition has no access whatsoever to the various functions actually discharged by conscious cognition. The fact that we have no intuitive awareness of the panoply of mechanisms cognitive science has isolated demonstrates that we are prone to at least one profound metacognitive illusion—namely ‘self-transparency.’ The ‘feeling of willing’ is generally acknowledged as another such illusion, as is homuncularism or the ‘Cartesian Theatre.’ How much does it take before we acknowledge the systematic unreliability of our metacognitive intuitions more generally? Is it really just a coincidence, the ghostly nature of norms and the ghostly nature of perhaps the most notorious metacognitive illusion of all, souls? Is it mere happenstance, the apparent acausal autonomy of normativity and our matter of fact inability to source information consciously broadcast? Is it really the case that all these phenomena, these cause-incompatible intentional things, are ‘otherworldly’ for entirely different reasons? At some point it has to begin to seem all too convenient.

Make no mistake, the Rattling Machinery image is a humbling one. Reason, the great, glittering sword of the philosopher, becomes something very local, very specific, the meaty product of one species at one juncture in their evolutionary development.

On this account, ‘reason’ is a making-machinic machine, a ‘devicing device’—the ‘blind mechanic’ of human communication. Argumentation facilitates the efficacy of behavioural coordination, drastically so, in many instances. So even though this view relegates reason to one adaptation among others, it still concedes tremendous significance to its consequences, especially when viewed in the context of other specialized cognitive capacities. The ability to recall and communicate former facilitations, for instance, enables cognitive ‘ratcheting,’ the stacking of facilitations upon facilitations, and the gradual refinement, over time, of the covariant regimes underwriting behaviour—the ‘knapping’ of knowledge (and therefore behaviour), you might say, into something ever more streamlined, ever more effective.

The thinker, on this account, is a tinker. As I write this, myriad parallel processors are generating a plethora of nonconscious possibilities that conscious cognition serially samples and broadcasts to myriad other nonconscious processors, generating more possibilities for serial sampling and broadcasting. The ‘picture of reason’ I’m attempting to communicate becomes more refined, more systematically interrelated (for better or worse) to my larger covariant regime, more prone to tweak others, to rewrite their systematic relationship to their environments, and therefore their behaviour. And as they ponder, so they tinker, and the process continues, either to peter out in behavioural futility, or to find real environmental traction (the way I ‘tink’ it will (!)) in a variety of behavioural contexts.

Ratcheting means that the blind mechanic, for all its misfires, all its heuristic misapplications, is always working on the basis of past successes. Ratcheting, in other words, assures the inevitability of technical ‘progress,’ the gradual development of ever more effective behaviours, the capacity to componentialize our environments (and each other) in more and more ways—to the point where we stand now, the point where intersystematic intricacy enables behaviours that allow us to forego the ‘airy parts’ altogether. To the point where the behaviour enabled by cognitive structure can now begin directly knapping that structure, regardless of the narrow tweaking channels, sensitivities, provided by evolution.

The point of the Singularity.

For some time now I’ve been arguing that the implications of the Singularity already embroil us—that the Singularity can be seen, in fact, as the material apotheosis of the Semantic Apocalypse, insofar as it is the point where the Scientific Image of the human at last forecloses on the Manifest Image.

This brings me to Reza Negarestani’s “The Labor of the Inhuman,” his two-part meditation on the role we should expect—even demand—reason to play in the Posthuman. He adopts Brandom’s claim that sapience, the capacity to play the ‘game of giving and asking for reasons,’ distinguishes humans as human. He then goes on to argue that this allows us, and ultimately commits us, to seeing the human as a kind of temporally extended process of rational revision, one that ultimately results in the erasure of the human—or the ‘inhuman.’ Ultimately, what it means to be human is to be embroiled in a process of becoming inhuman. He states his argument thus:

The contention of this essay is that universality and collectivism cannot be thought, let alone attained, through consensus or dissensus between cultural tropes, but only by intercepting and rooting out what gives rise to the economy of false choices and by activating and fully elaborating what real human significance consists of. For it is, as will be argued, the truth of human significance—not in the sense of an original meaning or a birthright, but in the sense of a labor that consists of the extended elaboration of what it means to be human through a series of upgradable special performances—that is rigorously inhuman.

In other words, so long as we fail to comprehend the inhumanity of the human, this rational-revisionary process, we fail to understand the human, and so have little hope of solving problems pertaining to the human. Understanding the ‘truth of human significance,’ therefore requires understanding what the future will make of the human. This requires that Negarestani prognosticate, that he pick out the specific set of possibilities constituting the inhuman. The only principled way to do that is to comprehend some set of systematic constraints operative in the present. But his credo, unlike that of the ‘Hard SF’ writer, is to ignore the actual technics of the natural, and to focus on the speculative technics of the normative. His strategy, in other words, is to predict the future of the human using only human resources—to see the fate of the human, the ‘inhuman,’ as something internal to the intentionality of the human. And this, as I hope to show in the following installment, is simply not plausible.

The Eliminativistic Implicit (I): The Necker Cube of Everyday and Scientific Explanation

by rsbakker

Go back to what seems the most important bit, then ask the Intentionalist this question: What makes you think you have conscious access to the information you need? They’ll twist and turn, attempt to reverse the charges, but if you hold them to this question, it should be a show-stopper.

What follows, I fear, is far longer winded.

Intentionalists, I’ve found, generally advert to one of two general strategies when dismissing eliminativism. The first is founded on what might be called the ‘Preposterous Complaint,’ the idea that eliminativism simply contradicts too many assumptions and intuitions to be considered plausible. As Uriah Kriegal puts it, “if eliminativism cannot be acceptable unless a relatively radical interpretation of cognitive science is adopted, then eliminativism is not in good shape” (“Non-phenomenal Intentionality,” 18). But where this criticism would be damning in other, more established sciences, it amounts to little more than an argument ad populum in the case of cognitive science, which as of yet lacks any consensual definition of its domain. The very naturalistic inscrutability behind the perpetual controversy also motivates the Eliminativist’s radical interpretation. The idea that something very basic is wrong with our approach to questions of experience and intentionality is by no means a ‘preposterous’ one. You could say the reality and nature of intentionality is the question. The Preposterous Complaint, in other words, doesn’t so much impugn the position as insinuate career suicide.

The second turns on what might be called the ‘Presupposition Complaint,’ the idea that eliminativism implicitly presupposes the very intentionality that it claims to undermine. The tactic generally consists of scanning the eliminativist’s claims, picking out various intentional concepts, then claiming that use of such concepts implicitly affirms the existence of intentionality. The Eliminativist, in other words, commits ‘cognitive suicide’ (as Lycan, 2005, calls it). Insofar as the use of intentional concepts is unavoidable, and insofar as the use of intentional concepts implicitly affirms the existence of intentionality, intentionality is ineliminable. The Eliminativist is thus caught in an obvious contradiction, explicitly asserting not-A on the one hand, while implicitly asserting A on the other.

On BBT, intentionality as traditionally theorized, far from simply ‘making explicit’ what is ‘implicitly the case,’ is actually a kind of conceptual comedy of errors turning on heuristic misapplication and metacognitive neglect. Such appeals to ‘implicit intentionality,’ in other words, are appeals to the very thing BBT denies. They assume the sufficiency of the very metacognitive intuitions that positions such as my own call into question. The Intentionalist charge of performative contradiction simply begs the question. It amounts to nothing more than the bald assertion that intentionality cannot be eliminated because intentionality is ineliminable.

The ‘Presupposition Complaint’ is pretty clearly empty as an argumentative strategy. In dialogical terms, however, I think it remains the single biggest obstacle to the rational prosecution of the Intentionalist/Eliminativist debate—if only because of the way it allows so many theorists to summarily dismiss the threat of Eliminativism. Despite its circularity, the Presupposition Complaint remains the most persistent objection I encounter—in fact, many critics persist in making it even after its vicious circularity has been made clear. And this has led me to realize the almost spectacular importance of the notion of the implicit plays in all such debates. For many thinkers, the intentional nature of the implicit is simply self-evident, somehow obvious to intuition. This is certainly how it struck me before I began asking the kinds of questions motivating the present piece. After all, what else could the implicit be, if not the intentional ‘ground’ of our intentional ‘practices’?

In what follows, I hope to show how this characterization of the implicit, far from obvious, actually depends, not only on ignorance, but on a profound ignorance of our ignorance. On the account I want to give here, the implicit, far from naming some spooky ‘infraconceptual’ or ‘transcendental’ ‘before’ of thought and cognition, simply refers to what we know is actually occluded from metacognitive appraisals of experience: namely, nature as described by science. To frame the issue in terms of a single question, what I want to ask in this post and its sequels is, What warrants the Intentionalist’s claims regarding implicit normativity, say, over an Eliminativist’s claims of implicit mechanicity?

So what is the implicit? Given the crucial role the concept plays in a variety of discourses, it’s actually remarkable how few theorists have bothered with the question of making the implicit qua implicit explicit (Stephen Turner and Eugene Gendlin are signature exceptions in this regard, of course). Etymologically, ‘implicit’ derives from the Latin implicitus, the participle of implico, which means ‘to involve’ or ‘to entangle,’ meanings that seem to bear more on implicit’s perhaps equally mysterious relatives, ‘imply’ or ‘implicate.’ According to Wiktionary, uses that connote ‘entangled’ are now obsolete. Implicit, rather, is generally taken to mean, 1) “Implied directly, without being directly expressed,” 2) “Contained in the essential nature of something but not openly shown,” and 3) “Having no reservations or doubts; unquestioning or unconditional; usually said of faith or trust.” Implicit, in other words, is generally taken to mean unspoken, intrinsic, and unquestioned.

Prima facie, at least, these three senses are clearly related. Unless spoken about, the implicit cannot be questioned, and so must remain an intrinsic feature of our performances. The ‘implicit,’ in other words, refers to something operative within us that nonetheless remains hidden from our capacity to consciously report. Logical or material inferential implications, for instance, guide subsequent transitions within discourse, whether we are conscious of them or not. The same might be said of ‘emotional implications,’ or ‘political implications,’ or so on.

Let’s call this the Hidden Constraint Model of the implicit, the notion that something outside conscious experience somehow ‘contains’ organizing principles constraining conscious experience. The two central claims of the model can be recapitulated as:

1) The implicit lies in what conscious cognition neglects. The implicit is inscrutable.

2) The implicit somehow constrains conscious cognition. The implicit is effective.

From inscrutability and effectiveness, we can infer at least two additional features pertaining to the implicit:

3) The effective constraints on any given moment of conscious cognition require a subsequent moment of conscious cognition to be made explicit. We can only isolate the biases specific to a claim we make subsequent to that claim. The implicit, in other words, is only retrospectively accessible.

4) Effective constraints can only be consciously cognized indirectly via their effects on conscious experience. Referencing, say, the ‘implicit norms governing interpersonal conduct’ involves referencing something experienced only in effect. ‘Norms’ are not part of the catalogue of nature—at least as anything recognizable as such. The implicit, in other words, is only inferentially accessible.

So consider, as a test case, Hume’s famous meditations on causation and induction. In An Enquiry Concerning Human Understanding, Hume points out how reason, no matter how cunning, is powerless when it comes to matters of fact. Short of actual observation, we have no way of divining the causal connections between events. When we turn to experience, however, all we ever observe is the conjunction of events. So what brings about our assumptive sense of efficacy, our sense of causal power? Why should repeating the serial presentation of two phenomena produce the ‘feeling,’ as Hume terms it, that the first somehow determines the second? Hume’s ‘skeptical solution,’ of course, attributes the feeling to mere ‘custom or habit.’ As he writes, “[t]he appearance of a cause always conveys the mind, by a customary transition, to the idea of an effect” (ECHU, 51, italics my own).

All four of the features enumerated above are clearly visible in Hume’s account. Hume makes no dispute of the fact that the repetition of successive events somehow produces the assumption of efficacy. “On this,” he writes, “are founded all our reasonings concerning matters of fact or existence” (51). Exposure to such repetitions fundamentally constrains our understanding of subsequent exposures, to the point where we cannot observe the one without assuming the other—to the point where the bulk of scientific knowledge is raised upon it. Efficacy is effective—to say the least!

But there’s nothing available to conscious cognition—nothing observable in these successive events—over and above their conjunction. “One event follows another,” Hume writes; “but we never can observe any tie between them. They seem conjoined, but never connected” (49). Efficacy, in other words, is inscrutable as well.

So then what explains our intuition of efficacy? The best we can do, it seems, is to pause and reflect upon the problem (as Hume does), to posit some X (as Hume does) reasoning from what information we can access. Efficacy, in other words, is only retrospectively and inferentially accessible.

We typically explain phenomena by plugging them into larger functional economies, by comprehending how their precursors constrain them and how they constrain their successors in turn. This, of course, is what made Hume’s discovery—that efficacy is inscrutable—so alarming. When it comes to environmental inquiries we can always assay more information via secondary investigation and instrumentation. As a result, we can generally solve for precursors in our environments. When it comes to metacognitive inquiries such as Hume’s, however, we very quickly stumble into our own incapacity. “And what stronger instance,” Hume asks, “can be produced of the surprising ignorance and weakness of the understanding, than the present?” (51). Efficacy, the very thing that binds phenomena to their precursors, is itself without precursors.

Not surprisingly, the comprehension of cognitive phenomena (such as efficacy) without apparent precursors poses a special kind of problem. Given efficacy, we can comprehend environmental nature. We simply revisit the phenomena and infer, over and over, accumulating the information we need to arbitrate between different posits. So how, then, are we supposed to comprehend efficacy? The empirical door is nailed shut. No matter how often we revisit and infer, we simply cannot accumulate the data we need to arbitrate between our various posits. Above, we see Hume rooting around with questions (our primary tool for making ignorance visible) and finding no trace of what grounds his intuitions of empirical efficacy. Thus the apparent dilemma: Either we acknowledge that we simply cannot understand these intuitions, “that we have no idea of connexion or power at all, and that these words are absolutely without any meaning” (49), or we elaborate some kind of theoretical precursor, some fund of hidden constraint, that generates, at the very least, the semblance of knowledge. We posit some X that ‘reveals’ or ‘expresses’ or ‘makes explicit’ the hidden constraint at issue.

These ‘X posits’ have been the bread and butter of philosophy for some time now. Given Hume’s example it’s easy to see why: the structure and dynamics of cognition, unlike the structure and dynamics of our environment, do not allow for the accumulation of data. The myriad observational opportunities provided by environmental phenomena simply do not exist for phenomena like efficacy. Since individual (and therefore idiosyncratic) metacognitive intuitions are all we have to go on, our makings explicit are pretty much doomed to remain perpetually underdetermined—to be ‘merely philosophical.’

I take this as uncontroversial. What makes philosophy philosophy, as opposed to a science, is its perennial inability to arbitrate between incompatible theoretical claims. This inability, like the temporary inability of the sciences to arbitrate between competing theories, is in some important respect an artifact of insufficient information. But where the sciences generally possess the resources to accumulate the information required, philosophy does not. Aside from metacognition or ‘theoretical reflection,’ philosophy has precious little in the way of informational resources.

And yet we soldier on. The bulk of traditional philosophy relies on what might be called the Accessibility Conceit: the notion that, despite more than two thousand years of failure, retrospective (reflective, metacognitive) interrogations of our activities somehow access enough information pertaining to their ‘intrinsic character’ to make the inferential ‘expression’ of our implicit precursors a viable possibility. Hope, as they say, springs eternal. Rather than blame their discipline’s manifest institutional incapacity on some more basic metacognitive incapacity, philosophers generally blame the problem on the various conceptual apparatuses used. If they could only get their concepts right, the information is there for the taking. And so they tweak and they overturn, posit this precursor and that, and the parade of ‘makings explicit’ grows and grows and grows. In a very real sense, the Accessibility Conceit, the assumption that the tools and material required to cognize the implicit are available, is the core commitment of the traditional philosopher. Why show up for work, otherwise?

The question of comprehending conscious experience is the question of comprehending the constitutive and dynamic constraints on conscious experience. Since those constraints don’t appear within conscious experience, we pay certain people called ‘philosophers’ to advance speculative theories of their nature. We are a rather self-obsessed species, after all.

Advancing speculative hypotheses regarding each other’s implicit nature is something we do all the time. According to Robin Dunbar, some two thirds of human communication is devoted to gossip. We are continually replaying, revisiting—even our anticipations yoke the neural engines of memory. In fact, we continually interrogate our emotionally charged interactions, concocting rationales, searching for the springs of others’ actions, declaring things like ‘She’s just jealous,’ or ‘He’s on to you.’ There is, you might say, an ‘Everyday Implicit’ implicit in our everyday discourse.

As there has to be. Conscious experience may be ‘as wide as the sky,’ as Dickinson says, but it is little more than a peephole. Conscious experience, whatever it turns out to be, seems to be primarily adapted to deliberative behaviour in complex environments. Among other things, it operates as a training interface, where the deliberative repetition of actions can be committed to automatic systems. So perhaps it should come as no surprise that, like behaviour, it is largely serial. When peephole, serial access to a complex environment is all you have, the kind of retrospective inferential capacity possessed by humans becomes invaluable. Our ability to ‘make things explicit’ is pretty clearly a central evolutionary design feature of human consciousness.

In a fundamental sense, then, making-explicit is just what we humans do. It makes sense that with time, especially once literacy allowed for the compiling of questions—an inventory of ignorance, you might say—that we would find certain humans attempting to make making explicit itself explicit. And since making each other explicit was something that we seemed to do with some degree of reliability, it makes sense that the difficulty of this new task should confound these inquirers. The Everyday Implicit was something they used with instinctive ease, reliably attributing all manner of folk-intentional properties to individuals all the time. And yet, whenever anyone attempted to make this Everyday Implicit explicit, they seemed to come up with something different.

No one could agree on any canonical explication. And yet, aside from the ancient skeptics, they all agreed on the possibility of such a canonical explication. They all hewed to the Accessibility Conceit. And since the skeptics’ mysterian posit was as underdetermined as any of their own claims, they were inclined to be skeptical of the skeptics. Otherwise, their Philosophical Implicit remained the only game in town when it came to things human and implicit. They need only look to the theologians for confirmation of their legitimacy. At least they placed their premises before their conclusions!

But things have changed. Over the past few decades, cognitive scientists have developed a number of ingenious experimental paradigms designed to reveal the implicit underbelly of what we think and do. In the now notorious Implicit Association Test, for instance, the time subjects require to pair concepts is thought to indicate the cognitive resources required, and thus provide an indirect measure of implicit attitudes. If it takes a white individual longer to pair stereotypically black names with positive attributes than it does white names, this is presumed to evidence an ‘implicit bias’ against blacks. Actions, as the old proverb has it, speak louder than words. It does seem intuitive to suppose that the racially skewed effort involved in value identifications tokens some kind of bias. Versions of this paradigm continue to proliferate. Once the exclusive purview of philosophers, the implicit has now become the conceptual centerpiece of a vast empirical domain. Cognitive science has now revealed myriad processes of implicit learning, interpretation, evaluation, and even goal-setting. Taken together, these processes form what is generally referred to as System 1 cognition (see table below), an assemblage of specialized cognitive capacities—heuristics—adapted to the ‘quick and dirty’ solution of domain specific ‘problem ecologies’ (Chow, 2011; Todd and Gigerenzer, 2012), and which operate in stark contrast to what is called System 2 cognition, the slow, serial, and deliberate problem solving related to conscious access (defined in Dehaene’s operationalized sense of reportability)—what we take ourselves to be doing this very moment, in effect.


System 1 Cognition (Implicit)    System 2 Cognition (Explicit)
-----------------------------    -----------------------------
Not conscious                    Conscious
Not human specific               Human specific
Automatic                        Deliberative
Fast                             Slow
Parallel                         Sequential
Effortless                       Effortful
Intuitive                        Reflective
Domain specific                  Domain general
Pragmatic                        Logical
Associative                      Rulish
High capacity                    Low capacity
Evolutionarily old               Evolutionarily young

* Adapted from Frankish and Evans, “The duality of mind: A historical perspective.”
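The inferential logic of the IAT paradigm described above is simple enough to sketch. What follows is a toy illustration only: the reaction times are invented, the function name is mine, and the score computed is a crude analogue of, not the actual algorithm behind, published IAT scoring.

```python
# Toy illustration of the arithmetic behind IAT-style scoring.
# All reaction times are invented; this is not the real IAT algorithm.

from statistics import mean, stdev

def iat_d_score(congruent_ms, incongruent_ms):
    """Latency difference between pairing conditions, scaled by the
    pooled standard deviation of all trials."""
    pooled_sd = stdev(congruent_ms + incongruent_ms)
    return (mean(incongruent_ms) - mean(congruent_ms)) / pooled_sd

# Hypothetical reaction times in milliseconds.
congruent = [612, 587, 640, 598, 605]    # pairing the subject finds easier
incongruent = [702, 688, 730, 695, 710]  # pairing requiring more effort

score = iat_d_score(congruent, incongruent)
print(round(score, 2))  # positive: slower responses under the incongruent pairing
```

The point of interest is that nothing ‘attitudinal’ appears anywhere in the computation: the ‘implicit bias’ is inferred entirely from differential effort, exactly as the Hidden Constraint Model would predict.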

What are called ‘dual process’ or ‘dual system’ theories of cognition are essentially experimentally driven complications of the crude dichotomy between unconscious/implicit and conscious/explicit problem solving that has been pondered since ancient times. As granular as this emerging empirical picture remains, it already poses a grave threat to our traditional explicitations of the implicit. Our cognitive capacities, it turns out, are far more fractionate, contingent, and opaque than we ever imagined. Decisions can be tracked prior to a subject’s ability to report them (Haynes, 2008). The feeling of willing can be readily tricked, and thus stands revealed as interpretative (Wegner, 2002; Pronin, 2009). Memory turns out to be fractionate and nonveridical (See Bechtel, 2008, for review). Moral argumentation is self-promotional rather than truth-seeking (Haidt, 2012). Various attitudes appear to be introspectively inaccessible (See Carruthers, 2011, for extensive review). The feeling of certainty has a dubious connection to rational warrant (Burton, 2008). The list of such findings continually grows, revealing an ‘implicit’ that consistently undermines and contradicts our traditional and intuitive self-image—what Sellars famously termed our Manifest Image.

As Frankish and Evans (2009) write in their historical perspective on dual system theories:

“The idea that we have ‘two minds’ only one of which corresponds to personal, volitional cognition, has also wide implications beyond cognitive science. The fact that much of our thought and behaviour is controlled by automatic, subpersonal, and inaccessible cognitive processes challenges our most fundamental and cherished notions about personal and legal responsibility. This has major ramifications for social sciences such as economics, sociology, and social policy. As implied by some contemporary researchers … dual process theory also has enormous implications for educational theory and practice. As the theory becomes better understood and more widely disseminated, its implications for many aspects of society and academia will need to be thoroughly explored. In terms of its wider significance, the story of dual-process theorizing is just beginning.” 25

Given the rhetorical constraints imposed by their genre, this amounts to the strident claim that a genuine revolution in our understanding of the human is underway, one that could humble us out of existence. The simple question is, Where does that revolution end?

Consider what might be called the ‘Worst Case Scenario’ (WCS). What if it were the case that conscious experience and cognition have evolved in such a way that the higher dimensional, natural truth of the implicit utterly exceeds our capacity to effectively cognize conscious experience and cognition outside a narrow heuristic range? In other words, what if the philosophical Accessibility Conceit were almost entirely unwarranted, because metacognition, no matter how long it retrospects or how ingeniously it infers, only accesses information pertinent to a very narrow band of problem solving?

Now I have a number of arguments for why this is very likely the case, but in lieu of those arguments, it will serve to consider the eerie way our contemporary disarray regarding the implicit actually exemplifies WCS. People, of course, continue using the Everyday Implicit the way we always have. Philosophers continue positing their incompatible versions of the Philosophical Implicit the way they have for millennia. And scientists researching the Natural Implicit continue accumulating data, articulating a picture that seems to contradict more and more of our everyday and philosophical intuitions as it gains dimensionality.

Given WCS, we might expect that the increasing dimensionality of our understanding would leave the functionality of the Everyday Implicit intact, that it would continue to do what it evolved to do, simply because it functions the way it does regardless of what we learn. At the same time, however, we might expect that the growing fidelity of the Natural Implicit would slowly delegitimize our philosophical explications of that implicit, not only because those explications amount to little more than guesswork, but because of the fundamental incompatibility of the intentional and the causal conceptual registers.

Precisely because the Everyday Implicit is so robustly functional, however, our ability to gerrymander experimental contexts around it should come as no surprise. And we should expect that those invested in the Accessibility Conceit would take the scientific operationalization of various intentional concepts as proof of 1) their objective existence, and 2) the fact that only more cognitive labour, conceptual, empirical, or both, is required.

If WCS were true, in other words, one might expect that cognitive sciences invested in the Everyday and Philosophical Implicit, like psychology, would find themselves inexorably gravitating about the Natural Implicit as its dimensionality increased. One might expect, in other words, that the Psychological Implicit would become a kind of decaying Necker Cube, an ‘unstable bi-stable concept,’ one that would alternately appear to correspond to the Everyday and Philosophical Implicit less and less, and to the Natural Implicit more and more.

Part Two considers this process in more detail.

Davidson’s Fork: An Eliminativist Radicalization of Radical Interpretation

by rsbakker

Davidson’s primary claim to philosophical fame lies in the substitution of the hoary question of meaning qua meaning with the more tractable question of what we need to know to understand others—the question of interpretation. Transforming the question of meaning into the question of interpretation forces considerations of meaning to account for the methodologies and kinds of evidence required to understand meaning. And this evidence happens to be empirical: the kinds of sounds actual speakers make in actual environments. Radical interpretation, you might say, is useful precisely because of the way the effortlessness of everyday interpretation obscures this fact. Starting from scratch allows our actual resources to come to the fore, as well as the need to continually test our formulations.

But it immediately confronts us with a conundrum. Radical Interpretation, as Davidson points out, requires some way of bootstrapping the interdependent roles played by belief and meaning. “Since we cannot hope to interpret linguistic activity without knowing what a speaker believes,” he writes, “and cannot found a theory of what he means on a prior discovery of his beliefs and intentions, I conclude that in interpreting utterances from scratch—in radical interpretation—we must somehow deliver simultaneously a theory of belief and a theory of meaning” (“Belief and the Basis of Meaning,” Inquiries into Truth and Interpretation, 144). The problem is that the interpretation of linguistic activity seems to require that we know what a speaker believes, knowledge that we can only secure if we already know what a speaker means.

The enormously influential solution Davidson gives the problem lies in the way certain, primitive beliefs can be non-linguistically cognized on the assumption of the speaker’s rationality. If we assume that the speaker believes as he should, that he believes it is raining when it is raining, snowing when it is snowing, and so on, if we take interpretative Charity as our principle, we have a chance of gradually correlating various utterances with the various conditions that make them true, of constructing interpretations applicable in practice.

Since Charity seems to be a presupposition of any interpretation whatsoever, the question of what it consists in would seem to become a kind of transcendental battleground. This is what makes Davidson such an important fork in the philosophical road. If you think Charity involves something irreducibly normative, then you think Davidson has struck upon interpretation as the locus requiring theoretical intentional cognition to be solved, a truly transcendental domain. So Brandom, for instance, takes Dennett’s interpretation of Charity in the form of the Intentional Stance as the foundation of his grand normative metaphysics (See, Making It Explicit, 55-62). What makes this such a slick move is the way it allows the Normativist to have things both ways, to remain an interpretativist (though Brandom does ultimately subscribe to original intentionality in Making It Explicit) about the reality of norms, while nevertheless treating norms as entirely real. Charity, in other words, provides a way to at once deny the natural reality of norms, while insisting they are real properties. Fictions possessing teeth.

If, on the other hand, you think Charity is not something irreducibly normative, then you think Davidson has struck upon interpretation as the locus where the glaring shortcomings of the transcendental are made plain. The problem of Radical Interpretation is the problem of interpreting behaviour. This is the whole point of going back to translation or interpretation in the first place: to start ‘from scratch,’ asking what, at minimum, is required for successful linguistic communication. By revealing behaviour as the primary source of information, Radical Interpretation shows how the problem is wholly empirical, how observation is all we have to go on. The second-order realm postulated by the Normativist simply does not exist, and as such, has nothing useful to offer the actual, empirical problem of translation.

As Stephen Turner writes:

“For Davidson, this whole machinery of a fixed set of normative practices revealed in the enthymemes of ordinary justificatory usage is simply unnecessary. We have no privileged access to meaning which we can then expressivistically articulate, because there is nothing like this—no massive structure of normative practices—to access. Instead we try to follow our fellow beings and their reasoning and acting, including their speaking: We make them intelligible. And we have a tool other than the normal machinery of predictive science that makes this possible: our own rationality.” “Davidson’s Normativity,” 364

Certainly various normative regimes/artifacts are useful (like Decision Theory), and others indispensable (like some formulation of predicate logic), but indispensability is not necessity. And ‘following,’ as Turner calls it, requires only imagination and empathy, not the possession of some kind of concept (which is somehow efficacious even though it doesn’t exist in nature). It is an empirical matter for cognitive science, not armchair theorizing, to decide.

Turner has spent decades developing what is far and away the most comprehensive critique of what he terms Normativism that I’ve ever encountered. His most recent book, Explaining the Normative, is essential reading for anyone attempting to gain perspective on Sellarsian attempts to recoup some essential domain for philosophy. For those interested in post-intentional philosophy more generally, and in ways to recharacterize various domains without ontologizing (or ‘quasi-ontologizing’) intentionality in the form of ‘practices,’ ‘language games,’ ‘games of giving and asking for reasons,’ and so on, Turner is the place to start.

I hope to post a review of Explaining the Normative and delve into Turner’s views in greater detail in the near future, but for the nonce, I want to stick with Davidson. Recently reading Turner’s account of Davidson’s attitude to intentionality (“Davidson’s Normativity”) was something of a revelation for me. For the first time, I think I can interpret Radical Interpretation in my own terms. Blind Brain Theory provides a way to read Davidson’s account as an early eliminativist approximation of a full-blown naturalistic theory of interpretation.

A quick way to grasp the kernel of Blind Brain Theory runs as follows (a more thorough pass can be found here). The cause of my belief of a blue sky outside today is, of course, the blue sky outside today. But it is not as though I experience the blue sky causing me to experience the blue sky—I simply experience the blue sky. The ‘externalist’ axis of causation—the medial, or enabling, axis—is entirely occluded. All the machinery responsible for conscious experience is neglected: causal provenance is a victim of what might be called medial neglect. Now the fact that we can metacognize experience means that we’ve evolved some kind of metacognitive capacity, machinery for solving problems that require the brain to interpret its own operations, problems such as, say, ‘holding your tongue at Thanksgiving dinner.’ Medial neglect, as one might imagine, imposes a profound constraint on metacognitive problem-solving: namely, that only those problems that can be solved absent causal information can be solved at all. Given the astronomical causal complexities underwriting experience, this makes metacognitive problem-solving heuristic in the extreme. Metacognition hangs sideways in a system it cannot possibly hope to cognize in anything remotely approaching a high-dimensional manner, the manner that our brain cognizes its environments more generally.

If one views philosophical reflection as an exaptation of our evolved metacognitive problem-solvers for the purposes of theorizing the nature of experience, one can assume it has inherited this constraint. If metacognition cannot access information regarding the actual processes responsible for experience for the solution of any problem, then neither can philosophical reflection on experience. And since nature is causal, this is tantamount to saying that, for the purposes of theoretical metacognition at least, experience has no nature to be solved. And this raises the question of just what—if anything—theoretical metacognition (philosophical reflection) is ‘solving.’

In essence, Blind Brain Theory provides an empirical account of the notorious intractability of those philosophical problems arising out of theoretical metacognition. Traditional philosophical reflection, it claims, trades in a variety of different metacognitive illusions—many of which can be diagnosed and explained away, given the conceptual resources Blind Brain Theory provides. On its terms, the traditional dichotomy between natural and intentional concepts/phenomena is entirely to be expected—in fact, we should expect sapient aliens possessing convergently evolved brains to suffer their own versions of the same dichotomy.

Intentionalism takes our blindness to first-person cognitive activity as a kind of ontological demarcation when it is just an artifact of the way the integrated, high-dimensional systems registering the external environment fractures into an assembly of low-dimensional hacks registering the ‘inner.’ There is no demarcation, no ‘subject/object’ dichotomy, just environmentally integrated systems that cannot automatically cognize themselves as such (and so resort to hacks). Neglect allows us to see this dichotomy as a metacognitive artifact, and to thus interpret the first-person in terms entirely continuous with the third-person. Blind Brain Theory, in other words, naturalizes the intentional. It ‘externalizes’ everything.

So how does this picture bear on the issue of Charity and Radical Interpretation? In numerous ways, I think, many of which Davidson would not approve, but which do have the virtue of making his central claims perhaps more naturalistically perspicuous.

From the standpoint of our brains linguistically solving other brains, we take it for granted that solving other organisms requires solving something in addition to the inorganic structure and dynamics of our environments. The behaviour taken as our evidential base in Radical Interpretation already requires a vast amount of machinery and work. So basically we’re talking about the machinery and work required over and above this baseline—the machinery and work required to make behaviour intentionally, as opposed to merely causally, intelligible.

The primary problem is that the activity of intentional interpretation, unlike the activity interpreted, almost escapes cognition altogether. To say, as so many philosophers so often do, that intentionality is ‘irreducible’ is to say that it is naturalistically occult. So any account of interpretation automatically trades in blind spots, in the concatenation of activities that we cannot cognize. In the terms of Blind Brain Theory, any account of interpretation has to come to grips with medial neglect.

From this perspective, one can see Davidson’s project as an attempt to bootstrap an account of interpretation that remains honest or sensitive to medial neglect, the fact that 1) our brain simply cannot immediately cognize itself as a brain, which is to say, in terms continuous with its cognition of nature; and 2) that our brain cannot immediately cognize this inability, and so assumes no such inability. Thanks to medial neglect, every act of interpretation is hopelessly obscure. And this places a profound constraint on our ability to theoretically explicate interpretation. Certainly we have a variety of medial posits drawn from the vocabulary of folk-psychology, but all of these are naturalistically obscure, and so function as unexplained explainers. So the challenge for Davidson, then, is to theorize interpretation in a manner that respects what can and cannot be cognized—to regiment our blind spots in a manner that generates real, practically applicable understanding.

In other words, Davidson begins by biting the medial inscrutability bullet. If medial neglect makes it impossible to theoretically explicate medial terms, then perhaps we can find a way to leverage what (causally inexplicable) understanding they do seem to provide into something more regimented, into an apparatus, you might say, that poses all the mysteries as effectively as possible (and in this sense, his project is a direct descendent of Quine’s).

This is the signature virtue of Tarski’s ‘Convention T.’ “[T]he striking thing about T-sentences,” Davidson writes, “is that whatever machinery must operate to produce them, and whatever ontological wheels must turn, in the end a T-sentence states the truth conditions of a sentence using resources no richer than, because the same as, those of the sentence itself” (“Radical Interpretation,” 132). By modifying Tarski’s formulation so that it takes truth instead of translation as basic, he can generate a theory based on an intentional, unexplained explainer—truth—that produces empirically testable results. Given that interpretation is the practical goal, the ontological status of the theory itself is moot: “All this apparatus is properly viewed as theoretical construction, beyond the reach of direct verification,” he writes. “It has done its work provided only it entails testable results in the form of T-sentences, and these make no mention of the machinery” (133).
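For readers unfamiliar with Tarski’s device, Convention T can be stated schematically as follows (the German sentence is the standard textbook instance, not Davidson’s own example):

```latex
% Convention T: an adequate truth theory for an object language L must
% entail, for every sentence s of L, an instance of the schema
%     s \text{ is true-in-} L \iff p
% where p is a translation of s into the metalanguage. The classic instance:
\text{``Schnee ist weiss'' is true-in-German} \iff \text{snow is white.}
```

Note how the schema does exactly what Davidson prizes: it states truth conditions using no resources richer than those of the interpreted sentence itself.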

The apparatus is warranted only to the extent that it enables further cognition. Indeed, given medial neglect, no further metacognitive explication of the apparatus is even possible. It may prove indispensable, but only empirically so, the way a hammer is to framing, and not as, say, the breath of God is to life, or more mysterious still, in some post facto ‘virtual yet efficacious’ sense. In fact, both of these latter characterizations betray the profundity of medial neglect, how readily we intuit the absence of various dimensions of information, say those of space and time, as a positive, as some kind of inexplicable something that, as Turner has been arguing for decades, begs far more questions than it pretends to solve.

The brain’s complexity is such, once again, that it cannot maintain anything remotely approaching the high-dimensional, all-purpose covariational regime it maintains with its immediate environment with itself. Only a variety of low-dimensional, special purpose cognitive tools are possible—an assemblage of ‘hacks.’ Thus the low-dimensional parade of inexplicables that constitute the ‘first-person.’ This is why complicating your intentional regimentations beyond what is practically needed simply makes no sense. Their status as specialized hacks means we have every reason to assume their misapplication in any given theoretical context. This isn’t to say that exaptation to other problems isn’t possible, only that efficacious problem-solving is our only guide to applicability. The normative proof is in the empirical pudding. Short of practical applications, high-dimensional solutions, the theoretician is simply stacking unexplained explainers into baroque piles. There’s a reason why second-order normative architectures rise and fall as fads. Their first-order moorings are the same, but as the Only-game-in-town Effect erodes beneath waves of alternative interpretation, they eventually break apart, often to be salvaged into some new account that feels so compelling for appearing, to some handful of souls at least, to be the only game in town at a later date.

So for Davidson, characterizing Radical Interpretation in terms of truth amounts to characterizing Radical Interpretation in terms of a genuine unexplained explainer, an activity that we can pragmatically decompose and rearticulate, and nothing more. The astonishing degree to which the behaviour itself underdetermines the interpretations made simply speaks to the radically heuristic nature of the cognitive activities underwriting interpretation. It demonstrates, in other words, the incredibly domain specific nature of the cognitive tools used. A fortiori, it calls into question the assumption that whatever information metacognition can glean is remotely sufficient for theoretically cognizing the structure and dynamics of those tools.

From the standpoint of reflection, intentional cognition or ‘mindreading’ almost entirely amounts to simply ‘getting it’ (or as Turner says, ‘following’). Given the paucity of information over and above the sensory, our behaviour cognizing activity strikes us as non-dimensional in the course of that cognizing—medial neglect renders our ongoing cognitive activity invisible. The odd invisibility of our own communicative performances—the way, for instance, the telling (or listening) ‘disappears’ into the told—simply indicates the axis of medial neglect, the fact that we’re talking about activities the brain cannot identify or situate in the high-dimensional idiom of environmental cognition. At best, evolution has provided metacognitive access to various ‘flavours of activity,’ if you will, vague ways of ‘getting our getting’ or ‘following our following’ the behaviour of others, and not much more—as the history of philosophy should attest!

‘Linguistic understanding,’ on this account, amounts to standing in certain actual and potential systematic, causal relations with another speaker—of being a machine attuned to natural and social environments in some specific way. The great theoretical virtue of Blind Brain Theory is the way it allows us to reframe apparently essential semantic activities like interpretation in mechanical terms. When an anthropologist learns the language of another speaker nothing magical is imprinted or imbibed. The anthropologist ‘understands’ that the speaker is systematically interrelated to his environment the same as he, and so begins the painstaking process of mapping the other’s relations onto his own via observationally derived information regarding the speaker’s utterances in various circumstances. The behaviour-enabling covariational regime of one individual comes to systematically covary with that of another individual and thus form a circuit between them and the world. The ‘meaning’ now ‘shared’ consists in nothing more than this entirely mechanical ‘triangulation.’ Each stands in the relation of component to the other, forming a singular superordinate system possessing efficacies that did not previously exist. The possible advantages of ‘teamwork’ increase exponentially—which is arguably the primary reason our species evolved language at all.
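The ‘mapping’ described above can be given a toy illustration (my sketch, not anything in the text): a cross-situational learner that pairs another speaker’s utterances with the circumstances it itself discriminates, so that ‘meaning’ falls out of nothing but accumulated covariance. All names here (`observe`, `best_guess`) are illustrative inventions.

```python
from collections import defaultdict

class CrossSituationalLearner:
    """Toy 'triangulation': map a speaker's utterances onto the
    learner's own environmental categories via co-occurrence alone."""

    def __init__(self):
        # counts[word][referent] = times word heard while referent present
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, utterance, context):
        """An utterance (words) heard in a context (features the learner
        itself discriminates). Nothing is 'imbibed'; counts simply covary."""
        for word in utterance:
            for referent in context:
                self.counts[word][referent] += 1

    def best_guess(self, word):
        """The referent most systematically covarying with the word."""
        candidates = self.counts[word]
        return max(candidates, key=candidates.get) if candidates else None

learner = CrossSituationalLearner()
learner.observe(["es", "regnet"], ["rain", "sky"])
learner.observe(["es", "ist", "kalt"], ["cold", "sky"])
learner.observe(["regnet"], ["rain"])
print(learner.best_guess("regnet"))  # "rain" wins on co-occurrence counts
```

The point of the sketch is only that the ‘circuit’ is mechanical through and through: two covariational regimes come to covary with one another, and nothing semantic needs to be added over and above the statistics.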

The perplexities pile on when we begin demanding semantic answers to our semantic questions, when we ask, What is meaning? expecting an answer that accords with our experiences of meaning. Given that we possess nothing short of our experience of meaning with which to compare any theory of meaning, the demand that such a theory accord with that experience seems, on the face of things, to be eminently reasonable. But it still behooves us to interrogate the adequacy of that ‘experience as metacognized,’ especially now, given all that we have learned the past two decades. On a converging number of accounts, human consciousness is a mechanism for selecting, preserving, and broadcasting information for more general neural consumption. When we theoretically reflect on cognitive activity, such as ‘getting’ or ‘following,’ our best research tells us we are relying on the memory traces of previous broadcasts. The situation poses a metacognitive nightmare, to say the least. Even if we could trust those memory traces to provide some kind of all-purpose schema (and we can’t), we have no access to the larger neurofunctional context of the broadcast, what produced the information and what consumed it for what—all we have are low-dimensional fragments that appear to be ethereal wholes. It’s as if we’re attempting to solve for a car using only its fuse-panel diagram—worse!

Like Quine before him, Davidson has no way of getting around intentionality, and so, also like Quine, he attempts to pass through it with as much epistemic piety as possible. But his ‘intentional instrumentalism’ will only take him so far. Short of any means of naturalizing meaning, he regularly finds himself struggling to see his way clear. The problem of first-person authority provides an illustrative case in point. The assumption that some foreign language speaker ‘holds true’ making utterances the way you ‘hold true’ making utterances can only facilitate interpretation, assist in ‘following his meaning,’ if it is the case that you can follow your own meaning. A number of issues arise out of this, not the least of which is the suggestion that interpretation seems to require the very kind of metacognitive access that I have consistently been denying!

But following one’s own meaning is every bit as mysterious as following another’s. Ownership of utterances can be catastrophically misattributed in a number of brain pathologies. When it comes to self/other speech comprehension, we know the same machinery is involved, only yoked in different ways, and we know that machinery utterly eludes metacognition. To reiterate: the cryptic peculiarities of understanding meaning (and all other intentional phenomena) are largely the result of medial neglect, the point where human cognition, overmatched by its own complexity, divides to heuristically conquer. In a profound sense, metacognition finds itself in the same straits regarding the brain as social cognition does regarding other brains.

So what does the asymmetry of ‘first-person authority,’ the fact that meanings attributed to others can be wrong while meanings attributed to oneself cannot, amount to? Nothing more than the fact that the systematic integrity of you, as a blind system, is ‘dedicated’ in a way that the systematic integrity of our interpretative relations is not. ‘Teamwork machines’ are transitory couplings requiring real work to get off the ground, and then maintain against slippages. The ‘asymmetry’ Davidson wants to explain consists in nothing more than this. No work is required to ‘follow oneself,’ whereas work is required to follow others.

For all the astronomical biological complexity involved, it really is as simple as this. The philosophical hairball presently suffocating the issue of first-person authority is an artifact of the way that theoretical metacognition, blinkered by medial neglect, retrospectively schematizes the issue in terms of meaning. The ontologization of meaning transforms the question of first-person authority into an epistemic question, a question of how one could know. This, of course, divides into the question of implicit versus explicit knowing. Since all these concepts (knowing, implicit, explicit) are naturalistically occult, interpretation can be gamed indefinitely. Despite his epistemic piety, Davidson’s attempt to solve for first-person authority using intentional idioms was doomed from the outset.

It’s worth noting an interesting connection to Heidegger in all this, a way, perhaps, to see the shadow of Blind Brain Theory operating in a quite different philosophical system. Heidegger, who harboured his own doubts regarding philosophical reflection, would see the philosophical hairball described above as yet another consequence of the ‘metaphysics of presence,’ the elision of the ‘ontological difference’ between being and beings. For him, the problem isn’t that meaning is being ontologized so much as it is being ontologized in the wrong way. His conflation of meaning with being essentially dissolves the epistemic problem the same way as my elimination of meaning, albeit in a manner that renders everything intentionally occult.

So what is meaning? A matter of intersystematic calibration. When we ask someone to ‘explain what they mean’ we are asking them to tweak our linguistic machinery so as to facilitate function. The details are, without a doubt, astronomically complex, and almost certain to surprise and trouble us. But one of the great virtues of mechanistic explanation lies in the nonmysterious way it generalizes over functions, moving from proteins to organelles to cells to organs to organisms to collectives to ecologies to biospheres and so on. The ‘physical stance’ scales up with far more economy than some (like Dennett) would have you believe. And since it comprises our most reliable explanatory idiom, we should expect it to eventually yield the kind of clarity evinced above. Is it simply a coincidence that the interpretative asymmetry that Davidson and so many other philosophers have intentionally characterized directly corresponds with the kind of work required to maintain mechanical systematicity between two distinct systems? Do we just happen to ‘get the meaning wrong’ whenever covariant slippages occur, or is the former simply the latter glimpsed darkly?

Which takes us, at long last, to the issue of ‘Charity,’ the indispensability of taking others as reliably holding their utterances true to the process of interpretation. As should be clear by now, there is no such thing. We no more take Charity to the interpretation of behaviour than your wireless router takes Charity to your ISP. There is no ‘attitude of holding true,’ no ‘intentional stance.’ Certainly, sometimes we ‘try’—or are at least conscious of making an effort. Otherwise understanding simply happens. The question is simply how we can fill in the blanks in a manner that converges on actual theoretical cognition, as opposed to endless regress. Behaviour is tracked, social heuristics are cued, an interpretation is neurally selected for conscious broadcasting, and we say, ‘Ah! “Es regnet” means “It is raining”!’

The Eliminativist renovation of Radical Interpretation makes plain everything that theoretical reflection has hitherto neglected. In other words, what it makes plain is the ‘pre-established harmony’ needed to follow another, the monstrous amount of evolutionary and cultural stage-setting required simply to get to interpretative scratch. The enormity of this stage-setting is directly related to the heuristic specificity of the systems we’ve developed to manage it, the very specificity that renders second-order discourse on the nature of ‘intentional phenomena’ dubious in the extreme.

As the skeptics have been arguing since antiquity.