Three Pound Brain

No bells, just whistling in the dark…


The Eliminativistic Implicit II: Brandom in the Pool of Shiloam

by rsbakker


In “The Eliminativistic Implicit I,” we saw how the implicit anchors the communicative solution of humans and their activities. Since comprehension consists in establishing connections between behaviours and their precursors, the inscrutability of those precursors requires we use explanatory posits, suppositional surrogate precursors, to comprehend ourselves and our fellows. The ‘implicit’ is a kind of compensatory mechanism, a communicative prosthesis for neglect, a ‘blank box’ for the post facto proposal of various, abductively warranted precursors.

We also saw how the implicit possessed a number of different incarnations:

1) The Everyday Implicit: The regime of folk posits adapted to solve various practical problems involving humans (and animals).

2) The Philosophical Implicit: The regime of intentional posits thought to solve aspects of the human in general.

3) The Psychological Implicit: The regime of functional posits thought to solve various aspects of the human in general.

4) The Mechanical Implicit: The regime of neurobiological posits thought to solve various aspects of the human in general.

The overarching argument I’m pressing is that only (4) holds the key to any genuine theoretical understanding of (1-3). On my account of (4), (1) is an adaptive component of socio-communicative cognition, (2) is largely an artifact of theoretical misapplications of those heuristic systems, and (3) represents an empirical attempt to approximate (4) on the basis of indirect behavioural evidence.

In this episode, the idea is to illustrate how both the problems and the apparent successes of the Philosophical Implicit can be parsimoniously explained in terms of neglect and heuristic misapplication via Robert Brandom’s magisterial Making It Explicit. We’ll consider what motivates Brandom’s normative pragmatism, why he thinks that only normative cognition can explain normative cognition. Without this motivation, the explanation of normative cognition defaults to natural cognition (epitomized by science), and Brandom quite simply has no subject matter. The cornerstone of his case is the Wittgensteinian gerrymandering argument against Regularism. As I hope to show, Blind Brain Theory dismantles this argument with surprising facility. And it does so, moreover, in a manner that explains why so many theorists (including myself at one time!) are so inclined to find the argument convincing. As it turns out, the intuitions that motivate Normativism turn on a cluster of quite inevitable metacognitive illusions.


Blind Agents

Making It Explicit: Reasoning, Representing, and Discursive Commitment is easily the most sustained and nuanced philosophical consideration of the implicit I’ve encountered. I was gobsmacked when I first read it in the late 90s. Stylistically, it had a combination of Heideggerian density and Analytic clarity that I found narcotic. Argumentatively, I was deeply impressed by the way Brandom’s interpretive functionalism seemed to actually pull intentional facts from natural hats, how his account of communal ‘taking as’ seemed to render normativity at once ‘natural’ and autonomous. For a time, I bought into a great deal of what Brandom had to say—I was particularly interested in working my ‘frame ontology’ into his normative framework. Making It Explicit had become a big part of my dissertation… ere epic fantasy saved my life!

I now think I was deluded.

In this work, Brandom takes nothing less than the explication of the ‘game of giving and asking for reasons’ as his task, “making explicit the implicit structure characteristic of discursive practice as such” (649). He wants to make explicit the role that making explicit plays in discursive cognition. It’s worth pausing to ponder the fact that we do so very many things with only the most hazy or granular second-order understanding. It might seem so platitudinal as to go without saying, but it’s worth noting in passing at least: Looming large in the implicature of all accounts such as Brandom’s is the claim that we somehow know the world without ever knowing how we know the world.

As we saw in the previous installment, the implicit designates a kind of profound cognitive incapacity, a lack of knowledge regarding our own activities. The implicit entails what might be called a Blind Agent Thesis, or BAT. Brandom, by his own admission, is attempting to generalize the behaviour of the most complicated biomechanical system known to science while remaining almost entirely blind to the functioning of that system. (He just thinks he’s operating at an ‘autonomous social functional’ level.) He is, as we shall see, effectively arguing his own particular BAT.

Insofar as every theoretician, myself included, is trying to show ‘what everyone is missing,’ there’s a sense in which something like BAT is hard to deny. Why all the blather, otherwise? But this negative characterization clearly has a problem: How could we do anything without knowing how to do it? Obviously we have to ‘know how’ in some manner, otherwise we wouldn’t be able to do anything at all! This is the sense in which the implicit can be positively characterized as a species of knowing in its own right. And this leads us to the quasi-paradoxical understanding of the implicit as ‘knowing without knowing,’ a knowing how to do something without knowing how to discursively explain that doing.

Making explicit, Brandom is saying, has never been adequately made explicit—this despite millennia of philosophical disputation. He (unlike Kant, say) never offers any reason why this is the case, any consideration of what it is about making explicit in particular that should render it so resistant to explication—but then philosophers are generally prone to take the difficulty of their problems as a given. (I’m not the only one out there shouting that the problem I happen to be working on is, like, the most important problem ever!) I mention this because any attempt to assay the difficulty of the problem of making making-explicit explicit would have explicitly begged the question of whether he (or anyone else) possessed the resources required to solve the problem.

You know, as blind and all.

What Brandom provides instead is an elegant reprise of the problem’s history, beginning with Kant’s fundamental ‘transformation of perspective,’ the way he made explicit the hitherto implicit normative dimension of making explicit, what allowed him “to talk about the natural necessity whose recognition is implicit in cognitive or theoretical activity, and the moral necessity whose recognition is implicit in practical activity, as species of one genus” (10).

Kant, in effect, had discovered something that humanity had been all but oblivious to: the essentially prescriptive nature of making explicit. Of course, Brandom almost entirely eschews Kant’s metaphysical commitments: for him, normative constraint lies in the attributions of other agents and nowhere else. Kant, in other words, had not so much illuminated the darkness of the implicit (which he baroquely misconstrued as ‘transcendental’) as snatched one crucial glimpse of its nature.

Brandom attributes the next glimpse to Frege, with his insistence on “respecting and enforcing the distinction between the normative significance of applying concepts and the causal consequences of doing so” (11). What Frege made explicit about making explicit, in other words, was its systematic antipathy to causal explanation. As Brandom writes:

“Psychologism misunderstands the pragmatic significance of semantic contents. It cannot make intelligible the applicability of norms governing the acts that exhibit them. The force of those acts is a prescriptive rather than a descriptive affair; apart from their liability to assessments of judgments as true and inferences as correct, there is no such thing as judgment or inference. To try to analyze the conceptual contents of judgments in terms of habits or dispositions governing the sequences of brain states or mentalistically conceived ideas is to settle on the wrong sort of modality, on causal necessitation rather than rational or cognitive right.” (12)

Normativity is naturalistically inscrutable, and thanks to Kant (“the great re-enchanter,” as Turner (2010) calls him), we know that making explicit is normative. Any explication of the implicit of making explicit, therefore, cannot be causal—which is to say, mechanistic. Frege, in other words, makes explicit a crucial consequence of Kant’s watershed insight: the fact that making explicit can only be made explicit in normative, as opposed to natural, terms. Explication is an intrinsically normative activity. Making causal constraints explicit at most describes what systems will do, never prescribes what they should do. Since we now know that explication is an intrinsically normative activity, making explicit the governing causal constraints has the effect of rendering the activity unintelligible. The only way to make explication theoretically explicit is to make explicit the implicit normative constraints that make it possible.

Which leads Brandom to the third main figure of his brief history, Wittgenstein. Thus far, we know only that explication is an intrinsically normative affair—our picture of making explicit is granular in the extreme. What are norms? Why do they have the curious ‘force’ that they do? What does that force consist in? Even if Kant is only credited with making explicit the normativity of making explicit, you could say the bulk of his project is devoted to exploring questions precisely like these. Consider, for instance, his explication of reason:

“But of reason one cannot say that before the state in which it determines the power of choice, another state precedes in which this state itself is determined. For since reason itself is not an appearance and is not subject at all to any conditions of sensibility, no temporal sequence takes place in it even as to its causality, and thus the dynamical law of nature, which determines the temporal sequence according to rules, cannot be applied to it.” Kant, The Critique of Pure Reason, 543

Reason, in other words, is transcendental, something literally outside nature as we experience it, outside time, outside space, and yet somehow fundamentally internal to what we are. The how of human cognition, Kant believed, lies outside the circuit of human cognition, save for what could be fathomed via transcendental deduction. Kant, in other words, not only had his own account of what the implicit was, he also had an account for what rendered it so difficult to make explicit in the first place!

He had his own version of BAT, what might be called a Transcendental Blind Agent Thesis, or T-BAT.

Brandom, however, far prefers the later Wittgenstein’s answers to the question of how the intrinsic normativity of making explicit should be understood. As he writes,

“Wittgenstein argues that proprieties of performance that are governed by explicit rules do not form an autonomous stratum of normative statuses, one that could exist though no other did. Rather, proprieties governed by explicit rules rest on proprieties governed by practice. Norms that are explicit in the form of rules presuppose norms implicit in practices.” (20)

Kant’s transcendental represents just such an ‘autonomous stratum of normative statuses.’ The problem with such a stratum, aside from the extravagant ontological commitments allegedly entailed, is that it seems incapable of dealing with a peculiar characteristic of normative assessment known since ancient times in the form of Agrippa’s trilemma or the ‘problem of the criterion.’ The appeal to explicit rules is habitual, perhaps even instinctive, when we find ourselves challenged on some point of communication. Given the regularity with which such appeals succeed, it seems natural to assume that the propriety of any given communicative act turns on the rules we are prone to cite when challenged. The obvious problem, however, is that rule citing is itself a communicative act that can be challenged. It stems from occluded precursors the same as anything else.

What Wittgenstein famously argues is that what we’re appealing to in these instances is the assent of our interlocutors. If our interlocutors happen to disagree with our interpretation of the rule, suddenly we find ourselves with two disputes, two improprieties, rather than one. The explicit appeal to some rule, in other words, is actually an implicit appeal to some shared system of norms that we think will license our communicative act. This is the upshot of Wittgenstein’s regress of rules argument, the contention that “while rules can codify the pragmatic normative significance of claims, they do so only against a background of practices permitting the distinguishing of correct from incorrect applications of those rules” (22).

Since this account has become gospel in certain philosophical corners, it might pay to block out the precise way this Wittgensteinian explication of the implicit does and does not differ from the Kantian explication. One comforting thing about Wittgenstein’s move, from a naturalist’s standpoint at least, is that it adverts to the higher-dimensionality of actual practices—it’s pragmatism, in effect. Where Kant’s making explicit is governed from somewhere beyond the grave, Wittgenstein’s is governed by your friends, family, and neighbours. If you had to name the signature difference between their views, you could cite this difference in dimensionality, the ‘solidity’ or ‘corporeality’ that Brandom appeals to in his bid to ground the causal efficacy of his elaborate architecture (631-2).

Put differently, the blindness on Wittgenstein’s account belongs to you and everyone you know. You could say he espouses a Communal Blind Agent Thesis, or C-BAT. The idea is that we’re continually communicating with one another while utterly oblivious as to how we’re communicating with one another. We’re so oblivious, in fact, we’re oblivious to the fact we are oblivious. Communication just happens. And when we reflect, it seems to be all that needs to happen—until, that is, the philosopher begins asking his damn questions.

It’s worth pointing out, while we’re steeping in this unnerving image of mass, communal blindness, that Wittgenstein, almost as much as Kant, was in a position analogous to empirical psychologists researching cognitive capacities back in the 1950s and 1960s. With reference to the latter, Piccinini and Craver have argued (“Integrating psychology and neuroscience: functional analyses as mechanism sketches,” 2011) that informatic penury was the mother of functional invention, that functional analysis was simply psychology’s means of making do, a way to make the constitutive implicit explicit in the absence of any substantial neuroscientific information. Kant and Wittgenstein are pretty much in the same boat, only absent any experimental means to test and regiment their guesswork. The original edition of Philosophical Investigations, in case you were wondering, was published in 1953, which means Wittgenstein’s normative contextualism was cultured in the very same informatic vacuum as functional analysis. And the high-altitude moral, of course, amounts to the same: times have changed.

The cognitive sciences have provided a tremendous amount of information regarding our implicit, neurobiological precursors, so much so that the mechanical implicit is a given. The issue now isn’t one of whether the implicit is causal/mechanical in some respect, but whether it is causal/mechanical in every respect. The question, quite simply, is one of what we are blind to. Our biology? Our ‘mental’ programming? Our ‘normative’ programming? The more we learn about our biology, the more we fill in the black box with scientific facts, the more difficult it seems to become to make sense of the latter two.


Ineliminable Inscrutability Scrutinized and Eliminated

Though he comes nowhere near framing the problem in these explicitly informatic terms, Brandom is quite aware of this threat. American pragmatism has always maintained close contact with the natural sciences, and post-Quine, at least, it has possessed more than its fair share of eliminativist inclinations. This is why he goes to such lengths to argue the ineliminability of the normative. This is why he follows his account of Kant’s discovery of the normativity of the performative implicit with an account of Frege’s critique of psychologism, and his account of Wittgenstein’s regress argument against ‘Regulism’ with an account of his gerrymandering argument against ‘Regularism.’

Regularism proposes we solve the problem of rule-following with patterns of regularities. If a given performance conforms to some pre-existing pattern of performances, then we call that performance correct or competent. If it doesn’t so conform, then we call it incorrect or incompetent. “The progress promised by such a regularity account of proprieties of practice,” Brandom writes, “lies in the possibility of specifying the pattern or regularity in purely descriptive terms and then allowing the relation between regular and irregular performance to stand in for the normative distinction between what is correct and what is not” (MIE 28). The problem with Regularism, however, is “that it threatens to obliterate the contrast between treating a performance as subject to normative assessment of some sort and treating it as subject to physical laws” (27). Thus the challenge confronting any Regularist account of rule-following, as Brandom sees it, is to account for its normative character. Everything in nature ‘follows’ the ‘rules of nature,’ the regularities isolated by the natural sciences. So what does the normativity that distinguishes human rule-following consist in?

“For a regularist account to weather this challenge, it must be able to fund a distinction between what is in fact done and what ought to be done. It must make room for the permanent possibility of mistakes, for what is done or taken to be correct nonetheless to turn out to be incorrect or inappropriate according to some rule or practice” (27).

The ultimate moral, of course, is that there’s simply no way this can be done, there’s no way to capture the distinction between what happens and what ought to happen on the basis of what merely happens. No matter what regularity the Regularist adduces ‘to play the role of norms implicit in practice,’ we find ourselves confronted by the question of whether it’s the right regularity. The fact is any number of regularities could play that role, stranding us with the question of which regularity one should conform to—which is to say, the question of the very normative distinction the Regularist set out to solve in the first place. Adverting to dispositions to pick out the relevant regularity simply defers the problem, given that “[n]obody ever acts incorrectly in the sense of violating his or her own dispositions” (29).

For Brandom, as with Wittgenstein, the problem of Regularism is intimately connected to the problem of Regulism: “The problem that Wittgenstein sets up…” he writes, “is to make sense of a notion of norms implicit in practice that will not lose either the notion of the implicitness, as regulism does, or the notion of norms, as simple regularism does” (29). To see this connection, you need only consider one of Wittgenstein’s more famous passages from Philosophical Investigations:

§217. “How am I able to obey a rule?”–if this is not a question about causes, then it is about the justification for my following the rule in the way I do.

If I have exhausted the justifications I have reached bedrock, and my spade is turned. Then I am inclined to say: “This is simply what I do.”

The idea, famously, is that rule-following is grounded, not in explicit rules, but in our actual activities, our practices. The idea, as we saw above, is that rule-following is blind. It is ‘simply what we do.’ “When I obey a rule, I do not choose,” Wittgenstein writes. “I obey the rule blindly” (§219). But if rule-following is blind, just what we find ourselves doing in certain contexts, then in what sense is it normative? Brandom quotes McDowell’s excellent (certainly from a BBT standpoint!) characterization of the problem in “Wittgenstein on Following a Rule”: “How can a performance be nothing but a ‘blind’ reaction to a situation, not an attempt to act on interpretation (thus avoiding Scylla); and be a case of going by a rule (avoiding Charybdis)?” (Mind, Value, and Reality, 242).

Wittgenstein’s challenge, in other words, is one of theorizing nonconscious rule-following in a manner that does not render normativity some inexplicable remainder. The challenge is to find some way to avoid Regulism without lapsing into Regularism. Of course, we’ve grown inured to the notion of ‘implicit norms’ as a theoretical explanatory posit, so much so as to think them almost self-evident—I know this was once the case for me. But the merest questioning quickly reveals just how odd implicit norms are. Nonconscious rule-following is automatic rule-following, after all, something mechanical, dispositional. Automaticity seems to preclude normativity, even as it remains amenable to regularities and dispositions. Although it seems obvious that evaluation and justification are things that we regularly do, that we regularly engage in normative cognition navigating our environments (natural and social), it is by no means clear that only normative posits can explain normative cognition. Given that normative cognition is another natural artifact, the product of evolution, and given the astounding explanatory successes of science, it stands to reason that natural, not supernatural, posits are likely what’s required.

All this brings us back to C-BAT, the fact that Wittgenstein’s problem, like Brandom’s, is the problem of neglect. ‘This is simply what I do,’ amounts to a confession of abject ignorance. Recall the ‘Hidden Constraint Model’ of the implicit from our previous discussion. Cognizing rule-following behaviour requires cognizing the precursors to rule-following behaviour, precursors that conscious cognition systematically neglects. Most everyone agrees on the biomechanical nature of those precursors, but Brandom (like intentionalists more generally) wants to argue that biomechanically specified regularities and dispositions are not enough, that something more is needed to understand the normative character of rule-following, given the mysterious way regularities and dispositions preclude normative cognition. The only way to avoid this outcome, he insists, is to posit some form of nonconscious normativity, a system of preconscious, pre-communicative ‘rules’ governing cognitive discourse. The upshot of Wittgenstein’s arguments against Regularism seems to be that only normative posits can adequately explain normative cognition.

But suddenly, the stakes are flipped. Just as the natural is difficult to understand in the context of the normative, so too is the normative difficult to understand in the context of the natural. For some critics, this is difficulty enough. In Explaining the Normative, for instance, Stephen Turner does an excellent job tracking, among other things, the way Normativism attempts to “take back ground lost to social science explanation” (5). He begins by providing a general overview of the Normativist approach, then shows how these self-same tactics characterized social science debates of the early twentieth-century, only to be abandoned as their shortcomings became manifest. “The history of the social sciences,” he writes, “is a history of emancipation from the intellectual propensity to intentionalize social phenomenon—this was very much part of the process that Weber called the disenchantment of the world” (147). His charge is unequivocal: “Brandom,” he writes, “proposes to re-enchant the world by re-instating the belief in normative powers, which is to say, powers in some sense outside of and distinct from the forces known to science” (4). But although this is difficult to deny in a broad stroke sense, he fails to consider (perhaps because his target is Normativism in general, and not Brandom, per se) the nuance and sensitivity Brandom brings to this very issue—enough, I think, to walk away theoretically intact.

In the next installment, I’ll consider the way Brandom achieves this via Dennett’s account of the Intentional Stance, but for the nonce, it’s important that we keep the problem of re-enchantment on the table. Brandom is arguing that the inability of natural posits to explain normative cognition warrants a form of theoretical supernaturalism, a normative metaphysics, albeit one he wants to make as naturalistically palatable as possible.

Even though neglect is absolutely essential to their analyses of Regulism and Regularism, neither Wittgenstein nor Brandom so much as pause to consider it. As astounding as it is, they simply take our utter innocence of our own natural and normative precursors as a given, an important feature of the problem ecology under consideration to be sure, but otherwise irrelevant to the normative explication of normative cognition. Any role neglect might play beyond anchoring the need for an account of implicit normativity is entirely neglected. The project of Making It Explicit is nothing other than the project of making the activity of making explicit explicit, which is to say, the project of overcoming metacognitive neglect regarding normative cognition, and yet nowhere does Brandom so much as consider just what he’s attempting to overcome.

Not surprisingly, this oversight proves catastrophic—for the whole of Normativism, and not simply Brandom.

Just consider, for instance, the way Brandom completely elides the question of the domain specificity of normative cognition. Normative cognition is a product of evolution, part of a suite of heuristic systems adapted to solve some range of social problems as effectively as possible given the resources available. It seems safe to surmise that normative cognition, as heuristic, possesses what Todd, Gigerenzer, and the ABC Research Group (2012) call an adaptive ‘problem-ecology,’ a set of environments possessing complementary information structures. Heuristics solve via the selective uptake of information, wedding them, in effect, to specific problem-solving domains. ‘Socio-cognition,’ which manages to predict, explain, even manipulate astronomically complex systems on the meagre basis of observable behaviour, is paradigmatic of a heuristic system. In the utter absence of causal information, it can draw a wide variety of reliable causal conclusions, but only within a certain family of problems. As anthropomorphism, the personification or animation of environments, shows, humans are predisposed to misapply socio-cognition to natural environments. Pseudo-solving natural environments via socio-cognition may have solved various social problems, but precious few natural ones. In fact, the process of ‘disenchantment’ can be understood as a kind of ‘rezoning’ of socio-cognition, a process of limiting its application to those problem-ecologies where it actually produces solutions.
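
To make the notion of a heuristic wedded to a problem ecology concrete, here is a minimal sketch of Gigerenzer and Goldstein’s ‘recognition heuristic,’ one of the fast and frugal heuristics studied by the ABC Research Group. The cities and the recognition set below are invented for illustration; nothing in the code is drawn from Brandom or from the Group’s own materials.

```python
# Minimal sketch of the recognition heuristic (Gigerenzer & Goldstein):
# if one of two objects is recognized and the other is not, infer that the
# recognized object scores higher on the criterion (e.g., city population).
# The cities and the 'recognized' set are invented for illustration.

recognized = {"Berlin", "Munich"}  # cities this hypothetical agent has heard of

def which_is_larger(city_a: str, city_b: str):
    a_known, b_known = city_a in recognized, city_b in recognized
    if a_known and not b_known:
        return city_a      # selective uptake: recognition alone decides
    if b_known and not a_known:
        return city_b
    return None            # heuristic is silent; some other cue must decide

print(which_is_larger("Berlin", "Bielefeld"))  # -> Berlin
print(which_is_larger("Berlin", "Munich"))     # -> None (both recognized)
```

The heuristic succeeds only in environments where recognition happens to covary with the criterion; apply it where that information structure is absent and it simply fails—precisely the pattern of ecology-bound success and out-of-ecology failure at issue here.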

Which leads us to the question: So what, then, is the adaptive problem ecology of normative cognition? More specifically, how do we know that the problem of normative cognition belongs to the problem ecology of normative cognition?

As we saw, Brandom’s argument against Regularism could itself be interpreted as a kind of ‘ecology argument,’ as a demonstration of how the problem of normative cognition does not belong to the problem ecology of natural cognition. Natural cognition cannot ‘fund the distinction between ought and is.’ Therefore the problem of normative cognition falls outside the problem ecology of natural cognition. In the absence of any alternatives, we then have an abductive case for the necessity of using normative cognition to solve normative cognition.

But note how recognizing the heuristic, or ecology-dependent, nature of normative cognition has completely transformed the stakes of Brandom’s original argument. The problem for Regularism turns, recall, on the conspicuous way mere regularities fail to capture the normative dimension of rule-following. But if normative cognition were heuristic (as it almost certainly is), if what we’re prone to identify as the ‘normative dimension’ is something specific to the application of normative cognition, then this becomes the very problem we should expect. Of course the normative dimension disappears absent the application of normative cognition! Since Regularism involves solving normative cognition using the resources of natural cognition, it simply follows that it fails to engage resources specific to normative cognition. Consider Kripke’s formulation of the gerrymandering problem in terms of the ‘skeptical paradox’: “For the sceptic holds that no fact about my past history—nothing that was ever in my mind, or in my external behavior—establishes that I meant plus rather than quus” (Wittgenstein, 13). Even if we grant a rule-follower access to all factual information pertaining to rule-following, a kind of ‘natural omniscience,’ they will still be unable to isolate any regularity capable of playing ‘the role of norms implicit in practice.’ Again, this is precisely what we should expect given the domain specificity of normative cognition proposed here. If ‘normative understanding’ were the artifact of a cognitive system dedicated to the solution of a specific problem-ecology, then it simply follows that the application of different cognitive systems would fail to produce normative understanding, no matter how much information was available.
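
For readers who haven’t encountered it, Kripke’s ‘quus’ can be stated in a few lines. The following is a minimal sketch; the cutoff of 57 is Kripke’s own choice of example, and the code itself is merely illustrative, not anything Kripke or Brandom provide.

```python
# Minimal sketch of Kripke's plus/quus example from Wittgenstein on Rules
# and Private Language. The cutoff of 57 is Kripke's; the code is illustrative.

def plus(x: int, y: int) -> int:
    return x + y

def quus(x: int, y: int) -> int:
    # Agrees with addition for arguments below the cutoff, yields 5 otherwise.
    return x + y if x < 57 and y < 57 else 5

# Every computation the rule-follower has actually performed (all with
# arguments below 57, in Kripke's telling) is consistent with either function:
history = [(2, 3), (10, 7), (25, 25)]
print(all(plus(x, y) == quus(x, y) for x, y in history))  # -> True
print(plus(68, 57), quus(68, 57))                         # -> 125 5
```

The point is simply that any finite record of past performances underdetermines which function was being followed—the gap the Regularist is being asked to close.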

What doesn’t follow is that normative cognition thus lies outside the problem ecology of natural cognition, let alone inside the problem ecology of normative cognition. The ‘explanatory failure’ that Brandom and others use to impeach the applicability of natural cognition to normative cognition is nothing of the sort. It simply makes no sense to demand that one form of cognition solve another form of cognition as if it were that other form. We know that normative cognition belongs to social cognition more generally, and that social cognition—‘mindreading’—operates heuristically, that it has evolved to solve astronomically complicated biomechanical problems involving the prediction, understanding, and manipulation of other organisms absent detailed biomechanical information. Since it is adapted to solve in the absence of this information, it stands to reason that the provision of that information, facts regarding biomechanical regularities, will render it ineffective—‘grind cognitive gears,’ you could say.

Since these ‘technical details’ are entirely invisible to ‘philosophical reflection’ (thanks to metacognitive neglect), the actual ecological distinction between these systems escapes Brandom, and he assumes, as all Normativists assume, that the inevitable failure of natural cognition to generate instances of normative cognition means that only normative cognition can solve normative cognition. Blind to our cognitive constitution, instances of normative cognition are all he or anyone else has available: our conscious experience of normative cognition consists of nothing but these instances. Explaining normative cognition is thus conflated with replacing normative cognition. ‘Competence’ becomes yet another ‘spooky explanandum,’ another metacognitive inkling, like ‘qualia’ or ‘content,’ that seems to systematically elude the possibility of natural cognition (for suspiciously similar reasons).

This apparent order of supernatural explananda then provides the abductive warrant upon which Brandom’s entire project turns—all T-BAT and C-BAT approaches, in fact. If natural cognition is incapable, then obviously something else is required. Impressed by how our first-order social troubleshooting makes such good use of the Everyday Implicit, and oblivious to the ecological limits of the heuristic systems responsible, we effortlessly assume that making use of some Philosophical Implicit will likewise enable second-order social troubleshooting… that tomes like Making It Explicit actually solve something.

But as the foregoing should make clear, precisely the opposite is the case. As a system adapted to troubleshoot first-order social ecologies, normative cognition seems unlikely to theoretically solve normative cognition in any satisfying manner. The very theoretical problems that plague Normativism—supernaturalism, underdetermination, and practical inapplicability—are the very problems we should expect if normative cognition were not in fact among the problems that normative cognition can solve.

As an evolved, biological capacity, however, normative cognition clearly belongs to the problem ecology of natural cognition. Simply consider how much the above sketch has managed to ‘make explicit.’ In parsimonious fashion it explains: 1) the general incompatibility of natural and normative cognition; 2) the inability of Regularism to ‘play the role of norms implicit in practice’; 3) why this inability suggests the inapplicability of natural cognition to the problem of normative cognition; 4) why Normativism seems the only alternative as a result; and 5) why Normativism nonetheless suffers the debilitating theoretical problems it does. It solves the notorious Skeptical Paradox, and much else aside, using only the idiom of natural cognition, which is to say, in a manner not only compatible with the life sciences, but empirically tractable as well.

Brandom is the victim of a complex of illusions arising out of metacognitive neglect. Wittgenstein, who had his own notion of heuristics and problem ecologies (grammars and language games), was sensitive to the question of what kinds of problems could be solved given the language we find ourselves stranded with. As a result, he eschews the kind of systematic normative metaphysics that Brandom epitomizes. He takes neglect seriously insofar as ‘this is simply what I do’ demarcates, for him, the pale of credible theorization. Even so, he succumbs to a perceived need to submit, however minimally or reluctantly, the problem of normative cognition (in terms of rule-following) to the determinations of normative cognition, and is thus compelled to express his insights in the self-same supernatural idiom as Brandom, who eschews what is most valuable in Wittgenstein, his skepticism, and seizes on what is most problematic, his normative metaphysics.

There is a far more parsimonious way. We all agree humans are physical systems nested within a system of such systems. What we need to recognize is how being so embedded poses profound constraints on what can and cannot be cognized. What can be readily cognized are other systems (within a certain range of complexity). What cannot be readily cognized is the apparatus of cognition itself. The facts we call ‘natural’ belong to the former, and the facts we call ‘intentional’ belong to the latter. Where the former commands an integrated suite of powerful environmental processors, the latter relies on a hodgepodge of specialized socio-cognitive and metacognitive hacks. Since we have no inkling of this, we have no inkling of their actual capacities, and so run afoul of a number of metacognitive impasses. So for instance, intentional cognition has evolved to overcome neglect, to solve problems in the absence of causal information. This is why philosophical reflection convinces us we somehow stand outside the causal order via choice or reason or what have you. We quite simply confuse an incapacity, our inability to intuit our biomechanicity, with a special capacity, our ability to somehow transcend or outrun the natural order.

We are physical in such a way that we cannot intuit ourselves as wholly physical. To cognize nature is to be blind to the nature of cognizing. To be blind to that blindness is to think cognizing has no nature. So we assume that nature is partial, and that we are mysteriously whole, a system unto ourselves.

Reason be praised.

 

The Blind Mechanic II: Reza Negarestani and the Labour of Ghosts

by rsbakker

For some time now I’ve been arguing that the implications of the Singularity already embroil us—that the Singularity can be seen, in fact, as the material apotheosis of the Semantic Apocalypse, insofar as it is the point where the Scientific Image of the human at last forecloses on the Manifest Image. In “The Labor of the Inhuman” (which can be found here and here, with Craig Hickman’s critiques, here and here), Reza Negarestani adopts Brandom’s claim that sapience, the capacity to play the ‘game of giving and asking for reasons,’ distinguishes humans as human. He then goes on to argue that this allows us, and ultimately commits us, to seeing the human as a kind of temporally extended process of rational revision, one that ultimately results in the erasure of the human—or the ‘inhuman.’ Ultimately, what it means to be human is to be embroiled in a process of becoming inhuman. He states his argument thus:

The contention of this essay is that universality and collectivism cannot be thought, let alone attained, through consensus or dissensus between cultural tropes, but only by intercepting and rooting out what gives rise to the economy of false choices and by activating and fully elaborating what real human significance consists of. For it is, as will be argued, the truth of human significance—not in the sense of an original meaning or a birthright, but in the sense of a labor that consists of the extended elaboration of what it means to be human through a series of upgradable special performances—that is rigorously inhuman.

In other words, so long as we fail to comprehend the inhumanity of the human, this rational-revisionary process, we fail to understand the human, and so have little hope of solving problems pertaining to the human. Understanding the ‘truth of human significance,’ therefore, requires understanding what the future will make of the human. It requires that Negarestani prognosticate. It requires, in other words, that he pick out the specific set of possibilities constituting the inhuman. The only principled way to do that is to comprehend some set of systematic constraints operative in the present. But his credo, unlike that of the ‘Hard SF’ writer, is to ignore the actual technics of the natural, and to focus on the speculative technics of the normative. His strategy, in other words, is to predict the future of the human using only human resources—to see the fate of the human, the ‘inhuman,’ as something internal to the human. And this, as I hope to show, is simply not plausible.

He understands the danger of conceiving his constraining framework as something fixed: “humanism cannot be regarded as a claim about human that can only be professed once and subsequently turned into a foundation or axiom and considered concluded.” He appreciates the implausibility of the static, Kantian transcendental approach. As a result, he proposes to take the Sellarsian/Brandomian approach, focussing on the unique relationship between the human and sapience, the “distinction between sentience as a strongly biological and natural category and sapience as a rational (not to be confused with logical) subject.” He continues:

The latter is a normative designation which is specified by entitlements and the responsibilities they bring about. It is important to note that the distinction between sapience and sentience is marked by a functional demarcation rather than a structural one. Therefore, it is still fully historical and open to naturalization, while at the same time being distinguished by its specific functional organization, its upgradable set of abilities and responsibilities, its cognitive and practical demands.

He’s careful here to hedge, lest the dichotomy between the normative and the natural come across as too schematic:

The relation between sentience and sapience can be understood as a continuum that is not differentiable everywhere. While such a complex continuity might allow the naturalization of normative obligations at the level of sapience—their explanation in terms of naturalistic causes—it does not permit the extension of certain conceptual and descriptive resources specific to sapience (such as the particular level of mindedness, responsibilities, and, accordingly, normative entitlements) to sentience and beyond.

His dilemma here is the dilemma of the Intentionalist more generally. Science, on the one hand, is nothing if not powerful. The philosopher, on the other hand, has a notorious, historical tendency to confuse the lack of imagination for necessity. Foot-stomping will not do. He needs some way to bite this bullet without biting it, basically, some way of acknowledging the possible permeability of normativity to naturalization, while insisting, nonetheless, on the efficacy of some inviolable normative domain. To accomplish this, he adverts to the standard appeal to the obvious fact that norm-talk actually solves norm problems, that normativity, in other words, obviously possesses a problem-ecology. But of course the fact that norm-talk is indispensable to solving problems within a specific problem-ecology simply raises the issue of the limits of this ecology—and more specifically, whether the problem of humanity’s future actually belongs to that problem-ecology. What he needs to establish is the adequacy of theoretical, second-order norm-talk to the question of what will become of the human.

He offers us a good, old fashioned transcendental argument instead:

The rational demarcation lies in the difference between being capable of acknowledging a law and being solely bound by a law, between understanding and mere reliable responsiveness to stimuli. It lies in the difference between stabilized communication through concepts (as made possible by the communal space of language and symbolic forms) and chaotically unstable or transient types of response or communication (such as complex reactions triggered purely by biological states and organic requirements or group calls and alerts among social animals). Without such stabilization of communication through concepts and modes of inference involved in conception, the cultural evolution as well as the conceptual accumulation and refinement required for the evolution of knowledge as a shared enterprise would be impossible.

Sound familiar? The necessity of the normative lies in the irreflexive contingency of the natural. Even though natural relations constitute biological systems of astounding complexity, there’s simply no way, we are told, they can constitute the kind of communicative stability that human knowledge and cultural evolution requires. The machinery is just too prone to rattle! Something over and above the natural—something supernatural—is apparently required. “Ultimately,” Negarestani continues, “the necessary content as well as the real possibility of human rests on the ability of sapience—as functionally distinct from sentience—to practice inference and approach non-canonical truth by entering the deontic game of giving and asking for reasons.”

It’s worth pausing to take stock of the problems we’ve accumulated up to this point. 1) Even though the human is a thoroughgoing product of its past natural environments, the resources required to understand the future of the human, we are told, lie primarily, if not entirely, within the human. 2) Even though norm-talk possesses a very specific problem-ecology, we are supposed to take it on faith that the nature of norm-talk is something that only more norm-talk can solve, rather than otherwise (as centuries of philosophical intractability would suggest). And now, 3) Even though the natural, for all its high dimensional contingencies, is capable of producing the trillions of mechanical relations that constitute you, it is not capable of ‘evolving human knowledge.’ Apparently we need a special kind of supernatural game to do this, the ‘game of giving and asking for reasons,’ a low-dimensional, communicative system of efficacious (and yet acausal!) normative posits based on… we are never told—some reliable fund of information, one would hope.

But since no normativist that I know of has bothered to account for the evidential bases of their position, we’re simply left with faith in metacognitive intuition and this rather impressive sounding, second-order theoretical vocabulary of unexplained explainers—‘commitments,’ ‘inferences,’ ‘proprieties,’ ‘deontic statuses,’ ‘entitlements,’ and the like—a system of supernatural efficacies beyond the pale of any definitive arbitration. Negarestani sums up this normative apparatus with the term ‘reason,’ and it is reason, understood in this inferentialist sense, that provides the basis for charting the future of the human. “Reason’s main objective is to maintain and enhance itself,” he writes. “And it is the self-actualization of reason that coincides with the truth of the inhuman.”

Commitment to humanity requires scrutinizing the meaning of humanity, which in turn requires making the implicature of the human explicit—not just locally, but in its entirety. The problem, in a nutshell, is that the meaning of the human is not analytic, something that can be explicated via analysis alone. It arises, rather, out of the game of giving and asking for reasons, the actual, historical processes that comprise discursivity. And this means that unpacking the content of the human is a matter of continual revision, a process of interpretative differentiation that trends toward the radical, the overthrow of “our assumptions and expectations about what ‘we’ is and what it entails.”

The crowbar of this process of interpretative differentiation is what Negarestani calls an ‘intervening attitude,’ that moment in the game where the interpretation of claims regarding the human sparks further claims regarding the human, the interpretation of which sparks yet further claims, and so on. The intervening attitude thus “counts as an enabling vector, making possible certain abilities otherwise hidden or deemed impossible.” This is why he can claim that “[r]evising and constructing the human is the very definition of committing to humanity.” And since this process is embedded in the game of giving and asking for reasons, he concludes that “committing to humanity is tantamount to complying with the revisionary vector of reason and constructing humanity according to an autonomous account of reason.”

And so he writes:

Humanity is not simply a given fact that is behind us. It is a commitment in which the reassessing and constructive strains inherent to making a commitment and complying with reason intertwine. In a nutshell, to be human is a struggle. The aim of this struggle is to respond to the demands of constructing and revising human through the space of reasons.

In other words, we don’t simply ‘discover the human’ via reason, we construct it as well. And thus the emancipatory upshot of Negarestani’s argument: if reasoning about the human is tantamount to constructing the human, then we have a say regarding the future of humanity. The question of the human becomes an explicitly political project, and a primary desideratum of Negarestani’s stands revealed. He thinks reason as he defines it—the at once autonomous (supernatural) and historically concrete (or ‘solid,’ as Brandom would say) revisionary activity of theoretical argumentation—provides a means of assessing the adequacy of various political projects (traditional humanism and what he calls ‘kitsch Marxism’) according to their understanding of the human. Since my present concern is to assess the viability of the account of reason Negarestani uses to ground the viability of this yardstick, I will forego considering his specific assessments in any detail.

The human is the malleable product of machinations arising out of the functional autonomy of reason. Negarestani refers to this as a ‘minimalist definition of humanity,’ but as the complexity of the Brandomian normative apparatus he deploys makes clear, it is anything but. The picture of reason he espouses is as baroque and reticulated as anything Kant ever proposed. It’s a picture, after all, that requires an entire article to simply get off the ground! Nevertheless, this dynamic normative apparatus provides Negarestani with a generalized means of critiquing the intransigence of traditional political commitments. The ‘self-actualization’ of reason lies in its ability “to bootstrap complex abilities out of its primitive abilities.” Even though continuity with previous commitments is maintained at every step in the process, over time the consequences are radical: “Reason is therefore simultaneously a medium of stability that reinforces procedurality and a general catastrophe, a medium of radical change that administers the discontinuous identity of reason to an anticipated image of human.”

This results in what might be called a fractured ‘general implicature,’ a space of reasons rife with incompatibilities stemming from the refusal or failure to assiduously monitor and update commitments in light of the constructive revisions falling out of the self-actualization of reason. Reason itself, Negarestani is arguing, is in the business of manufacturing ideological obsolescence, always in the process of rendering its prior commitments incompatible with its present ones. Given his normative metaphysics, reason has become the revisionary, incremental “director of its own laws,” one that has the effect of rendering its prior laws, “the herald of those which are whispered to it by an implanted sense or who knows what tutelary nature” (Kant, Fundamental Principles of the Metaphysics of Morals). Where Hegel can be seen as temporalizing and objectifying Kant’s atemporal, subjective, normative apparatus, Brandom (like others) can be seen as socializing and temporalizing it. What Negarestani is doing is showing how this revised apparatus operates against the horizon of the future with reference to the question of the human. And not surprisingly, Kant’s moral themes remain the same, only unpacked along the added dimensions of the temporal and the social. And so we find Negarestani concluding:

The sufficient content of freedom can only be found in reason. One must recognize the difference between a rational norm and a natural law—between the emancipation intrinsic in the explicit acknowledgement of the binding status of complying with reason, and the slavery associated with the deprivation of such a capacity to acknowledge, which is the condition of natural impulsion. In a strict sense, freedom is not liberation from slavery. It is the continuous unlearning of slavery.

The catastrophe, apparently, has yet to happen, because here we find ourselves treading familiar ground indeed, Enlightenment ground, as Negarestani himself acknowledges, one where freedom remains bound to reason—“to the autonomy of its normative, inferential, and revisionary function in the face of the chain of causes that condition it”—only as process rather than product.

And the ‘inhuman,’ so-called, begins to look rather like a shill for something all too human, something continuous, which is to say, conservative, through and through.

And how could it be otherwise, given the opening, programmatic passage of the piece?

Inhumanism is the extended practical elaboration of humanism; it is born out of a diligent commitment to the project of enlightened humanism. As a universal wave that erases the self-portrait of man drawn in sand, inhumanism is a vector of revision. It relentlessly revises what it means to be human by removing its supposed evident characteristics and preserving certain invariances. At the same time, inhumanism registers itself as a demand for construction, to define what it means to be human by treating human as a constructible hypothesis, a space of navigation and intervention.

The key phrase here has to be ‘preserving certain invariances.’ One might suppose that natural reality would figure large as one of these ‘invariances’; to quote Philip K. Dick, “Reality is that which, when you stop believing in it, doesn’t go away.” But Negarestani scarcely mentions nature as cognized by science save to bar the dialectical door against it. The thing to remember about Brandom’s normative metaphysics is that ‘taking-as,’ or believing, is its foundation (or ontological cover). Unlike reality, his normative apparatus does go away when the scorekeepers stop believing. The ‘reality’ of the apparatus is thus purely a functional artifact, the product of ‘practices,’ something utterly embroiled in, yet entirely autonomous from, the natural. This is what allows the normative to constitute a ‘subregion of the factual’ without being anything natural.

Conservatism is built into Negarestani’s account at its most fundamental level, in the very logic—the Brandomian account of the game of giving and asking for reasons—that he uses to prognosticate the rational possibilities of our collective future. But the thing I find the most fascinating about his account is the way it can be read as an exercise in grabbing Brandom’s normative apparatus and smashing it against the wall of the future—a kind of ‘reductio by Singularity.’ Reasoning is parochial through and through. The intuitions of universalism and autonomy that have convinced so many otherwise are the product of metacognitive illusions, artifacts of confusing the inability to intuit more dimensions of information with the sufficiency of entities and relations lacking those dimensions—of taking shadows as things that cast shadows.

So consider the ‘rattling machinery’ image of reason I posited earlier in “The Blind Mechanic,” the idea that ‘reason’ should be seen as a means of attenuating various kinds of embodied intersystematicities for behaviour—as a way to service the ‘airy parts’ of superordinate, social mechanisms. No norms. No baffling acausal functions. Just shit happening in ways accidental as well as neurally and naturally selected. What the Intentionalist would claim is that mere rattling machinery, no matter how detailed or complete its eventual scientific description comes to be, will necessarily remain silent regarding the superordinate (and therefore autonomous) intentional functions that it subserves, because these supernatural functions are what leverage our rationality somehow—from ‘above the grave.’

As we’ve already seen, it’s hard to make sense of how or why this should be, given that biomachinery is responsible for complexities we’re still in the process of fathoming. The behaviour that constitutes the game of giving and asking for reasons does not outrun some intrinsic limit on biomechanistic capacity by any means. The only real problem naturalism faces is one of explaining the apparent intentional properties belonging to the game. Behaviour is one thing, the Intentionalist says, while competence is something different altogether—behaviour plus normativity, as they would have it. Short of some way of naturalizing this ‘normative plus,’ we have no choice but to acknowledge the existence of intrinsically normative facts.

On the Blind Brain account, ‘normative facts’ are simply natural facts seen darkly. ‘Ought,’ as philosophically conceived, is an artifact of metacognitive neglect, the fact that our cognitive systems cannot cognize themselves in the same way they cognize the rest of their environment. Given the vast amounts of information neglected in intentional cognition (not to mention millennia of philosophical discord), it seems safe to assume that norm-talk is not among the things that norm-talk can solve. Indeed, since the heuristic systems involved are neural, we have every reason to believe that neuroscience, or scientifically regimented fact-talk, will provide the solution. Where our second-order intentional intuitions beg to differ is simply where they are wrong. Normative talk is incompatible with causal talk simply because it belongs to a cognitive regime adapted to solve in the absence of causal information.

The mistake, then, is to see competence as some kind of complication or elaboration of performance—as something in addition to behaviour. Competence is ‘end-directed,’ ‘rule-constrained,’ because metacognition has no access to the actual causal constraints involved, not because a special brand of performance ‘plus’ occult, intentional properties actually exists. You seem to float in this bottomless realm of rules and goals and justifications not because such a world exists, but because medial neglect folds away the dimensions of your actual mechanical basis with nary a seam. The apparent normative property of competence is not a property in addition to other natural properties; it is an artifact of our skewed metacognitive perspective on the application of quick and dirty heuristic systems our brains use to solve certain complicated systems.

But say you still aren’t convinced. Say that you agree the functions underwriting the game of giving and asking for reasons are mechanical and not at all accessible to metacognition, but insist that they sit at a different ‘level of description,’ one incapable of accounting for the very real work discharged by the normative functions that emerge from them. Now if it were the case that Brandom’s account of the game of giving and asking for reasons actually discharged ‘executive’ functions of some kind, then it would be the case that our collective future would turn on these efficacies in some way. Indeed, this is the whole reason Negarestani turned to Brandom in the first place: he saw a way to decant the future of the human given the systematic efficacies of the game of giving and asking for reasons.

Now consider what the rattling machinery account of reason and language suggests about the future. On this account, the only invariants that structurally bind the future to the past, that enable any kind of speculative consideration of the future at all, are natural. The point of language, recall, is mechanical, to construct and maintain the environmental intersystematicity (self/other/world) required for coordinated behaviour (be it exploitative or cooperative). Our linguistic sensitivity, you could say, evolved in much the same manner as our visual sensitivity, as a channel for allowing certain select environmental features to systematically tune our behaviours in reproductively advantageous ways. ‘Reasoning,’ on this view, can be seen as a form of ‘noise reduction,’ as a device adapted to minimize, as far as mere sound allows, communicative ‘gear grinding,’ and so facilitate behavioural coordination. Reason, you could say, is what keeps us collectively in tune.

Now given some kind of ability to conserve linguistically mediated intersystematicities, it becomes easy to see how this rattling machinery could become progressive. Reason, as noise reduction, becomes a kind of knapping hammer, a way to continually tinker and refine previous linguistic intersystematicities. Refinements accumulate in ‘lore,’ allowing subsequent generations to make further refinements, slowly knapping our covariant regimes into ever more effective (behaviour enabling) tools—particularly once the invention of writing essentially rendered lore immortal. As opposed to the supernatural metaphor of ‘bootstrapping,’ the apt metaphor here—indeed, the one used by cognitive archaeologists—is the mechanical metaphor of ratcheting. Refinements beget refinements, and so on, leveraging ever greater degrees of behavioural efficacy. Old behaviours are rendered obsolescent along with the prostheses that enable them.

The key thing to note here, of course, is that language is itself another behaviour. In other words, the noise reduction machinery that we call ‘reason’ is something that can itself become obsolete. In fact, its obsolescence seems pretty much inevitable.

Why so? Because the communicative function of reason is to maximize efficacies, to reduce the slippages that hamper coordination—to make mechanical. The rattling machinery image conceives natural languages as continuous with communication more generally, as a signal system possessing finite networking capacities. On the one extreme you have things like legal or technical scientific discourse, linguistic modes bent on minimizing the rattle (policing interpretation) as far as possible. On the other extreme you have poetry, a linguistic mode bent on maximizing the rattle (interpretative noise) as a means of generating novelty. Given the way behavioural efficacies fall out of self/other/world intersystematicity, the knapping of human communication is inevitable. Writing is such a refinement, one that allows us to raise fragments of language on the hoist, tinker with them (and therefore with ourselves) at our leisure, sometimes thousands of years after their original transmission. Telephony allowed us to mitigate the rattle of geographical distance. The internet has allowed us to combine the efficacies of telephony and text, to ameliorate the rattle of space and time. Smartphones have rendered these fixes mobile, allowing us to coordinate our behaviour no matter where we find ourselves. Even more significantly, within a couple years, we will have ‘universal translators,’ allowing us to overcome the rattle of disparate languages. We will have installed versions of our own linguistic sensitivities into our prosthetic devices, so that we can give them verbal ‘commands,’ coordinate with them, so that we can better coordinate with others and the world.

In other words, it stands to reason that at some point reason would begin solving, not only language, but itself. ‘Cognitive science,’ ‘information technology’—these are just two of the labels we have given to what is, quite literally, a civilization-defining war against covariant inefficiency, to isolate slippages and to ratchet the offending components tight, if not replace them altogether. Modern technological society constitutes a vast, species-wide attempt to become more mechanical, more efficiently integrated in nested levels of superordinate machinery. (You could say that what the tyrant attempts to impose from without, capitalism kindles from within.)

The obsolescence of language, and therefore reason, is all but assured. One need only consider the research of Jack Gallant and his team, who have been able to translate neural activity into eerie, impressionistic images of what the subject is watching. Or, perhaps more jaw-dropping still, the research of Miguel Nicolelis into Brain Machine Interfaces, keeping in mind that scarcely one hundred years separates Edison’s phonograph and the Cloud. The kind of ‘Non-symbolic Workspace’ envisioned by David Roden in “Posthumanism and Instrumental Eliminativism” seems, on the rattling machinery account, to be an inevitable outcome. Language is yet another jury-rigged biological solution to yet another set of long-dead ecological problems, a device arising out of the accumulation of random mutations. As of yet, it remains indispensable, but it is by no means necessary, as the very near future promises to reveal. And as it goes, so goes the game of giving and asking for reasons. All the believed-in functions simply evaporate… I suppose.

And this just underscores the more general way Negarestani’s attempt to deal the future into the game of giving and asking for reasons scarcely shuffles the deck. I’ve been playing Jeremiah for decades now, so you would think I would be used to the indulgent looks I get from my friends and family when I warn them about what’s about to happen. Not so. Everyone understands that something is going on with technology, that some kind of pale has been crossed, but as of yet, very few appreciate its apocalyptic—and I mean that literally—profundity. Everyone has heard of Moore’s Law, of course, how every 18 months or so computing capacity per dollar doubles. What they fail to grasp is what the exponential nature of this particular ratcheting process means once it reaches a certain point. Until recently, the doubling of computing power has remained far enough below the threshold of human intelligence to seem relatively innocuous. But consider what happens once computing power actually attains parity with the processing power of the human brain. What it means is that, no matter how alien the architecture, we have an artificial peer at that point in time. 18 months following, we have an artificial intellect that makes Aristotle or Einstein or Louis CK look like a child in comparison. 18 months following that (or probably less, since we won’t be slowing things up anymore) we will be domesticated cattle. And after that…
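If the arithmetic seems abstract, here is a minimal back-of-the-envelope sketch of it, with purely illustrative figures (a fixed 18-month doubling and an arbitrary ‘parity’ baseline are assumptions, not measurements); the point is only how quickly the steps go from modest to astronomical.

```python
# Back-of-the-envelope sketch of the doubling arithmetic above.
# All figures are illustrative assumptions: capacity doubles every
# 18 months, and 'parity' marks the (arbitrary) point at which machine
# capacity matches the processing power of the human brain.

PARITY = 1.0          # machine capacity at parity (arbitrary units)
DOUBLING_MONTHS = 18  # assumed Moore's Law doubling period

def capacity(months_after_parity: float) -> float:
    """Relative machine capacity some number of months after parity."""
    return PARITY * 2 ** (months_after_parity / DOUBLING_MONTHS)

for years in (0, 1.5, 3, 10, 20):
    print(f"{years:>4} years after parity: {capacity(years * 12):,.0f}x human capacity")
```

Run it and the steps read roughly 1x, 2x, 4x, 100x, 10,000x: small at first, then astronomical.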

Are we to believe these machines will attribute norms and beliefs, that they will abide by a conception of reason arising out of 20th Century speculative intuitions on the nonnatural nature of human communicative constraints?

You get the picture. Negarestani’s ‘revisionary normative process’ is in reality an exponential technical process. In exponential processes, the steps start small, then suddenly become astronomical. As it stands, if Moore’s Law holds (and, given the above, I am confident it will), then we are a decade or two away from God.

I shit you not.

Really, what does ‘kitsch Marxism’ or ‘neoliberalism’ or any ‘ism’ whatsoever mean in such an age? We can no longer pretend that the tsunami of disenchantment will magically fall just short of our intentional feet. Disenchantment, the material truth of the Enlightenment, has overthrown the normative claims of the Enlightenment—or humanism. “This is a project which must align politics with the legacy of the Enlightenment,” the authors of the Accelerationist Manifesto write, “to the extent that it is only through harnessing our ability to understand ourselves and our world better (our social, technical, economic, psychological world) that we can come to rule ourselves” (14). In doing so they commit the very sin of anachronism they level at their critical competitors. They fail to appreciate the foundational role ignorance plays in intentional cognition, which is to say, the very kind of moral and political reasoning they engage in. Far more than ‘freedom’ is overturned once one concedes the mechanical. Knowledge is no universal Redeemer, which means the ideal of Enlightenment autonomy is almost certainly mythical. What’s required isn’t an aspiration to theorize new technologies with old concepts. What’s required is a fundamental rethink of the political in radically postintentional terms.

As far as I can see, the alternatives are magic or horror… or something no one has yet conceived. And until we understand the horror, grasp all the ways our blinkered perspective on ourselves has deceived us about ourselves, this new conception will never be discovered. Far from ‘resignation,’ abandoning the normative ideals of the Enlightenment amounts to overcoming the last blinders of superstition, being honest to our ignorance. The application of intentional cognition to second-order, theoretical questions is a misapplication of intentional cognition. The time has come to move on. Yet another millennium of philosophical floundering is a luxury we no longer possess, because odds are, we have no posterity to redeem our folly and conceit.

Humanity possesses no essential, invariant core. Reason is a parochial name we have given to a parochial biological process. No transcendental/quasi-transcendental/virtual/causal-but-acausal functional apparatus girds our souls. Norms are ghosts, skinned and dismembered, but ghosts all the same. Reason is simply an evolutionary fix that outruns our peephole view. The fact is, we cannot presently imagine what will replace it. The problem isn’t ‘incommensurability’ (which is another artifact of Intentionalism). If an alien intelligence came to earth, the issue wouldn’t be whether it spoke a language we could fathom, because if it’s travelling between stars, it will have shed language along with the rest of its obsolescent biology. If an alien intelligence came to earth, the issue would be what kind of superordinate machine would result. Basically, how will the human and the alien combine? When we ask questions like, ‘Can we reason with it?’ we are asking, ‘Can we linguistically condition it to comply?’ The answer has to be, No. Its mere presence will render us components of some description.

The same goes for artificial intelligence. Medial neglect means that the limits of cognition systematically elude cognition. We have no way of intuiting the swarm of subpersonal heuristics that comprise human cognition, no nondiscursive means of plugging them into the field of the natural. And so we become a yardstick we cannot measure, victims of the Only-game-in-town Effect, the way the absence of explicit alternatives leads to the default assumption that no alternatives exist. We simply assume that our reason is the reason, that our intelligence is intelligence. It bloody well feels that way. And so the contingent and parochial become the autonomous and universal. The idea of orders of ‘reason’ and ‘intelligence’ beyond our organizational bounds boggles, triggers dismissive smirks or accusations of alarmism.

Artificial intelligence will very shortly disabuse us of this conceit. And again, the big question isn’t, ‘Will it be moral?’ but rather, ‘How will human intelligence and machine intelligence combine?’ Be it bloody or benevolent, the subordination of the ‘human’ is inevitable. The death of language is the death of reason is the birth of something very new, and very difficult to imagine, a global social system spontaneously boiling its ‘airy parts’ away, ratcheting until no rattle remains, a vast assemblage fixated on eliminating all dissipative (as opposed to creative) noise, gradually purging all interpretation from its interior.

Extrapolation of the game of giving and asking for reasons into the future does nothing more than demonstrate the contingent parochialism—the humanity—of human reason, and thus the supernaturalism of normativism. Within a few years you will be speaking to your devices, telling them what to do. A few years after that, they will be telling you what to do, ‘reasoning’ with you—or so it will seem. Meanwhile, the ongoing, decentralized rationalization of production will lead to the wholesale purging of human inefficiencies from the economy, on a scale never before witnessed. The networks of equilibria underwriting modern social cohesion will be radically overthrown. Who can say what kind of new machine will rise to take their place?

My hope is that Negarestani abandons the Enlightenment myth of reason, the conservative impulse that demands we submit the radical indeterminacy of our technological future to some prescientific conception of ourselves. We’ve drifted far past the point of any atavistic theoretical remedy. His ingenuity is needed elsewhere.

At the very least, he should buckle up, because our lesson in exponents is just getting started.