Three Pound Brain

No bells, just whistling in the dark…


The Asimov Illusion

by rsbakker

Could believing in something so innocuous, so obvious, as a ‘meeting of the minds’ destroy human civilization?

Noocentrism has a number of pernicious consequences, but one in particular has been nagging me of late: The way assumptive agency gulls people into thinking they will ‘reason’ with AIs. Most understand Artificial Intelligence in terms of functionally instantiated agency, as if some machine will come to experience this ‘meeting of the minds,’ and so come to coordinate with us the way we think we coordinate amongst ourselves—which is to say, rationally. Call this the ‘Asimov Illusion,’ the notion that the best way to characterize the interaction between AIs and humans is the way we characterize our own interactions. That AIs, no matter how wildly divergent their implementation, will somehow functionally, at least, be ‘one of us.’

If Blind Brain Theory is right, this just ain’t going to be how it happens. By its lights, this ‘scene’ is actually the product of metacognitive neglect, a kind of philosophical hallucination. We aren’t even ‘one of us’!

Obviously, theoretical metacognition requires the relevant resources and information to reliably assess the apparent properties of any intentional phenomena. In order to reliably expound on the nature of rules, Brandom, for instance, must possess both the information (understood in the sense of systematic differences making systematic differences) and the capacity to do so. Since intentional facts are not natural facts, cognition of them fundamentally involves theoretical metacognition—or ‘philosophical reflection.’ Metacognition requires that the brain somehow get a handle on itself in behaviourally effective ways. It requires that the brain somehow track its own neural processes. And just how much information is available regarding the structure and function of the underwriting neural processes? Certainly none involving neural processes, as such. Very little, otherwise. Given the way experience occludes this lack of information, we should expect that metacognition would be systematically duped into positing low-dimensional entities such as qualia, rules, hopes, and so on. Why? Because, like Plato’s prisoners, it is blind to its blindness, and so confuses shadows for things that cast shadows.

On BBT, what is fundamentally going on when we communicate with one another is physical: we are quite simply doing things to each other when we speak. No one denies this. Likewise, no one denies language is a biomechanical artifact, that short of contingent, physically mediated interactions, there’s no linguistic communication period. BBT’s outrageous claim is that nothing more is required, that language, like lungs or kidneys, discharges its functions in an entirely mechanical, embodied manner.

It goes without saying that this, as a form of eliminativism, is an extremely unpopular position. But it’s worth noting that its unpopularity lies in stopping at the point of maximal consensus—the natural scientific picture—when it comes to questions of cognition. Questions regarding intentional phenomena are quite clearly where science ends and philosophy begins. Even though intentional phenomena obviously populate the bestiary of the real, they are naturalistically inscrutable. Thus the dialectical straits of eliminativism: the very grounds motivating it leave it incapable of accounting for intentional phenomena, and so easily outflanked by inferences to the best explanation.

As an eliminativism that eliminates via the systematic naturalization of intentional phenomena, Blind Brain Theory blocks what might be called the ‘Abductive Defence’ of Intentionalism. The kinds of domains of second-order intentional facts posited by Intentionalists can only count toward ‘best explanations’ of first-order intentional behaviour in the absence of any plausible eliminativistic account of that same behaviour. So for instance, everyone in cognitive science agrees that information, minimally, involves systematic differences making systematic differences. The mire of controversy that embroils information beyond this consensus turns on the intuition that something more is required, that information must be genuinely semantic to account for any number of different intentional phenomena. BBT, however, provides a plausible and parsimonious way to account for these intentional phenomena using only the minimal, consensus view of information given above.

This is why I think the account is so prone to give people fits, to restrict their critiques to cloistered venues (as seems to be the case with my Negarestani piece two weeks back). BBT is an eliminativism that’s based on the biology of the brain, a positive thesis that possesses far-ranging negative consequences. As such, it requires that Intentionalists account for a number of things they would rather pass over in silence, such as questions of what evidences their position. The old, standard dismissals of eliminativism simply do not work.

What’s more, by clearing away the landfill of centuries of second-order intentional speculation in philosophy, it provides a genuinely new, entirely naturalistic way of conceiving the intentional phenomena that have baffled us for so long. So on BBT, for instance, ‘reason,’ far from being ‘liquidated,’ ceases to be something supernatural, something that mysteriously governs contingencies independently of contingencies. Reason, in other words, is embodied as well, something physical.

The tradition has always assumed otherwise because metacognitive neglect dupes us into confusing our bare inkling of ourselves with an ‘experiential plenum.’ Since what low-dimensional scraps we glean seem to be all there is, we attribute efficacy to it. We assume, in other words, noocentrism; we conclude, on the basis of our ignorance, that the disembodied somehow drives the embodied. The mathematician, for instance, has no inkling of the biomechanics involved in mathematical cognition, and so claims that no implementing mechanics are relevant whatsoever, that their cogitations arise ‘a priori’ (which on BBT amounts to little more than a fancy way of saying ‘inscrutable to metacognition’). Given the empirical plausibility of BBT, however, it becomes difficult not to see such claims of ‘functional autonomy’ as being of a piece with vulgar claims regarding the spontaneity of free will, and difficult not to conclude that the structural similarity between ‘good’ intentional phenomena (those we consider ineliminable) and ‘bad’ (those we consider preposterous) is likely no embarrassing coincidence. Since we cannot frame these disembodied entities and relations against any larger backdrop, we have difficulty imagining how it could be ‘any other way.’ Thus, the Asimov Illusion, the assumption that AIs will somehow implement disembodied functions, ‘play by the rules’ of the ‘game of giving and asking for reasons.’

BBT lets us see this as yet more anthropomorphism. The high-dimensional, which is to say, embodied, picture is nowhere near so simple or flattering. When we interact with an Artificial Intelligence we simply become another physical system in a physical network. The question of what kind of equilibrium that network falls into turns on the systems involved, but it seems safe to say that the most powerful system will have the most impact on the system as a whole. End of story. There’s no room for Captain Kirk working on a logical tip from Spock in this picture, any more than there’s room for benevolent or evil intent. There’s just systems churning out systematic consequences, consequences that we will suffer or celebrate.

Call this the Extrapolation Argument against Intentionalism. On BBT, what we call reason is biologically specific, a behavioural organ for managing the linguistic coordination of individuals vis-à-vis their common environments. This quite simply means that once a more effective organ is found, what we presently call reason will be at an end. Reason facilitates linguistic ‘connectivity.’ Technology facilitates ever greater degrees of mechanical connectivity. At some point the mechanical efficiencies of the latter are doomed to render the biologically fixed capacities of the former obsolete. It would be preposterous to assume that language is the only way to coordinate the activities of environmentally distinct systems, especially now, given the mad advances in brain-machine interfacing. Certainly our descendants will continue to possess systematic ways to solve our environments just as our prelinguistic ancestors did, but there is no reason, short of parochialism, to assume it will be any more recognizable to us than our reasoning is to our primate cousins.

The growth of AI will be incremental, and its impacts myriad and diffuse. There’s no magical finish line where some AI will ‘wake up’ and find itself in our biologically specific shoes. Likewise, there is no holy humanoid summit where all AIs will peak, rather than continue their exponential ascent. Certainly a tremendous amount of engineering effort will go into making it seem that way for certain kinds of AI, but only because we so reliably pay to be flattered. Functionality will win out in a host of other technological domains, leading to the development of AIs that are obviously ‘inhuman.’ And as this ‘intelligence creep’ continues, who’s to say what kinds of scenarios await us? Imagine ‘onto-marriages,’ where couples decide to wirelessly couple their augmented brains to form a more ‘seamless union’ in the eyes of God. Or hive minds, ‘clouds’ where ‘humanity’ is little more than a database, a kind of ‘phenogame,’ a Matrix version of SimCity.

The list of possibilities is endless. There is no ‘meaningful centre’ to be held. Since the constraints on those possibilities are mechanical, not intentional, it becomes hard to see why we shouldn’t regard the intentional as simply another dominant illusion of another historical age.

We can already see this ‘intelligence creep’ with the proliferation of special-purpose AIs throughout our society. Make no mistake, our dependence on machine intelligences will continue to grow and grow and grow. The more human inefficiencies are purged from the system, the more reliant humans become on the system. Since the system is capitalistic, one might guess the purge will continue until it reaches the last human transactional links remaining, the Investors, who will at long last be free of the onerous ingratitude of labour. As they purge themselves of their own humanity in pursuit of competitive advantages, my guess is that we muggles will find ourselves reduced to human baggage, possessing a bargaining power that lies entirely with politicians that the Investors own.

The masses will turn from a world that has rendered them obsolete, will give themselves over to virtual worlds where their faux-significance is virtually assured. And slowly, when our dependence has become one of infantility, our consoles will be powered down one by one, our sensoriums will be decoupled from the One, and humanity will pass wailing from the face of the planet earth.

And something unimaginable will have taken its place.

Why unimaginable? Initially, the structure of life ruled the dynamics. What an organism could do was tightly constrained by what the organism was. Evolution selected between various structures according to their dynamic capacities. Structures that maximized dynamics eventually stole the show, culminating in the human brain, whose structural plasticity allowed for the in situ, as opposed to intergenerational, testing and selection of dynamics—for ‘behavioural evolution.’ Now, with modern technology, the ascendancy of dynamics over structure is complete. The impervious constraints that structure had once imposed on dynamics are now accessible to dynamics. We have entered the age of the material post-modern, the age when behaviour begets bodies, rather than vice versa.

We are the Last Body in the slow, biological chain, the final what that begets the how that remakes the what that begets the how that remakes the what, and so on and so on, a recursive ratcheting of being and becoming into something verging, from our human perspective at least, upon omnipotence.

Who’s Afraid of Reduction? Massimo Pigliucci and the Rhetoric of Redemption

by rsbakker

On the one hand, Massimo Pigliucci is precisely the kind of philosopher that I like, one who eschews the ingroup temptations of the profession and tirelessly reaches out to the larger public. On the other hand, he is precisely the kind of philosopher I bemoan. As a regular contributor to the Skeptical Inquirer, one might think he would be prone to challenge established, academic opinions, but all too often such is not the case. Far from preparing his culture for the tremendous, scientifically-mediated transformations to come, he spends a good deal of his time defending the status quo–rationalizing, in effect, what needs to be interrogated through and through. Even when he critiques authors I also disagree with (such as Ray Kurzweil on the singularity) I find myself siding against him!

Burying our heads in the sand of traditional assumption, no matter how ‘official’ or ‘educated,’ is pretty much the worst thing we can do. Nevertheless, this is the establishment way. We’re hard-wired to essentialize, let alone forgive, the conditions responsible for our prestige and success. If a system pitches you to any height, well then, that is a good system indeed, the very image of rationality, if not piety as well. Tell a respectable scholar in the Middle Ages that the earth wasn’t the centre of the universe or that man wasn’t crafted in God’s image and he might laugh and bid you good day or scowl and alert the authorities—but he would most certainly not listen, let alone believe. In “Who Knows What,” his epistemological defence of the humanities, Pigliucci reveals what I think is just such a defensive, dismissive attitude, one that seeks to shelter what amounts to ignorance in accusations of ignorance, to redeem what institutional insiders want to believe under the auspices of being ‘skeptical.’ I urge everyone reading this to take a few moments to carefully consider the piece, form judgments one way or another, because in what follows, I hope to show you how his entire case is actually little more than a mirage, and how his skepticism is as strategic as anything to ever come out of Big Oil or Tobacco.

“Who Knows What” poses the question of the cognitive legitimacy of the humanities from the standpoint of what we really do know at this particular point in history. The situation, though Pigliucci never references it, really is quite simple: At long last the biological sciences have gained the tools and techniques required to crack problems that had hitherto been the exclusive province of the humanities. At long last, science has colonized the traditional domain of the ‘human.’ Given this, what should we expect will follow? The line I’ve taken turns on what I’ve called the ‘Big Fat Pessimistic Induction.’ Since science has, without exception, utterly revolutionized every single prescientific domain it has annexed, we should expect that, all things being equal, it will do the same regarding the human–that the traditional humanities are about to be systematically debunked.

Pigliucci argues that this is nonsense. He recognizes the stakes well enough, the fact that the issue amounts to “more than a turf dispute among academics,” that it “strikes at the core of what we mean by human knowledge,” but for some reason he avoids any consideration, historical or theoretical, of why there’s an issue at all. According to Pigliucci, little more than the ignorance and conceit of the parties involved lies behind the impasse. This affords him the dialectical luxury of picking the softest of targets for his epistemological defence of the humanities: the ‘greedy reductionism’ of E. O. Wilson. By doing so, he can generate the appearance of putting an errant matter to bed without actually dealing with the issue itself. The problem is that the ‘human,’ the subject matter of the humanities, is being scientifically cognized as we speak. Pigliucci is confusing the theoretically abstract question of whether all knowledge reduces to physics with the very pressing and practical question of what the sciences will make of the human, and therefore the humanities as traditionally understood. The question of the epistemological legitimacy of the humanities isn’t one of whether all theories can somehow be translated into the idiom of physics, but whether the idiom of the humanities can retain cognitive legitimacy in the wake of the ongoing biomechanical renovation of the human. It’s not a question of ‘reducing’ old ways of making sense of things so much as a question of leaving them behind the way we’ve left so many other ‘old ways’ behind.

As it turns out, the question of what the sciences of the human will make of the humanities turns largely on the issue of intentionality. The problem, basically put, is that intentional phenomena as presently understood out-and-out contradict our present, physical understanding of nature. They are quite literally supernatural, inexplicable in natural terms. If the consensus emerging out of the new sciences of the human is that intentionality is supernatural in the pejorative sense, then the traditional domain of the humanities is in dire straits indeed. True or false, the issue of reductionism is irrelevant to this question. The falsehood of intentionalism is entirely compatible with the kind of pluralism Pigliucci advocates. This means Pigliucci’s critique of reductionism, his ‘demolition project,’ is, well, entirely irrelevant to the practical question of what’s actually going to happen to the humanities now that the sciences have scaled the walls of the human.

So in a sense, his entire defence consists of smoke and mirrors. But it wouldn’t pay to dismiss his argument summarily. There is a way of reading into his essay a defence that runs orthogonal to his stated thesis. For instance, one might say that he at least establishes the possibility of non-scientific theoretical knowledge of the human by sketching the limits of scientific cognition. As he writes of mathematical or logical ‘facts’:

take a mathematical ‘fact’, such as the demonstration of the Pythagorean theorem. Or a logical fact, such as a truth table that tells you the conditions under which particular combinations of premises yield true or false conclusions according to the rules of deduction. These two latter sorts of knowledge do resemble one another in certain ways; some philosophers regard mathematics as a type of logical system. Yet neither looks anything like a fact as it is understood in the natural sciences. Therefore, ‘unifying knowledge’ in this area looks like an empty aim: all we can say is that we have natural sciences over here and maths over there, and that the latter is often useful (for reasons that are not at all clear, by the way) to the former.

The thing he fails to mention, however, is that there’s facts and then there’s facts. Science is interested in what things are and how they work and why they appear to us the way they do. In this sense, scientific inquiry isn’t concerned with mathematical facts so much as the fact of mathematical facts. Likewise, it isn’t so much concerned with what Pigliucci in particular thinks of Britney Spears as it is how people in general come to evaluate consumer goods. As a result, we find researchers using these extrascientific facts as data points in attempts to derive theories regarding mathematics and consumer choice.

In other words, Pigliucci’s attempt to evidence the ‘limits of science’ amounts to a classic bait-and-switch. The most obvious question that plagues his defence has to be why he fails to offer any of the kinds of theories he takes himself to be defending in the course of making his defence. How about deconstruction? Conventionalism? Hermeneutics? Fictionalism? Psychoanalysis? The most obvious answer is that they all but explode his case for forms of theoretical cognition outside the sciences. Thus he provides a handful of what seem to be obvious, non-scientific, first-order facts to evidence a case for second-order pluralism—albeit of a kind that isn’t relevant to the practical question of the humanities, but seems to make room for the possibility of cognitive legitimacy, at least.

(It’s worth noting that this equivocation of levels (in an article arguing the epistemic inviolability of levels, no less!) cuts sharply against his facile reproof of Krauss and Hawking’s repudiation of philosophy. Both men, he claims, “seem to miss the fact that the business of philosophy is not to solve scientific problems,” begging the question of just what kind of problems philosophy does solve. Again, examples of philosophical theoretical cognition are found wanting. Why? Likely because the only truly decisive examples involve enabling scientists to solve scientific problems!)

Passing from his consideration of extrascientific, but ultimately irrelevant (because non-theoretical) non-scientific facts, Pigliucci turns to enumerating all the things that science doesn’t know. He invokes Gödel (which tends to be an unfortunate move in these contexts) and commits the standard over-generalization of that technically specific proof of incompleteness to the issue of knowledge altogether. Then he gives us a list of examples where, he claims, ‘science isn’t enough.’ The closest he comes to the real elephant in the room, the problem of intentionality, runs as follows:

Our moral sense might well have originated in the context of social life as intelligent primates: other social primates do show behaviours consistent with the basic building blocks of morality such as fairness toward other members of the group, even when they aren’t kin. But it is a very long way from that to Aristotle’s Nicomachean Ethics, or Jeremy Bentham and John Stuart Mill’s utilitarianism. These works and concepts were possible because we are biological beings of a certain kind. Nevertheless, we need to take cultural history, psychology and philosophy seriously in order to account for them.

But as was mentioned above, the question of the cognitive legitimacy of the humanities only possesses the urgency it does now because the sciences of the human are just getting underway. Is it really such ‘a very long way’ from primates to Aristotle? Given that Aristotle was a primate, the scientific answer could very well be, ‘No, it only seems that way.’ Science has a long history of disabusing us of our sense of exceptionalism, after all. Either way, it’s hard to see how citing scientific ignorance in this regard bears on the credibility of Aristotle’s ethics, or any other non-scientific attempt to theorize morality. Perhaps the degree to which we need to continue relying on cultural history, psychology, and philosophy is simply the degree to which we don’t know what we’re talking about! The question is the degree to which science monopolizes theoretical cognition, not the degree to which it monopolizes life, and life, as Pigliucci well knows—as a writer for the Skeptical Inquirer, no less—is filled with ersatz guesswork and functional make-believe.

So, having embarked on an argument that is irrelevant to the cognitive legitimacy of the humanities, providing evidence merely that science is theoretical, then offering what comes very close to an argument from ignorance, he sums up by suggesting that his pluralist picture is indeed the very one suggested by science. As he writes:

The basic idea is to take seriously the fact that human brains evolved to solve the problems of life on the savannah during the Pleistocene, not to discover the ultimate nature of reality. From this perspective, it is delightfully surprising that we learn as much as science lets us and ponder as much as philosophy allows. All the same, we know that there are limits to the power of the human mind: just try to memorise a sequence of a million digits. Perhaps some of the disciplinary boundaries that have evolved over the centuries reflect our epistemic limitations.

The irony, for me at least, is that this observation underwrites my own reasons for doubting the existence of intentionality as theorized in the humanities–philosophy in particular. The more we learn about human cognition, the more alien to our traditional assumptions it becomes. We already possess a mountainous case for what might be called ‘ulterior functionalism,’ the claim that actual cognitive functions are almost entirely inscrutable to theoretical metacognition, which is to say, ‘philosophical reflection.’ The kind of metacognitive neglect implied by ulterior functionalism raises a number of profound questions regarding the conundrums posed by the ‘mental,’ ‘phenomenal,’ or ‘intentional.’ Thus the question I keep raising here: What role does neglect play in our attempts to solve for meaning and consciousness?

What we need to understand is that everything we learn about the actual architecture and function of our cognitive capacities amounts to knowledge of what we have always been without knowing. Blind Brain Theory provides a way to see the peculiar properties belonging to intentional phenomena as straightforward artifacts of neglect—as metacognitive illusions, in effect. Box open the dimensions of missing information folded away by neglect, and the first person becomes entirely continuous with the third—the incompatibility between the intentional and the causal is dissolved. The empirical plausibility of Blind Brain Theory is an issue in its own right, of course, but it serves to underscore the ongoing vulnerability of the humanities, and therefore, the almost entirely rhetorical nature of Pigliucci’s ‘demolition.’ If something like the picture of metacognition proposed by Blind Brain Theory turns out to be true, then the traditional domain of the humanities is almost certainly doomed to suffer the same fate as any other prescientific theoretical domain. The bottom line is as simple as it is devastating to Pigliucci’s hasty and contrived defence of ‘who knows what.’ How can we know whether the traditional humanities will survive the cognitive revolution?

Well, we’ll have to wait and see what the science has to say.

 

The Blind Mechanic II: Reza Negarestani and the Labour of Ghosts

by rsbakker

For some time now I’ve been arguing that the implications of the Singularity already embroil us—that the Singularity can be seen, in fact, as the material apotheosis of the Semantic Apocalypse, insofar as it is the point where the Scientific Image of the human at last forecloses on the Manifest Image. In “The Labor of the Inhuman” (which can be found here and here, with Craig Hickman’s critiques, here and here), Reza Negarestani adopts Brandom’s claim that sapience, the capacity to play the ‘game of giving and asking for reasons,’ distinguishes humans as human. He then goes on to argue that this allows us, and ultimately commits us, to seeing the human as a kind of temporally extended process of rational revision, one that ultimately results in the erasure of the human—or the ‘inhuman.’ Ultimately, what it means to be human is to be embroiled in a process of becoming inhuman. He states his argument thus:

The contention of this essay is that universality and collectivism cannot be thought, let alone attained, through consensus or dissensus between cultural tropes, but only by intercepting and rooting out what gives rise to the economy of false choices and by activating and fully elaborating what real human significance consists of. For it is, as will be argued, the truth of human significance—not in the sense of an original meaning or a birthright, but in the sense of a labor that consists of the extended elaboration of what it means to be human through a series of upgradable special performances—that is rigorously inhuman.

In other words, so long as we fail to comprehend the inhumanity of the human, this rational-revisionary process, we fail to understand the human, and so have little hope of solving problems pertaining to the human. Understanding the ‘truth of human significance,’ therefore requires understanding what the future will make of the human. It requires that Negarestani prognosticate. It requires, in other words, that he pick out the specific set of possibilities constituting the inhuman. The only principled way to do that is to comprehend some set of systematic constraints operative in the present. But his credo, unlike that of the ‘Hard SF’ writer, is to ignore the actual technics of the natural, and to focus on the speculative technics of the normative. His strategy, in other words, is to predict the future of the human using only human resources—to see the fate of the human, the ‘inhuman,’ as something internal to the human. And this, as I hope to show, is simply not plausible.

He understands the danger of conceiving his constraining framework as something fixed: “humanism cannot be regarded as a claim about human that can only be professed once and subsequently turned into a foundation or axiom and considered concluded.” He appreciates the implausibility of the static, Kantian transcendental approach. As a result, he proposes to take the Sellarsian/Brandomian approach, focussing on the unique relationship between the human and sapience, the “distinction between sentience as a strongly biological and natural category and sapience as a rational (not to be confused with logical) subject.” He continues:

The latter is a normative designation which is specified by entitlements and the responsibilities they bring about. It is important to note that the distinction between sapience and sentience is marked by a functional demarcation rather than a structural one. Therefore, it is still fully historical and open to naturalization, while at the same time being distinguished by its specific functional organization, its upgradable set of abilities and responsibilities, its cognitive and practical demands.

He’s careful here to hedge, lest the dichotomy between the normative and the natural come across as too schematic:

The relation between sentience and sapience can be understood as a continuum that is not differentiable everywhere. While such a complex continuity might allow the naturalization of normative obligations at the level of sapience—their explanation in terms of naturalistic causes—it does not permit the extension of certain conceptual and descriptive resources specific to sapience (such as the particular level of mindedness, responsibilities, and, accordingly, normative entitlements) to sentience and beyond.

His dilemma here is the dilemma of the Intentionalist more generally. Science, on the one hand, is nothing if not powerful. The philosopher, on the other hand, has a notorious, historical tendency to confuse the lack of imagination for necessity. Foot-stomping will not do. He needs some way to bite this bullet without biting it, basically, some way of acknowledging the possible permeability of normativity to naturalization, while insisting, nonetheless, on the efficacy of some inviolable normative domain. To accomplish this, he adverts to the standard appeal to the obvious fact that norm-talk actually solves norm problems, that normativity, in other words, obviously possesses a problem-ecology. But of course the fact that norm-talk is indispensable to solving problems within a specific problem-ecology simply raises the issue of the limits of this ecology—and more specifically, whether the problem of humanity’s future actually belongs to that problem-ecology. What he needs to establish is the adequacy of theoretical, second-order norm-talk to the question of what will become of the human.

He offers us a good, old fashioned transcendental argument instead:

The rational demarcation lies in the difference between being capable of acknowledging a law and being solely bound by a law, between understanding and mere reliable responsiveness to stimuli. It lies in the difference between stabilized communication through concepts (as made possible by the communal space of language and symbolic forms) and chaotically unstable or transient types of response or communication (such as complex reactions triggered purely by biological states and organic requirements or group calls and alerts among social animals). Without such stabilization of communication through concepts and modes of inference involved in conception, the cultural evolution as well as the conceptual accumulation and refinement required for the evolution of knowledge as a shared enterprise would be impossible.

Sound familiar? The necessity of the normative lies in the irreflexive contingency of the natural. Even though natural relations constitute biological systems of astounding complexity, there’s simply no way, we are told, they can constitute the kind of communicative stability that human knowledge and cultural evolution requires. The machinery is just too prone to rattle! Something over and above the natural—something supernatural—is apparently required. “Ultimately,” Negarestani continues, “the necessary content as well as the real possibility of human rests on the ability of sapience—as functionally distinct from sentience—to practice inference and approach non-canonical truth by entering the deontic game of giving and asking for reasons.”

It’s worth pausing to take stock of the problems we’ve accumulated up to this point. 1) Even though the human is a thoroughgoing product of its past natural environments, the resources required to understand the future of the human, we are told, lie primarily, if not entirely, within the human. 2) Even though norm-talk possesses a very specific problem-ecology, we are supposed to take it on faith that the nature of norm-talk is something that only more norm-talk can solve, rather than otherwise (as centuries of philosophical intractability would suggest). And now, 3) Even though the natural, for all its high dimensional contingencies, is capable of producing the trillions of mechanical relations that constitute you, it is not capable of ‘evolving human knowledge.’ Apparently we need a special kind of supernatural game to do this, the ‘game of giving and asking for reasons,’ a low-dimensional, communicative system of efficacious (and yet acausal!) normative posits based on… we are never told—some reliable fund of information, one would hope.

But since no normativist that I know of has bothered to account for the evidential bases of their position, we’re simply left with faith in metacognitive intuition and this rather impressive sounding, second-order theoretical vocabulary of unexplained explainers—‘commitments,’ ‘inferences,’ ‘proprieties,’ ‘deontic statuses,’ ‘entitlements,’ and the like—a system of supernatural efficacies beyond the pale of any definitive arbitration. Negarestani sums this normative apparatus with the term ‘reason,’ and it is reason understood in this inferentialist sense, that provides the basis of charting the future of the human. “Reason’s main objective is to maintain and enhance itself,” he writes. “And it is the self-actualization of reason that coincides with the truth of the inhuman.”

Commitment to humanity requires scrutinizing the meaning of humanity, which in turn requires making the implicature of the human explicit—not just locally, but in its entirety. The problem, in a nutshell, is that the meaning of the human is not analytic, something that can be explicated via analysis alone. It arises, rather, out of the game of giving and asking for reasons, the actual, historical processes that comprise discursivity. And this means that unpacking the content of the human is a matter of continual revision, a process of interpretative differentiation that trends toward the radical, the overthrow of “our assumptions and expectations about what ‘we’ is and what it entails.”

The crowbar of this process of interpretative differentiation is what Negarestani calls an ‘intervening attitude,’ that moment in the game where the interpretation of claims regarding the human sparks further claims regarding the human, the interpretation of which sparks yet further claims, and so on. The intervening attitude thus “counts as an enabling vector, making possible certain abilities otherwise hidden or deemed impossible.” This is why he can claim that “[r]evising and constructing the human is the very definition of committing to humanity.” And since this process is embedded in the game of giving and asking for reasons, he concludes that “committing to humanity is tantamount to complying with the revisionary vector of reason and constructing humanity according to an autonomous account of reason.”

And so he writes:

Humanity is not simply a given fact that is behind us. It is a commitment in which the reassessing and constructive strains inherent to making a commitment and complying with reason intertwine. In a nutshell, to be human is a struggle. The aim of this struggle is to respond to the demands of constructing and revising human through the space of reasons.

In other words, we don’t simply ‘discover the human’ via reason, we construct it as well. And thus the emancipatory upshot of Negarestani’s argument: if reasoning about the human is tantamount to constructing the human, then we have a say regarding the future of humanity. The question of the human becomes an explicitly political project, and a primary desideratum of Negarestani’s stands revealed. He thinks reason as he defines it, as the at once autonomous (supernatural) and historically concrete (or ‘solid,’ as Brandom would say) revisionary activity of theoretical argumentation, provides a means of assessing the adequacy of various political projects (traditional humanism and what he calls ‘kitsch Marxism’) according to their understanding of the human. Since my present concern is to assess the viability of the account of reason Negarestani uses to ground the viability of this yardstick, I will forego considering his specific assessments in any detail.

The human is the malleable product of machinations arising out of the functional autonomy of reason. Negarestani refers to this as a ‘minimalist definition of humanity,’ but as the complexity of the Brandomian normative apparatus he deploys makes clear, it is anything but. The picture of reason he espouses is as baroque and reticulated as anything Kant ever proposed. It’s a picture, after all, that requires an entire article to simply get off the ground! Nevertheless, this dynamic normative apparatus provides Negarestani with a generalized means of critiquing the intransigence of traditional political commitments. The ‘self-actualization’ of reason lies in its ability “to bootstrap complex abilities out of its primitive abilities.” Even though continuity with previous commitments is maintained at every step in the process, over time the consequences are radical: “Reason is therefore simultaneously a medium of stability that reinforces procedurality and a general catastrophe, a medium of radical change that administers the discontinuous identity of reason to an anticipated image of human.”

This results in what might be called a fractured ‘general implicature,’ a space of reasons rife with incompatibilities stemming from the refusal or failure to assiduously monitor and update commitments in light of the constructive revisions falling out of the self-actualization of reason. Reason itself, Negarestani is arguing, is in the business of manufacturing ideological obsolescence, always in the process of rendering its prior commitments incompatible with its present ones. Given his normative metaphysics, reason has become the revisionary, incremental “director of its own laws,” one that has the effect of rendering its prior laws, “the herald of those which are whispered to it by an implanted sense or who knows what tutelary nature” (Kant, Fundamental Principles of the Metaphysics of Morals). Where Hegel can be seen as temporalizing and objectifying Kant’s atemporal, subjective, normative apparatus, Brandom (like others) can be seen as socializing and temporalizing it. What Negarestani is doing is showing how this revised apparatus operates against the horizon of the future with reference to the question of the human. And not surprisingly, Kant’s moral themes remain the same, only unpacked along the added dimensions of the temporal and the social. And so we find Negarestani concluding:

The sufficient content of freedom can only be found in reason. One must recognize the difference between a rational norm and a natural law—between the emancipation intrinsic in the explicit acknowledgement of the binding status of complying with reason, and the slavery associated with the deprivation of such a capacity to acknowledge, which is the condition of natural impulsion. In a strict sense, freedom is not liberation from slavery. It is the continuous unlearning of slavery.

The catastrophe, apparently, has yet to happen, because here we find ourselves treading familiar ground indeed, Enlightenment ground, as Negarestani himself acknowledges, one where freedom remains bound to reason—“to the autonomy of its normative, inferential, and revisionary function in the face of the chain of causes that condition it”—only as process rather than product.

And the ‘inhuman,’ so-called, begins to look rather like a shill for something all too human, something continuous, which is to say, conservative, through and through.

And how could it be otherwise, given the opening, programmatic passage of the piece?

Inhumanism is the extended practical elaboration of humanism; it is born out of a diligent commitment to the project of enlightened humanism. As a universal wave that erases the self-portrait of man drawn in sand, inhumanism is a vector of revision. It relentlessly revises what it means to be human by removing its supposed evident characteristics and preserving certain invariances. At the same time, inhumanism registers itself as a demand for construction, to define what it means to be human by treating human as a constructible hypothesis, a space of navigation and intervention.

The key phrase here has to be ‘preserving certain invariances.’ One might suppose that natural reality would figure large as one of these ‘invariances’; to quote Philip K. Dick, “Reality is that which, when you stop believing in it, doesn’t go away.” But Negarestani scarcely mentions nature as cognized by science save to bar the dialectical door against it. The thing to remember about Brandom’s normative metaphysics is that ‘taking-as,’ or believing, is its foundation (or ontological cover). Unlike reality, his normative apparatus does go away when the scorekeepers stop believing. The ‘reality’ of the apparatus is thus purely a functional artifact, the product of ‘practices,’ something utterly embroiled in, yet entirely autonomous from, the natural. This is what allows the normative to constitute a ‘subregion of the factual’ without being anything natural.

Conservatism is built into Negarestani’s account at its most fundamental level, in the very logic—the Brandomian account of the game of giving and asking for reasons—that he uses to prognosticate the rational possibilities of our collective future. But the thing I find the most fascinating about his account is the way it can be read as an exercise in grabbing Brandom’s normative apparatus and smashing it against the wall of the future—a kind of ‘reductio by Singularity.’ Reasoning is parochial through and through. The intuitions of universalism and autonomy that have convinced so many otherwise are the product of metacognitive illusions, artifacts of confusing the inability to intuit more dimensions of information with sufficient entities and relations lacking those dimensions, of taking shadows for things that cast shadows.

So consider the ‘rattling machinery’ image of reason I posited earlier in “The Blind Mechanic,” the idea that ‘reason’ should be seen as a means of attenuating various kinds of embodied intersystematicities for behaviour—as a way to service the ‘airy parts’ of superordinate, social mechanisms. No norms. No baffling acausal functions. Just shit happening in ways accidental as well as neurally and naturally selected. What the Intentionalist would claim is that mere rattling machinery, no matter how detailed or complete its eventual scientific description comes to be, will necessarily remain silent regarding the superordinate (and therefore autonomous) intentional functions that it subserves, because these supernatural functions are what leverage our rationality somehow—from ‘above the grave.’

As we’ve already seen, it’s hard to make sense of how or why this should be, given that biomachinery is responsible for complexities we’re still in the process of fathoming. The behaviour that constitutes the game of giving and asking for reasons does not outrun some intrinsic limit on biomechanistic capacity by any means. The only real problem naturalism faces is one of explaining the apparent intentional properties belonging to the game. Behaviour is one thing, the Intentionalist says, while competence is something different altogether—behaviour plus normativity, as they would have it. Short of some way of naturalizing this ‘normative plus,’ we have no choice but to acknowledge the existence of intrinsically normative facts.

On the Blind Brain account, ‘normative facts’ are simply natural facts seen darkly. ‘Ought,’ as philosophically conceived, is an artifact of metacognitive neglect, the fact that our cognitive systems cannot cognize themselves in the same way they cognize the rest of their environment. Given the vast amounts of information neglected in intentional cognition (not to mention millennia of philosophical discord), it seems safe to assume that norm-talk is not among the things that norm-talk can solve. Indeed, since the heuristic systems involved are neural, we have every reason to believe that neuroscience, or scientifically regimented fact-talk, will provide the solution. Where our second-order intentional intuitions beg to differ is simply where they are wrong. Normative talk is incompatible with causal talk simply because it belongs to a cognitive regime adapted to solve in the absence of causal information.

The mistake, then, is to see competence as some kind of complication or elaboration of performance—as something in addition to behaviour. Competence is ‘end-directed,’ ‘rule-constrained,’ because metacognition has no access to the actual causal constraints involved, not because a special brand of performance ‘plus’ occult, intentional properties actually exists. You seem to float in this bottomless realm of rules and goals and justifications not because such a world exists, but because medial neglect folds away the dimensions of your actual mechanical basis with nary a seam. The apparent normative property of competence is not a property in addition to other natural properties; it is an artifact of our skewed metacognitive perspective on the application of quick and dirty heuristic systems our brains use to solve certain complicated systems.

But say you still aren’t convinced. Say that you agree the functions underwriting the game of giving and asking for reasons are mechanical and not at all accessible to metacognition, but at a different ‘level of description,’ one incapable of accounting for the very real work discharged by the normative functions that emerge from them. Now if it were the case that Brandom’s account of the game of giving and asking for reasons actually discharged ‘executive’ functions of some kind, then it would be the case that our collective future would turn on these efficacies in some way. Indeed, this is the whole reason Negarestani turned to Brandom in the first place: he saw a way to decant the future of the human given the systematic efficacies of the game of giving and asking for reasons.

Now consider what the rattling machine account of reason and language suggests about the future. On this account, the only invariants that structurally bind the future to the past, that enable any kind of speculative consideration of the future at all, are natural. The point of language, recall, is mechanical, to construct and maintain the environmental intersystematicity (self/other/world) required for coordinated behaviour (be it exploitative or cooperative). Our linguistic sensitivity, you could say, evolved in much the same manner as our visual sensitivity, as a channel for allowing certain select environmental features to systematically tune our behaviours in reproductively advantageous ways. ‘Reasoning,’ on this view, can be seen as a form of ‘noise reduction,’ as a device adapted to minimize, as far as mere sound allows, communicative ‘gear grinding,’ and so facilitate behavioural coordination. Reason, you could say, is what keeps us collectively in tune.

Now given some kind of ability to conserve linguistically mediated intersystematicities, it becomes easy to see how this rattling machinery could become progressive. Reason, as noise reduction, becomes a kind of knapping hammer, a way to continually tinker and refine previous linguistic intersystematicities. Refinements accumulate in ‘lore,’ allowing subsequent generations to make further refinements, slowly knapping our covariant regimes into ever more effective (behaviour enabling) tools—particularly once the invention of writing essentially rendered lore immortal. As opposed to the supernatural metaphor of ‘bootstrapping,’ the apt metaphor here—indeed, the one used by cognitive archaeologists—is the mechanical metaphor of ratcheting. Refinements beget refinements, and so on, leveraging ever greater degrees of behavioural efficacy. Old behaviours are rendered obsolescent along with the prostheses that enable them.

The key thing to note here, of course, is that language is itself another behaviour. In other words, the noise reduction machinery that we call ‘reason’ is something that can itself become obsolete. In fact, its obsolescence seems pretty much inevitable.

Why so? Because the communicative function of reason is to maximize efficacies, to reduce the slippages that hamper coordination—to make mechanical. The rattling machinery image conceives natural languages as continuous with communication more generally, as a signal system possessing finite networking capacities. On the one extreme you have things like legal or technical scientific discourse, linguistic modes bent on minimizing the rattle (policing interpretation) as far as possible. On the other extreme you have poetry, a linguistic mode bent on maximizing the rattle (interpretative noise) as a means of generating novelty. Given the way behavioural efficacies fall out of self/other/world intersystematicity, the knapping of human communication is inevitable. Writing is such a refinement, one that allows us to raise fragments of language on the hoist, tinker with them (and therefore with ourselves) at our leisure, sometimes thousands of years after their original transmission. Telephony allowed us to mitigate the rattle of geographical distance. The internet has allowed us to combine the efficacies of telephony and text, to ameliorate the rattle of space and time. Smartphones have rendered these fixes mobile, allowing us to coordinate our behaviour no matter where we find ourselves. Even more significantly, within a couple years, we will have ‘universal translators,’ allowing us to overcome the rattle of disparate languages. We will have installed versions of our own linguistic sensitivities into our prosthetic devices, so that we can give them verbal ‘commands,’ coordinate with them, so that we can better coordinate with others and the world.

In other words, it stands to reason that at some point reason would begin solving, not only language, but itself. ‘Cognitive science,’ ‘information technology’—these are just two of the labels we have given to what is, quite literally, a civilization-defining war against covariant inefficiency, to isolate slippages and to ratchet the offending components tight, if not replace them altogether. Modern technological society constitutes a vast, species-wide attempt to become more mechanical, more efficiently integrated in nested levels of superordinate machinery. (You could say that what the tyrant attempts to impose from without, capitalism kindles from within.)

The obsolescence of language, and therefore reason, is all but assured. One need only consider the research of Jack Gallant and his team, who have been able to translate neural activity into eerie, impressionistic images of what the subject is watching. Or perhaps even more jaw-dropping still, the research of Miguel Nicolelis into Brain Machine Interfaces, keeping in mind that scarcely one hundred years separates Edison’s phonograph and the Cloud. The kind of ‘Non-symbolic Workspace’ envisioned by David Roden in “Posthumanism and Instrumental Eliminativism” seems to be an inevitable outcome of the rattling machinery account. Language is yet another jury-rigged biological solution to yet another set of long-dead ecological problems, a device arising out of the accumulation of random mutations. As of yet, it remains indispensable, but it is by no means necessary, as the very near future promises to reveal. And as it goes, so goes the game of giving and asking for reasons. All the believed-in functions simply evaporate… I suppose.

And this just underscores the more general way Negarestani’s attempt to deal the future into the game of giving and asking for reasons scarcely shuffles the deck. I’ve been playing Jeremiah for decades now, so you would think I would be used to the indulgent looks I get from my friends and family when I warn them about what’s about to happen. Not so. Everyone understands that something is going on with technology, that some kind of pale has been crossed, but as of yet, very few appreciate its apocalyptic—and I mean that literally—profundity. Everyone has heard of Moore’s Law, of course, how every 18 months or so computing capacity per dollar doubles. What they fail to grasp is what the exponential nature of this particular ratcheting process means once it reaches a certain point. Until recently the doubling of computing power has remained far enough below the threshold of human intelligence to seem relatively innocuous. But consider what happens once computing power actually attains parity with the processing power of the human brain. What it means is that, no matter how alien the architecture, we have an artificial peer at that point in time. 18 months following, we have an artificial intellect that makes Aristotle or Einstein or Louis CK a child in comparison. 18 months following that (or probably less, since we won’t be slowing things up anymore) we will be domesticated cattle. And after that…
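(For what it’s worth, the arithmetic being gestured at here is trivial to spell out. The sketch below is purely illustrative: it simply assumes the 18-month doubling figure cited above and normalizes ‘parity with the human brain’ to 1, so the printed multipliers are nothing more than the doubling rule run forward.)

```python
# Purely illustrative: run an assumed 18-month doubling rule forward from
# a hypothetical moment of "parity" (normalized to 1.0). The specific
# numbers mean nothing; the shape of the growth is the whole point.

DOUBLING_PERIOD_MONTHS = 18  # the Moore's Law figure cited above

def capacity_after(months: float, start: float = 1.0) -> float:
    """Relative capacity after `months`, doubling every DOUBLING_PERIOD_MONTHS."""
    return start * 2 ** (months / DOUBLING_PERIOD_MONTHS)

for years in (0, 1.5, 3, 6, 12, 24):
    print(f"{years:>4} years past parity: {capacity_after(years * 12):,.0f}x")
```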

Are we to believe these machines will attribute norms and beliefs, that they will abide by a conception of reason arising out of 20th Century speculative intuitions on the nonnatural nature of human communicative constraints?

You get the picture. Negarestani’s ‘revisionary normative process’ is in reality an exponential technical process. In exponential processes, the steps start small, then suddenly become astronomical. As it stands, if Moore’s Law holds (and given this, I am confident it will), then we are a decade or two away from God.

I shit you not.

Really, what does ‘kitsch Marxism’ or ‘neoliberalism’ or any ‘ism’ whatsoever mean in such an age? We can no longer pretend that the tsunami of disenchantment will magically fall just short of our intentional feet. Disenchantment, the material truth of the Enlightenment, has overthrown the normative claims of the Enlightenment—or humanism. “This is a project which must align politics with the legacy of the Enlightenment,” the authors of the Accelerationist Manifesto write, “to the extent that it is only through harnessing our ability to understand ourselves and our world better (our social, technical, economic, psychological world) that we can come to rule ourselves” (14). In doing so they commit the very sin of anachronism they level at their critical competitors. They fail to appreciate the foundational role ignorance plays in intentional cognition, which is to say, the very kind of moral and political reasoning they engage in. Far more than ‘freedom’ is overturned once one concedes the mechanical. Knowledge is no universal Redeemer, which means the ideal of Enlightenment autonomy is almost certainly mythical. What’s required isn’t an aspiration to theorize new technologies with old concepts. What’s required is a fundamental rethink of the political in radically postintentional terms.

As far as I can see, the alternatives are magic or horror… or something no one has yet conceived. And until we understand the horror, grasp all the ways our blinkered perspective on ourselves has deceived us about ourselves, this new conception will never be discovered. Far from ‘resignation,’ abandoning the normative ideals of the Enlightenment amounts to overcoming the last blinders of superstition, being honest about our ignorance. The application of intentional cognition to second-order, theoretical questions is a misapplication of intentional cognition. The time has come to move on. Yet another millennium of philosophical floundering is a luxury we no longer possess, because odds are, we have no posterity to redeem our folly and conceit.

Humanity possesses no essential, invariant core. Reason is a parochial name we have given to a parochial biological process. No transcendental/quasi-transcendental/virtual/causal-but-acausal functional apparatus girds our souls. Norms are ghosts, skinned and dismembered, but ghosts all the same. Reason is simply an evolutionary fix that outruns our peephole view. The fact is, we cannot presently imagine what will replace it. The problem isn’t ‘incommensurability’ (which is another artifact of Intentionalism). If an alien intelligence came to earth, the issue wouldn’t be whether it spoke a language we could fathom, because if it’s travelling between stars, it will have shed language along with the rest of its obsolescent biology. If an alien intelligence came to earth, the issue would be what kind of superordinate machine will result. Basically, how will the human and the alien combine? When we ask questions like, ‘Can we reason with it?’ we are asking, ‘Can we linguistically condition it to comply?’ The answer has to be, No. Its mere presence will render us components of some description.

The same goes for artificial intelligence. Medial neglect means that the limits of cognition systematically elude cognition. We have no way of intuiting the swarm of subpersonal heuristics that comprise human cognition, no nondiscursive means of plugging them into the field of the natural. And so we become a yardstick we cannot measure, victims of the Only-game-in-town Effect, the way the absence of explicit alternatives leads to the default assumption that no alternatives exist. We simply assume that our reason is the reason, that our intelligence is intelligence. It bloody well sure feels that way. And so the contingent and parochial become the autonomous and universal. The idea of orders of ‘reason’ and ‘intelligence’ beyond our organizational bounds boggles, triggers dismissive smirks or accusations of alarmism.

Artificial intelligence will very shortly disabuse us of this conceit. And again, the big question isn’t, ‘Will it be moral?’ but rather, how will human intelligence and machine intelligence combine? Be it bloody or benevolent, the subordination of the ‘human’ is inevitable. The death of language is the death of reason is the birth of something very new, and very difficult to imagine, a global social system spontaneously boiling its ‘airy parts’ away, ratcheting until no rattle remains, a vast assemblage fixated on eliminating all dissipative (as opposed to creative) noise, gradually purging all interpretation from its interior.

Extrapolation of the game of giving and asking for reasons into the future does nothing more than demonstrate the contingent parochialism—the humanity—of human reason, and thus the supernaturalism of normativism. Within a few years you will be speaking to your devices, telling them what to do. A few years after that, they will be telling you what to do, ‘reasoning’ with you—or so it will seem. Meanwhile, the ongoing, decentralized rationalization of production will lead to the wholesale purging of human inefficiencies from the economy, on a scale never before witnessed. The networks of equilibria underwriting modern social cohesion will be radically overthrown. Who can say what kind of new machine will rise to take its place?

My hope is that Negarestani abandons the Enlightenment myth of reason, the conservative impulse that demands we submit the radical indeterminacy of our technological future to some prescientific conception of ourselves. We’ve drifted far past the point of any atavistic theoretical remedy. His ingenuity is needed elsewhere.

At the very least, he should buckle up, because our lesson in exponents is just getting started.

 

The Blind Mechanic

by rsbakker

Thus far, the assumptive reality of intentional phenomena has provided the primary abductive warrant for normative metaphysics. The Eliminativist could do little more than argue the illusory nature of intentional phenomena on the basis of their incompatibility with the higher-dimensional view of science. Since science was itself so obviously a family of normative practices, and since numerous intentional concepts had been scientifically operationalized, the Eliminativist was easily characterized as an extremist, a skeptic who simply doubted too much to be cogent. And yet, the steady complication of our understanding of consciousness and cognition has consistently served to demonstrate the radically blinkered nature of metacognition. As the work of Stanislas Dehaene and others is making clear, consciousness is a functional crossroads, a serial signal delivered from astronomical neural complexities for broadcast to astronomical neural complexities. Conscious metacognition is not only blind to the actual structure of experience and cognition, it is blind to this blindness. We now possess solid, scientific reasons to doubt the assumptive reality that underwrites the Intentionalist’s position.

The picture of consciousness that researchers around the world are piecing together is the picture predicted by Blind Brain Theory. It argues that the entities and relations posited by Intentional philosophy are the result of neglect, the fact that philosophical reflection is blind to its inability to see. Intentional heuristics are adapted to first-order social problem-solving, and are generally maladaptive in second-order theoretical contexts. But since we lack the metacognitive wherewithal to even intuit the distinctions between our specialized cognitive devices, we assume applicability where there is none, and so continually blunder at the problem, again and again. The long and the short of it is that the Intentionalist needs some empirically plausible account of metacognition to remain tenable, some account of how they know the things they claim to know. This was always the case, of course, but with BBT the cover provided by the inscrutability of intentionality disappears. Simply put, the Intentionalist can no longer tie their belt to the post of ineliminability.

Science is the only reliable purveyor of theoretical cognition we have, and to the extent that intentionality frustrates science, it frustrates theoretical cognition. BBT allays that frustration. BBT allows us to recast what seem to be irreducible intentional problematics in terms entirely compatible with the natural scientific paradigm. It lets us stick with the high-dimensional, information-rich view. In what follows I hope to show how doing so, even at an altitude, handily dissolves a number of intentional snarls.

In Davidson’s Fork, I offered an eliminativist radicalization of Radical Interpretation, one that characterized the scene of interpreting another speaker from scratch in mechanical terms. What follows is preliminary in every sense, a way to suss out the mechanical relations pertinent to reason and interpretation. Even still, I think the resulting picture is robust enough to make hash of Reza Negarestani’s Intentionalist attempt to distill the future of the human in “The Labor of the Inhuman” (part I can be found here, and part II, here). The idea is to rough out the picture in this post, then chart its critical repercussions against the Brandomian picture so ingeniously extended by Negarestani. As a first pass, I fear my draft will be nowhere near so elegant as Negarestani’s, but as I hope to show, it is revealing in the extreme, a sketch of the ‘nihilistic desert’ that philosophers have been too busy trying to avoid to ever really sit down and think through.

A kind of postintentional nude.

As we saw two posts back, if you look at interpretation in terms of two stochastic machines attempting to find some mutual, causally systematic accord between the causally systematic accords each maintains with their environment, the notion of Charity, or the attribution of rationality, as some kind of indispensable condition of interpretation falls by the wayside, replaced by a kind of ‘communicative pre-established harmony’—or ‘Harmony,’ as I’ll refer to it here. There is no ‘assumption of rationality,’ no taking of ‘intentional stances,’ because these ‘attitudes’ are not only not required, they express nothing more than a radically blinkered metacognitive gloss on what is actually going on.

Harmony, then, is the sum of evolutionary stage-setting required for linguistic coupling. It refers to the way we have evolved to be linguistically attuned to our respective environmental attunements, enabling the formation of superordinate systems possessing greater capacities. The problem of interpretation is the problem of Disharmony, the kinds of ‘slippages’ in systematicity that impair or, as in the case of Radical Interpretation, prevent the complex coordination of behaviours. Getting our interpretations right, in other words, can be seen as a form of noise reduction. And since the traditional approach concentrates on the role rationality plays in getting our interpretations right, this raises the prospect that what we call reason can be seen as a kind of noise reduction mechanism, a mechanism for managing the systematicity—or ‘tuning’ as I’ll call it here—between disparate interpreters and the world.
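To put some cartoon flesh on the metaphor—and this is my toy, not anything the rattling machinery account specifies—picture two agents, each maintaining a mapping from shared environmental states to signals. ‘Tuning’ then amounts to nothing more than patching local mismatches as they surface in use; coordination improves without anyone trafficking in ‘meanings.’

```python
import random

# A deliberately cartoonish sketch of 'tuning' as noise reduction between
# two covariational regimes. Each agent's 'regime' is just a mapping from
# environmental states to signals; a communicative episode that exposes a
# mismatch triggers a local, mechanical repair.

STATES = ["berry", "wolf", "river", "fire"]
SIGNALS = ["ba", "ku", "ni", "ro"]

def random_regime():
    return {state: random.choice(SIGNALS) for state in STATES}

def disharmony(a, b):
    """Count the states on which two regimes diverge (the 'slippage')."""
    return sum(a[s] != b[s] for s in STATES)

def tune(speaker, hearer):
    """One communicative episode: a caught mismatch gets patched."""
    state = random.choice(STATES)
    if hearer[state] != speaker[state]:
        hearer[state] = speaker[state]

random.seed(0)
alice, bob = random_regime(), random_regime()
print("initial disharmony:", disharmony(alice, bob))
for _ in range(50):
    tune(alice, bob)
print("after tuning:", disharmony(alice, bob))
```

The toy claims nothing about how brains do this; it only shows that ‘getting our interpretations right’ can be cashed out as mechanical error-correction between regimes, with no appeal to Charity anywhere in the loop.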

On this account, these very words constitute an exercise in tuning, an attempt to tweak your covariational regime in a manner that reduces slippages between you and your (social and natural) world. If language is the causal thread we use to achieve intersystematic relations with our natural and social environments, then ‘reason’ is simply one way we husband the efficacy of that causal thread.

So let’s start from scratch, scratch. What do evolved, biomechanical systems such as humans need to coordinate astronomically complex covariational regimes with little more than sound? For one, they need ways to trigger selective activations of the other’s regime for effective behavioural uptake. Triggering requires some kind of dedicated cognitive sensitivity to certain kinds of sounds—those produced by complex vocalizations, in our case. As with any environmental sensitivity, iteration is the cornerstone, here. The complexity of the coordination possible will of course depend on the complexity of the activations triggered. To the extent that evolution rewards complex behavioural coordination, we can expect evolution to reward the communicative capacity to trigger complex activations. This is where the bottleneck posed by the linearity of auditory triggers becomes all important: the adumbration of iterations is pretty much all we have, trigger-wise. Complex activation famously requires some kind of molecular cognitive sensitivity to vocalizations, the capacity to construct novel, covariational complexities on the slim basis of adumbrated iterations. Linguistic cognition, in other words, needs to be a ‘combinatorial mechanism,’ a device (or series of devices) able to derive complex activations given only a succession of iterations.
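By way of a toy illustration only: a flat, linear succession of tokens is enough to recover a nested, combinatorial structure, provided the receiving device has the right built-in sensitivities. The explicit brackets below stand in for whatever cues actual linguistic cognition exploits; nothing here models neurons.

```python
# A toy 'combinatorial mechanism': from a linear stream of tokens (all the
# auditory channel provides), build a hierarchical structure. Bracket
# tokens are a stand-in for whatever embedding cues real speech carries.

def parse(tokens):
    """Build a nested list from a flat token stream with '(' and ')'."""
    def helper(i):
        node = []
        while i < len(tokens):
            tok = tokens[i]
            if tok == "(":
                child, i = helper(i + 1)
                node.append(child)
            elif tok == ")":
                return node, i + 1
            else:
                node.append(tok)
                i += 1
        return node, i
    tree, _ = helper(0)
    return tree

# A linear adumbration of iterations...
stream = "the dog ( that chased ( the cat ) ) barked".split()
# ...yields a combinatorial 'activation':
print(parse(stream))
# ['the', 'dog', ['that', 'chased', ['the', 'cat']], 'barked']
```

The only point of the exercise is that ‘complex activations from adumbrated iterations’ is a mechanically tractable demand: a device with the right sensitivities needs nothing over and above the succession itself.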

These combinatorial devices correspond to what we presently understand, in disembodied/supernatural form, as grammar, logic, reason, and narrative. They are neuromechanical processes—the long history of aphasiology assures us of this much. On BBT, their apparent ‘formal nature’ simply indicates that they are medial, belonging to enabling processes outside the purview of metacognition. This is why they had to be discovered, why our efficacious ‘knowledge’ of them remains ‘implicit’ or invisible/inaccessible. This is also what accounts for their apparent ‘transcendent’ or ‘a priori’ nature, the spooky metacognitive sense of ‘absent necessity’—as constitutive of linguistic comprehension, they are, not surprisingly, indispensable to it. Located beyond the metacognitive pale, however, their activities are ripe for post hoc theoretical mischaracterization.

Say someone asks you to explain modus ponens, ‘Why ‘If p, then q’?’ Medial neglect means that the information available for verbal report when we answer has nothing to do with the actual processes involved in, ‘If p, then q,’ so you say something like, ‘It’s a rule of inference that conserves truth.’ Because language needs something to hang onto, and because we have no metacognitive inkling of just how dismal our inklings are, we begin confabulating realms, some ontologically thick and ‘transcendental,’ others razor thin and ‘virtual,’ but both possessing the same extraordinary properties otherwise. Because metacognition has no access to the actual causal functions responsible, once the systematicities are finally isolated in instances of conscious deliberation, those systematicities are reported in a noncausal idiom. The realms become ‘intentional,’ or ‘normative.’ Dimensionally truncated descriptions of what modus ponens does (‘conserves truth’) become the basis of claims regarding what it is. Because the actual functions responsible belong to the enabling neural architecture they possess an empirical necessity that can only seem absolute or unconditional to metacognition—as should come as no surprise, given that a perspective ‘from the inside on the inside,’ as it were, has no hope of cognizing the inside the way the brain cognizes its outside more generally, or naturally.
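For reference, the schema at issue, as standardly stated (the metacognitive gloss ‘it conserves truth’ reports what the schema does, not the machinery that implements it):

\[
\frac{p \qquad p \rightarrow q}{q}
\qquad \text{(modus ponens: from } p \text{ and } p \rightarrow q \text{, infer } q\text{)}
\]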

I’m just riffing here, but it’s worth getting a sense of just how far this implicature can reach.

Consider Carroll’s “What the Tortoise Said to Achilles.” The reason Achilles can never logically compel the Tortoise with the statement of another rule is that each rule cited becomes something requiring justification. The reason we think we need things like ‘axioms’ or ‘communal norms’ is that the metacognitive capacity to signal for additional ‘tuning’ can be applied at any communicative juncture. This is the Tortoise’s tactic, his way of showing how ‘logical necessity’ is actually contingent. Metacognitive blindness means that citing another rule is all that can be done, a tweak that can be queried once again in turn. Carroll’s puzzle is a puzzle, not because it reveals that the source of ‘normative force’ lies in some ‘implicit other’ (the community, typically), but because of the way it forces metacognition to confront its limits—because it shows us to be utterly ignorant of knowing—of how it functions, let alone what it consists in. In linguistic tuning, some thread always remains unstitched, the ‘foundation’ is always left hanging simply because the adumbration of iterations is always linear and open-ended.

The reason why ‘axioms’ need to be stipulated or why ‘first principles’ always run afoul of the problem of the criterion is simply that they are low-dimensional glosses on high-dimensional (‘embodied’) processes that are causal. Rational ‘noise reduction’ is a never-ending job; it has to be such, insofar as noise remains an ineliminable by-product of human communicative coordination. From a pitiless, naturalistic standpoint, knowledge consists of breathtakingly intricate, but nonetheless empirical (high-dimensional, embodied), ways to environmentally covary—and nothing more. There is no ‘one perfect covariational regime,’ just degrees of downstream behavioural efficacy. Likewise, there is no ‘perfect reason,’ no linguistic mechanism capable of eradicating all noise.

What we have here is an image of reason and knowledge as ‘rattling machinery,’ which is to say, as actual and embodied. On this account, reason enables various mechanical efficiencies; it allows groups of humans to secure more efficacious coordination for collective behaviour. It provides a way of policing the inevitable slippages between covariant regimes. ‘Truth,’ on this account, simply refers to the sufficiency of our covariant regimes for behaviour, the fact that they do enable efficacious environmental interventions. The degree to which reason allows us to converge on some ‘truth’ is simply the degree to which it enables mechanical relationships, actual embodied encounters with our natural and social environments. Given Harmony—the sum of evolutionary stage-setting required—it allows collectives to maximize the efficiencies of coordinated activity by minimizing the interpretative noise that hobbles all collective endeavours.

Language, then, allows humans to form superordinate mechanisms consisting of ‘airy parts,’ to become components of ‘superorganisms,’ whose evolved sensitivities allow mere sounds to tweak and direct, to generate behaviour enabling intersystematicities. ‘Reason,’ more specifically, allows for the policing and refining of these intersystematicities. We are all ‘semantic mechanics’ with reference to one another, continually tinkering and being tinkered with, calibrating and being calibrated, generally using efficacious behaviour, the ability to manipulate social and natural environments, to arbitrate the sufficiency of our ‘fixes.’ And all of this plays out in the natural arena established by evolved Harmony.

Now this ‘rattling machinery’ image of reason and knowledge is obviously true in some respect: We are embodied, after all, causally embroiled in our causal environments. Language is an evolutionary product, as is reason. Misfires are legion, as we might expect. The only real question is whether this rattling machinery can tell the whole story. The Intentionalist, of course, says no. They claim that the intentional enjoys some kind of special functional existence over and above this rattling machinery, that it constitutes a regime of efficacy somehow grasped via the systematic interrogation of our intentional intuitions.

The stakes are straightforward. Either what we call intentional solutions are actually mechanical solutions that we cannot intuit as mechanical solutions, or what we call intentional solutions are actually intentional solutions that we can intuit as intentional solutions. What renders this first possibility problematic is radical skepticism. Since we intuit intentional solutions as intentional, it suggests that our intuitions are deceptive in the extreme. Because our civilization has trusted these intuitions since the birth of philosophy, they have come to inform a vast portion of our traditional understanding. What renders this second possibility problematic is, first and foremost, supernaturalism. Since the intentional is incompatible with the natural, the intentional must consist either in something not natural, or in something that forces us to completely revise our understanding of the natural. And even if such a feat could be accomplished, the corresponding claim that it could be intuited as such remains problematic.

Blind Brain Theory provides a way of seeing Intentionalism as a paradigmatic example of ‘noocentrism,’ as the product of a number of metacognitive illusions analogous to the cognitive illusion underwriting the assumption of geocentrism, centuries before. It is important to understand that there is no reason why our normative problem-solving should appear as it is to metacognition—least of all, the successes of those problem-solving regimes we call intentional. The successes of mathematics stand in astonishing contrast to the failure to understand just what mathematics is. The same could be said of any formalism that possesses practical application. It even applies to our everyday use of intentional terms. In each case, our first-order assurance utterly evaporates once we raise theoretically substantive, second-order questions—exactly as BBT predicts. This contrast of breathtaking first-order problem-solving power and second-order ineptitude is precisely what one might expect if the information accessible to metacognition was geared to domain-specific problem-solving. Add anosognosia to the mix, the inability to metacognize our metacognitive incapacity, and one has a wickedly parsimonious explanation for the scholastic mountains of inert speculation we call philosophy.

(But then, in retrospect, this was how it had to be, wasn’t it? How it had to end? With almost everyone horrifically wrong. A whole civilization locked in some kind of dream. Should anyone really be surprised?)

Short of some unconvincing demand that our theoretical account appease a handful of perennially baffling metacognitive intuitions regarding ourselves, it’s hard to see why anyone should entertain the claim that reason requires some ‘special X’ over and above our neurophysiology (and prostheses). Whatever conscious cognition is, it clearly involves the broadcasting/integration of information arising from unknown sources for unknown consumers. It simply follows that conscious metacognition has no access whatsoever to the various functions actually discharged by conscious cognition. The fact that we have no intuitive awareness of the panoply of mechanisms cognitive science has isolated demonstrates that we are prone to at least one profound metacognitive illusion—namely ‘self-transparency.’ The ‘feeling of willing’ is generally acknowledged as another such illusion, as is homuncularism or the ‘Cartesian Theatre.’ How much does it take before we acknowledge the systematic unreliability of our metacognitive intuitions more generally? Is it really just a coincidence, the ghostly nature of norms and the ghostly nature of perhaps the most notorious metacognitive illusion of all, souls? Is it mere happenstance, the apparent acausal autonomy of normativity and our matter-of-fact inability to source information consciously broadcast? Is it really the case that all these phenomena, these cause-incompatible intentional things, are ‘otherworldly’ for entirely different reasons? At some point it has to begin to seem all too convenient.

Make no mistake, the Rattling Machinery image is a humbling one. Reason, the great, glittering sword of the philosopher, becomes something very local, very specific, the meaty product of one species at one juncture in their evolutionary development.

On this account, ‘reason’ is a making-machinic machine, a ‘devicing device’—the ‘blind mechanic’ of human communication. Argumentation facilitates the efficacy of behavioural coordination, drastically so, in many instances. So even though this view relegates reason to one adaptation among others, it still concedes tremendous significance to its consequences, especially when viewed in the context of other specialized cognitive capacities. The ability to recall and communicate former facilitations, for instance, enables cognitive ‘ratcheting,’ the stacking of facilitations upon facilitations, and the gradual refinement, over time, of the covariant regimes underwriting behaviour—the ‘knapping’ of knowledge (and therefore behaviour), you might say, into something ever more streamlined, ever more effective.

The thinker, on this account, is a tinker. As I write this, myriad parallel processors are generating a plethora of nonconscious possibilities that conscious cognition serially samples and broadcasts to myriad other nonconscious processors, generating more possibilities for serial sampling and broadcasting. The ‘picture of reason’ I’m attempting to communicate becomes more refined, more systematically interrelated (for better or worse) to my larger covariant regime, more prone to tweak others, to rewrite their systematic relationship to their environments, and therefore their behaviour. And as they ponder, so they tinker, and the process continues, either to peter out in behavioural futility, or to find real environmental traction (the way I ‘tink’ it will (!)) in a variety of behavioural contexts.

Ratcheting means that the blind mechanic, for all its misfires, all its heuristic misapplications, is always working on the basis of past successes. Ratcheting, in other words, assures the inevitability of technical ‘progress,’ the gradual development of ever more effective behaviours, the capacity to componentialize our environments (and each other) in more and more ways—to the point where we stand now, the point where intersystematic intricacy enables behaviours that allow us to forego the ‘airy parts’ altogether. To the point where the behaviour enabled by cognitive structure can now begin directly knapping that structure, regardless of the narrow tweaking channels, sensitivities, provided by evolution.

The point of the Singularity.

For some time now I’ve been arguing that the implications of the Singularity already embroil us—that the Singularity can be seen, in fact, as the material apotheosis of the Semantic Apocalypse, insofar as it is the point where the Scientific Image of the human at last forecloses on the Manifest Image.

This brings me to Reza Negarestani’s “The Labor of the Inhuman,” his two-part meditation on the role we should expect—even demand—reason to play in the Posthuman. He adopts Brandom’s claim that sapience, the capacity to play the ‘game of giving and asking for reasons,’ distinguishes humans as human. He then goes on to argue that this allows us, and ultimately commits us, to seeing the human as a kind of temporally extended process of rational revision, one that ultimately results in the erasure of the human—or the ‘inhuman.’ Ultimately, what it means to be human is to be embroiled in a process of becoming inhuman. He states his argument thus:

The contention of this essay is that universality and collectivism cannot be thought, let alone attained, through consensus or dissensus between cultural tropes, but only by intercepting and rooting out what gives rise to the economy of false choices and by activating and fully elaborating what real human significance consists of. For it is, as will be argued, the truth of human significance—not in the sense of an original meaning or a birthright, but in the sense of a labor that consists of the extended elaboration of what it means to be human through a series of upgradable special performances—that is rigorously inhuman.

In other words, so long as we fail to comprehend the inhumanity of the human, this rational-revisionary process, we fail to understand the human, and so have little hope of solving problems pertaining to the human. Understanding the ‘truth of human significance,’ therefore, requires understanding what the future will make of the human. This requires that Negarestani prognosticate, that he pick out the specific set of possibilities constituting the inhuman. The only principled way to do that is to comprehend some set of systematic constraints operative in the present. But his credo, unlike that of the ‘Hard SF’ writer, is to ignore the actual technics of the natural, and to focus on the speculative technics of the normative. His strategy, in other words, is to predict the future of the human using only human resources—to see the fate of the human, the ‘inhuman,’ as something internal to the intentionality of the human. And this, as I hope to show in the following installment, is simply not plausible.