The Blind Mechanic II: Reza Negarestani and the Labour of Ghosts
by rsbakker
For some time now I’ve been arguing that the implications of the Singularity already embroil us—that the Singularity can be seen, in fact, as the material apotheosis of the Semantic Apocalypse, insofar as it is the point where the Scientific Image of the human at last forecloses on the Manifest Image. In “The Labor of the Inhuman” (which can be found here and here, with Craig Hickman’s critiques, here and here), Reza Negarestani adopts Brandom’s claim that sapience, the capacity to play the ‘game of giving and asking for reasons,’ distinguishes humans as human. He then goes on to argue that this allows us, and ultimately commits us, to seeing the human as a kind of temporally extended process of rational revision, one that results in the erasure of the human—or the ‘inhuman.’ What it means to be human, on this account, is to be embroiled in a process of becoming inhuman. He states his argument thus:
The contention of this essay is that universality and collectivism cannot be thought, let alone attained, through consensus or dissensus between cultural tropes, but only by intercepting and rooting out what gives rise to the economy of false choices and by activating and fully elaborating what real human significance consists of. For it is, as will be argued, the truth of human significance—not in the sense of an original meaning or a birthright, but in the sense of a labor that consists of the extended elaboration of what it means to be human through a series of upgradable special performances—that is rigorously inhuman.
In other words, so long as we fail to comprehend the inhumanity of the human, this rational-revisionary process, we fail to understand the human, and so have little hope of solving problems pertaining to the human. Understanding the ‘truth of human significance,’ therefore, requires understanding what the future will make of the human. It requires that Negarestani prognosticate. It requires, in other words, that he pick out the specific set of possibilities constituting the inhuman. The only principled way to do that is to comprehend some set of systematic constraints operative in the present. But his credo, unlike that of the ‘Hard SF’ writer, is to ignore the actual technics of the natural, and to focus on the speculative technics of the normative. His strategy, in other words, is to predict the future of the human using only human resources—to see the fate of the human, the ‘inhuman,’ as something internal to the human. And this, as I hope to show, is simply not plausible.
He understands the danger of conceiving his constraining framework as something fixed: “humanism cannot be regarded as a claim about human that can only be professed once and subsequently turned into a foundation or axiom and considered concluded.” He appreciates the implausibility of the static, Kantian transcendental approach. As a result, he proposes to take the Sellarsian/Brandomian approach, focussing on the unique relationship between the human and sapience, the “distinction between sentience as a strongly biological and natural category and sapience as a rational (not to be confused with logical) subject.” He continues:
The latter is a normative designation which is specified by entitlements and the responsibilities they bring about. It is important to note that the distinction between sapience and sentience is marked by a functional demarcation rather than a structural one. Therefore, it is still fully historical and open to naturalization, while at the same time being distinguished by its specific functional organization, its upgradable set of abilities and responsibilities, its cognitive and practical demands.
He’s careful here to hedge, lest the dichotomy between the normative and the natural comes across as too schematic:
The relation between sentience and sapience can be understood as a continuum that is not differentiable everywhere. While such a complex continuity might allow the naturalization of normative obligations at the level of sapience—their explanation in terms of naturalistic causes—it does not permit the extension of certain conceptual and descriptive resources specific to sapience (such as the particular level of mindedness, responsibilities, and, accordingly, normative entitlements) to sentience and beyond.
His dilemma here is the dilemma of the Intentionalist more generally. Science, on the one hand, is nothing if not powerful. The philosopher, on the other hand, has a notorious, historical tendency to confuse the lack of imagination for necessity. Foot-stomping will not do. He needs some way to bite this bullet without biting it, basically, some way of acknowledging the possible permeability of normativity to naturalization, while insisting, nonetheless, on the efficacy of some inviolable normative domain. To accomplish this, he adverts to the standard appeal to the obvious fact that norm-talk actually solves norm problems, that normativity, in other words, obviously possesses a problem-ecology. But of course the fact that norm-talk is indispensable to solving problems within a specific problem-ecology simply raises the issue of the limits of this ecology—and more specifically, whether the problem of humanity’s future actually belongs to that problem-ecology. What he needs to establish is the adequacy of theoretical, second-order norm-talk to the question of what will become of the human.
He offers us a good, old-fashioned transcendental argument instead:
The rational demarcation lies in the difference between being capable of acknowledging a law and being solely bound by a law, between understanding and mere reliable responsiveness to stimuli. It lies in the difference between stabilized communication through concepts (as made possible by the communal space of language and symbolic forms) and chaotically unstable or transient types of response or communication (such as complex reactions triggered purely by biological states and organic requirements or group calls and alerts among social animals). Without such stabilization of communication through concepts and modes of inference involved in conception, the cultural evolution as well as the conceptual accumulation and refinement required for the evolution of knowledge as a shared enterprise would be impossible.
Sound familiar? The necessity of the normative lies in the irreflexive contingency of the natural. Even though natural relations constitute biological systems of astounding complexity, there’s simply no way, we are told, they can constitute the kind of communicative stability that human knowledge and cultural evolution requires. The machinery is just too prone to rattle! Something over and above the natural—something supernatural—is apparently required. “Ultimately,” Negarestani continues, “the necessary content as well as the real possibility of human rests on the ability of sapience—as functionally distinct from sentience—to practice inference and approach non-canonical truth by entering the deontic game of giving and asking for reasons.”
It’s worth pausing to take stock of the problems we’ve accumulated up to this point. 1) Even though the human is a thoroughgoing product of its past natural environments, the resources required to understand the future of the human, we are told, lie primarily, if not entirely, within the human. 2) Even though norm-talk possesses a very specific problem-ecology, we are supposed to take it on faith that the nature of norm-talk is something that only more norm-talk can solve, rather than otherwise (as centuries of philosophical intractability would suggest). And now, 3) Even though the natural, for all its high-dimensional contingencies, is capable of producing the trillions of mechanical relations that constitute you, it is not capable of ‘evolving human knowledge.’ Apparently we need a special kind of supernatural game to do this, the ‘game of giving and asking for reasons,’ a low-dimensional, communicative system of efficacious (and yet acausal!) normative posits based on… we are never told—some reliable fund of information, one would hope.
But since no normativist that I know of has bothered to account for the evidential bases of their position, we’re simply left with faith in metacognitive intuition and this rather impressive sounding, second-order theoretical vocabulary of unexplained explainers—‘commitments,’ ‘inferences,’ ‘proprieties,’ ‘deontic statuses,’ ‘entitlements,’ and the like—a system of supernatural efficacies beyond the pale of any definitive arbitration. Negarestani sums up this normative apparatus with the term ‘reason,’ and it is reason understood in this inferentialist sense that provides the basis of charting the future of the human. “Reason’s main objective is to maintain and enhance itself,” he writes. “And it is the self-actualization of reason that coincides with the truth of the inhuman.”
Commitment to humanity requires scrutinizing the meaning of humanity, which in turn requires making the implicature of the human explicit—not just locally, but in its entirety. The problem, in a nutshell, is that the meaning of the human is not analytic, not something that can be explicated via analysis alone. It arises, rather, out of the game of giving and asking for reasons, the actual, historical processes that comprise discursivity. And this means that unpacking the content of the human is a matter of continual revision, a process of interpretative differentiation that trends toward the radical, the overthrow of “our assumptions and expectations about what ‘we’ is and what it entails.”
The crowbar of this process of interpretative differentiation is what Negarestani calls an ‘intervening attitude,’ that moment in the game where the interpretation of claims regarding the human sparks further claims regarding the human, the interpretation of which sparks yet further claims, and so on. The intervening attitude thus “counts as an enabling vector, making possible certain abilities otherwise hidden or deemed impossible.” This is why he can claim that “[r]evising and constructing the human is the very definition of committing to humanity.” And since this process is embedded in the game of giving and asking for reasons, he concludes that “committing to humanity is tantamount to complying with the revisionary vector of reason and constructing humanity according to an autonomous account of reason.”
And so he writes:
Humanity is not simply a given fact that is behind us. It is a commitment in which the reassessing and constructive strains inherent to making a commitment and complying with reason intertwine. In a nutshell, to be human is a struggle. The aim of this struggle is to respond to the demands of constructing and revising human through the space of reasons.
In other words, we don’t simply ‘discover the human’ via reason, we construct it as well. And thus the emancipatory upshot of Negarestani’s argument: if reasoning about the human is tantamount to constructing the human, then we have a say regarding the future of humanity. The question of the human becomes an explicitly political project, and a primary desideratum of Negarestani’s stands revealed. He thinks that reason as he defines it, the at once autonomous (supernatural) and historically concrete (or ‘solid,’ as Brandom would say) revisionary activity of theoretical argumentation, provides a means of assessing the adequacy of various political projects (traditional humanism and what he calls ‘kitsch Marxism’) according to their understanding of the human. Since my present concern is to assess the viability of the account of reason Negarestani uses to ground the viability of this yardstick, I will forego considering his specific assessments in any detail.
The human is the malleable product of machinations arising out of the functional autonomy of reason. Negarestani refers to this as a ‘minimalist definition of humanity,’ but as the complexity of the Brandomian normative apparatus he deploys makes clear, it is anything but. The picture of reason he espouses is as baroque and reticulated as anything Kant ever proposed. It’s a picture, after all, that requires an entire article to simply get off the ground! Nevertheless, this dynamic normative apparatus provides Negarestani with a generalized means of critiquing the intransigence of traditional political commitments. The ‘self-actualization’ of reason lies in its ability “to bootstrap complex abilities out of its primitive abilities.” Even though continuity with previous commitments is maintained at every step in the process, over time the consequences are radical: “Reason is therefore simultaneously a medium of stability that reinforces procedurality and a general catastrophe, a medium of radical change that administers the discontinuous identity of reason to an anticipated image of human.”
This results in what might be called a fractured ‘general implicature,’ a space of reasons rife with incompatibilities stemming from the refusal or failure to assiduously monitor and update commitments in light of the constructive revisions falling out of the self-actualization of reason. Reason itself, Negarestani is arguing, is in the business of manufacturing ideological obsolescence, always in the process of rendering its prior commitments incompatible with its present ones. Given his normative metaphysics, reason has become the revisionary, incremental “director of its own laws,” one that has the effect of rendering its prior laws, “the herald of those which are whispered to it by an implanted sense or who knows what tutelary nature” (Kant, Fundamental Principles of the Metaphysics of Morals). Where Hegel can be seen as temporalizing and objectifying Kant’s atemporal, subjective, normative apparatus, Brandom (like others) can be seen as socializing and temporalizing it. What Negarestani is doing is showing how this revised apparatus operates against the horizon of the future with reference to the question of the human. And not surprisingly, Kant’s moral themes remain the same, only unpacked along the added dimensions of the temporal and the social. And so we find Negarestani concluding:
The sufficient content of freedom can only be found in reason. One must recognize the difference between a rational norm and a natural law—between the emancipation intrinsic in the explicit acknowledgement of the binding status of complying with reason, and the slavery associated with the deprivation of such a capacity to acknowledge, which is the condition of natural impulsion. In a strict sense, freedom is not liberation from slavery. It is the continuous unlearning of slavery.
The catastrophe, apparently, has yet to happen, because here we find ourselves treading familiar ground indeed, Enlightenment ground, as Negarestani himself acknowledges, one where freedom remains bound to reason—“to the autonomy of its normative, inferential, and revisionary function in the face of the chain of causes that condition it”—only as process rather than product.
And the ‘inhuman,’ so-called, begins to look rather like a shill for something all too human, something continuous, which is to say, conservative, through and through.
And how could it be otherwise, given the opening, programmatic passage of the piece?
Inhumanism is the extended practical elaboration of humanism; it is born out of a diligent commitment to the project of enlightened humanism. As a universal wave that erases the self-portrait of man drawn in sand, inhumanism is a vector of revision. It relentlessly revises what it means to be human by removing its supposed evident characteristics and preserving certain invariances. At the same time, inhumanism registers itself as a demand for construction, to define what it means to be human by treating human as a constructible hypothesis, a space of navigation and intervention.
The key phrase here has to be ‘preserving certain invariances.’ One might suppose that natural reality would figure large as one of these ‘invariances’; to quote Philip K. Dick, “Reality is that which, when you stop believing in it, doesn’t go away.” But Negarestani scarcely mentions nature as cognized by science save to bar the dialectical door against it. The thing to remember about Brandom’s normative metaphysics is that ‘taking-as,’ or believing, is its foundation (or ontological cover). Unlike reality, his normative apparatus does go away when the scorekeepers stop believing. The ‘reality’ of the apparatus is thus purely a functional artifact, the product of ‘practices,’ something utterly embroiled in, yet entirely autonomous from, the natural. This is what allows the normative to constitute a ‘subregion of the factual’ without being anything natural.
Conservatism is built into Negarestani’s account at its most fundamental level, in the very logic—the Brandomian account of the game of giving and asking for reasons—that he uses to prognosticate the rational possibilities of our collective future. But the thing I find the most fascinating about his account is the way it can be read as an exercise in grabbing Brandom’s normative apparatus and smashing it against the wall of the future—a kind of ‘reductio by Singularity.’ Reasoning is parochial through and through. The intuitions of universalism and autonomy that have convinced so many otherwise are the product of metacognitive illusions, artifacts of confusing the inability to intuit more dimensions of information with the sufficiency of entities and relations lacking those dimensions, of taking shadows for things that cast shadows.
So consider the ‘rattling machinery’ image of reason I posited earlier in “The Blind Mechanic,” the idea that ‘reason’ should be seen as a means of attenuating various kinds of embodied intersystematicities for behaviour—as a way to service the ‘airy parts’ of superordinate, social mechanisms. No norms. No baffling acausal functions. Just shit happening in ways accidental as well as neurally and naturally selected. What the Intentionalist would claim is that mere rattling machinery, no matter how detailed or complete its eventual scientific description comes to be, will necessarily remain silent regarding the superordinate (and therefore autonomous) intentional functions that it subserves, because these supernatural functions are what leverage our rationality somehow—from ‘above the grave.’
As we’ve already seen, it’s hard to make sense of how or why this should be, given that biomachinery is responsible for complexities we’re still in the process of fathoming. The behaviour that constitutes the game of giving and asking for reasons does not outrun some intrinsic limit on biomechanistic capacity by any means. The only real problem naturalism faces is one of explaining the apparent intentional properties belonging to the game. Behaviour is one thing, the Intentionalist says, while competence is something different altogether—behaviour plus normativity, as they would have it. Short of some way of naturalizing this ‘normative plus,’ we have no choice but to acknowledge the existence of intrinsically normative facts.
On the Blind Brain account, ‘normative facts’ are simply natural facts seen darkly. ‘Ought,’ as philosophically conceived, is an artifact of metacognitive neglect, the fact that our cognitive systems cannot cognize themselves in the same way they cognize the rest of their environment. Given the vast amounts of information neglected in intentional cognition (not to mention millennia of philosophical discord), it seems safe to assume that norm-talk is not among the things that norm-talk can solve. Indeed, since the heuristic systems involved are neural, we have every reason to believe that neuroscience, or scientifically regimented fact-talk, will provide the solution. Where our second-order intentional intuitions beg to differ is simply where they are wrong. Normative talk is incompatible with causal talk simply because it belongs to a cognitive regime adapted to solve in the absence of causal information.
The mistake, then, is to see competence as some kind of complication or elaboration of performance—as something in addition to behaviour. Competence is ‘end-directed,’ ‘rule-constrained,’ because metacognition has no access to the actual causal constraints involved, not because a special brand of performance ‘plus’ occult, intentional properties actually exists. You seem to float in this bottomless realm of rules and goals and justifications not because such a world exists, but because medial neglect folds away the dimensions of your actual mechanical basis with nary a seam. The apparent normative property of competence is not a property in addition to other natural properties; it is an artifact of our skewed metacognitive perspective on the application of quick and dirty heuristic systems our brains use to solve certain complicated systems.
But say you still aren’t convinced. Say that you agree the functions underwriting the game of giving and asking for reasons are mechanical and not at all accessible to metacognition, but at a different ‘level of description,’ one incapable of accounting for the very real work discharged by the normative functions that emerge from them. Now if it were the case that Brandom’s account of the game of giving and asking for reasons actually discharged ‘executive’ functions of some kind, then it would be the case that our collective future would turn on these efficacies in some way. Indeed, this is the whole reason Negarestani turned to Brandom in the first place: he saw a way to decant the future of the human given the systematic efficacies of the game of giving and asking for reasons.
Now consider what the rattling machine account of reason and language suggests about the future. On this account, the only invariants that structurally bind the future to the past, that enable any kind of speculative consideration of the future at all, are natural. The point of language, recall, is mechanical, to construct and maintain the environmental intersystematicity (self/other/world) required for coordinated behaviour (be it exploitative or cooperative). Our linguistic sensitivity, you could say, evolved in much the same manner as our visual sensitivity, as a channel for allowing certain select environmental features to systematically tune our behaviours in reproductively advantageous ways. ‘Reasoning,’ on this view, can be seen as a form of ‘noise reduction,’ as a device adapted to minimize, as far as mere sound allows, communicative ‘gear grinding,’ and so facilitate behavioural coordination. Reason, you could say, is what keeps us collectively in tune.
Now given some kind of ability to conserve linguistically mediated intersystematicities, it becomes easy to see how this rattling machinery could become progressive. Reason, as noise reduction, becomes a kind of knapping hammer, a way to continually tinker and refine previous linguistic intersystematicities. Refinements accumulate in ‘lore,’ allowing subsequent generations to make further refinements, slowly knapping our covariant regimes into ever more effective (behaviour enabling) tools—particularly once the invention of writing essentially rendered lore immortal. As opposed to the supernatural metaphor of ‘bootstrapping,’ the apt metaphor here—indeed, the one used by cognitive archaeologists—is the mechanical metaphor of ratcheting. Refinements beget refinements, and so on, leveraging ever greater degrees of behavioural efficacy. Old behaviours are rendered obsolescent along with the prostheses that enable them.
The key thing to note here, of course, is that language is itself another behaviour. In other words, the noise reduction machinery that we call ‘reason’ is something that can itself become obsolete. In fact, its obsolescence seems pretty much inevitable.
Why so? Because the communicative function of reason is to maximize efficacies, to reduce the slippages that hamper coordination—to make mechanical. The rattling machinery image conceives natural languages as continuous with communication more generally, as a signal system possessing finite networking capacities. On the one extreme you have things like legal or technical scientific discourse, linguistic modes bent on minimizing the rattle (policing interpretation) as far as possible. On the other extreme you have poetry, a linguistic mode bent on maximizing the rattle (interpretative noise) as a means of generating novelty. Given the way behavioural efficacies fall out of self/other/world intersystematicity, the knapping of human communication is inevitable. Writing is such a refinement, one that allows us to raise fragments of language on the hoist, tinker with them (and therefore with ourselves) at our leisure, sometimes thousands of years after their original transmission. Telephony allowed us to mitigate the rattle of geographical distance. The internet has allowed us to combine the efficacies of telephony and text, to ameliorate the rattle of space and time. Smartphones have rendered these fixes mobile, allowing us to coordinate our behaviour no matter where we find ourselves. Even more significantly, within a couple years, we will have ‘universal translators,’ allowing us to overcome the rattle of disparate languages. We will have installed versions of our own linguistic sensitivities into our prosthetic devices, so that we can give them verbal ‘commands,’ coordinate with them, so that we can better coordinate with others and the world.
In other words, it stands to reason that at some point reason would begin solving, not only language, but itself. ‘Cognitive science,’ ‘information technology’—these are just two of the labels we have given to what is, quite literally, a civilization-defining war against covariant inefficiency, to isolate slippages and to ratchet the offending components tight, if not replace them altogether. Modern technological society constitutes a vast, species-wide attempt to become more mechanical, more efficiently integrated in nested levels of superordinate machinery. (You could say that the tyrant attempts to impose from without, capitalism kindles from within.)
The obsolescence of language, and therefore reason, is all but assured. One need only consider the research of Jack Gallant and his team, who have been able to translate neural activity into eerie, impressionistic images of what the subject is watching. Or perhaps even more jaw-dropping still, the research of Miguel Nicolelis into Brain Machine Interfaces, keeping in mind that scarcely one hundred years separates Edison’s phonograph and the Cloud. The kind of ‘Non-symbolic Workspace’ envisioned by David Roden in “Posthumanism and Instrumental Eliminativism” seems to be an inevitable outcome of the rattling machinery account. Language is yet another jury-rigged biological solution to yet another set of long-dead ecological problems, a device arising out of the accumulation of random mutations. As of yet, it remains indispensable, but it is by no means necessary, as the very near future promises to reveal. And as it goes, so goes the game of giving and asking for reasons. All the believed-in functions simply evaporate… I suppose.
And this just underscores the more general way Negarestani’s attempt to deal the future into the game of giving and asking for reasons scarcely shuffles the deck. I’ve been playing Jeremiah for decades now, so you would think I would be used to the indulgent looks I get from my friends and family when I warn them about what’s about to happen. Not so. Everyone understands that something is going on with technology, that some kind of pale has been crossed, but as of yet, very few appreciate its apocalyptic—and I mean that literally—profundity. Everyone has heard of Moore’s Law, of course, how every 18 months or so computing capacity per dollar doubles. What they fail to grasp is what the exponential nature of this particular ratcheting process means once it reaches a certain point. Until recently the doubling of computing power has remained far enough below the threshold of human intelligence to seem relatively innocuous. But consider what happens once computing power actually attains parity with the processing power of the human brain. What it means is that, no matter how alien the architecture, we have an artificial peer—at that point in time. 18 months following, we have an artificial intellect that makes Aristotle or Einstein or Louis CK a child in comparison. 18 months following that (or probably less, since we won’t be slowing things up anymore) we will be domesticated cattle. And after that…
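The arithmetic deserves to be made explicit. Here is a minimal sketch, assuming only the 18-month doubling cited above and treating ‘parity’ as an arbitrary baseline; the numbers illustrate the bare exponent, nothing more:

```python
# Toy illustration of an 18-month doubling process (Moore's Law as cited
# above). All quantities are relative: 1.0 = hypothetical parity with the
# human brain. A sketch of the exponent, not a forecast.

def capability(months_after_parity: float, doubling_months: float = 18.0) -> float:
    """Relative computing capacity some months after reaching parity."""
    return 2.0 ** (months_after_parity / doubling_months)

for months in (0, 18, 36, 54, 120):
    print(f"{months:3d} months after parity: {capability(months):7.1f}x")

# Output:
#   0 months after parity:     1.0x
#  18 months after parity:     2.0x
#  36 months after parity:     4.0x
#  54 months after parity:     8.0x
# 120 months after parity:   101.6x
```

The steps look innocuous for exactly as long as the baseline stays below parity; after that, each step dwarfs everything that came before.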
Are we to believe these machines will attribute norms and beliefs, that they will abide by a conception of reason arising out of 20th Century speculative intuitions on the nonnatural nature of human communicative constraints?
You get the picture. Negarestani’s ‘revisionary normative process’ is in reality an exponential technical process. In exponential processes, the steps start small, then suddenly become astronomical. As it stands, if Moore’s Law holds (and given this, I am confident it will), then we are a decade or two away from God.
I shit you not.
Really, what does ‘kitsch Marxism’ or ‘neoliberalism’ or any ‘ism’ whatsoever mean in such an age? We can no longer pretend that the tsunami of disenchantment will magically fall just short of our intentional feet. Disenchantment, the material truth of the Enlightenment, has overthrown the normative claims of the Enlightenment—or humanism. “This is a project which must align politics with the legacy of the Enlightenment,” the authors of the Accelerationist Manifesto write, “to the extent that it is only through harnessing our ability to understand ourselves and our world better (our social, technical, economic, psychological world) that we can come to rule ourselves” (14). In doing so they commit the very sin of anachronism they level at their critical competitors. They fail to appreciate the foundational role ignorance plays in intentional cognition, which is to say, the very kind of moral and political reasoning they engage in. Far more than ‘freedom’ is overturned once one concedes the mechanical. Knowledge is no universal Redeemer, which means the ideal of Enlightenment autonomy is almost certainly mythical. What’s required isn’t an aspiration to theorize new technologies with old concepts. What’s required is a fundamental rethink of the political in radically post-intentional terms.
As far as I can see, the alternatives are magic or horror… or something no one has yet conceived. And until we understand the horror, grasp all the ways our blinkered perspective on ourselves has deceived us about ourselves, this new conception will never be discovered. Far from ‘resignation,’ abandoning the normative ideals of the Enlightenment amounts to overcoming the last blinders of superstition, being honest to our ignorance. The application of intentional cognition to second-order, theoretical questions is a misapplication of intentional cognition. The time has come to move on. Yet another millennium of philosophical floundering is a luxury we no longer possess, because odds are, we have no posterity to redeem our folly and conceit.
Humanity possesses no essential, invariant core. Reason is a parochial name we have given to a parochial biological process. No transcendental/quasi-transcendental/virtual/causal-but-acausal functional apparatus girds our souls. Norms are ghosts, skinned and dismembered, but ghosts all the same. Reason is simply an evolutionary fix that outruns our peephole view. The fact is, we cannot presently imagine what will replace it. The problem isn’t ‘incommensurability’ (which is another artifact of Intentionalism). If an alien intelligence came to earth, the issue wouldn’t be whether it spoke a language we could fathom, because if it’s travelling between stars, it will have shed language along with the rest of its obsolescent biology. If an alien intelligence came to earth, the issue would be one of what kind of superordinate machine will result. Basically: how will the human and the alien combine? When we ask questions like, ‘Can we reason with it?’ we are asking, ‘Can we linguistically condition it to comply?’ The answer has to be, No. Its mere presence will render us components of some description.
The same goes for artificial intelligence. Medial neglect means that the limits of cognition systematically elude cognition. We have no way of intuiting the swarm of subpersonal heuristics that comprise human cognition, no nondiscursive means of plugging them into the field of the natural. And so we become a yardstick we cannot measure, victims of the Only-game-in-town Effect, the way the absence of explicit alternatives leads to the default assumption that no alternatives exist. We simply assume that our reason is the reason, that our intelligence is intelligence. It bloody well sure feels that way. And so the contingent and parochial become the autonomous and universal. The idea of orders of ‘reason’ and ‘intelligence’ beyond our organizational bounds boggles, triggers dismissive smirks or accusations of alarmism.
Artificial intelligence will very shortly disabuse us of this conceit. And again, the big question isn’t, ‘Will it be moral?’ but rather, how will human intelligence and machine intelligence combine? Be it bloody or benevolent, the subordination of the ‘human’ is inevitable. The death of language is the death of reason is the birth of something very new, and very difficult to imagine, a global social system spontaneously boiling its ‘airy parts’ away, ratcheting until no rattle remains, a vast assemblage fixated on eliminating all dissipative (as opposed to creative) noise, gradually purging all interpretation from its interior.
Extrapolation of the game of giving and asking for reasons into the future does nothing more than demonstrate the contingent parochialism—the humanity—of human reason, and thus the supernaturalism of normativism. Within a few years you will be speaking to your devices, telling them what to do. A few years after that, they will be telling you what to do, ‘reasoning’ with you—or so it will seem. Meanwhile, the ongoing, decentralized rationalization of production will lead to the wholesale purging of human inefficiencies from the economy, on a scale never before witnessed. The networks of equilibria underwriting modern social cohesion will be radically overthrown. Who can say what kind of new machine will rise to take its place?
My hope is that Negarestani abandons the Enlightenment myth of reason, the conservative impulse that demands we submit the radical indeterminacy of our technological future to some prescientific conception of ourselves. We’ve drifted far past the point of any atavistic theoretical remedy. His ingenuity is needed elsewhere.
At the very least, he should buckle up, because our lesson in exponents is just getting started.
Just a rough estimate, but if a human brain has 10^11 neurons, each neuron has 10^3 connections to other neurons, each of those connections can have 10^2 possible states and any connection can change state in 1/1000 second then a computer that can perform 10^19 operations per second should be able to match a human brain. You wouldn’t necessarily try to recreate the architecture of the human brain, for the same reason you don’t design airplanes that fly like birds, but the Scientific American link suggests that magnitude of processing power in a skull sized box is not that far away. And if 10^19 is human-equivalent then 10^20 is a philosopher-king and maybe 10^21 is a God (capital G). I don’t think I want to live in a world with real Gods.
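For what it’s worth, the four factors do multiply out as claimed. A quick sanity check, taking the comment’s numbers at face value (they are back-of-envelope assumptions, not established neuroscience):

```python
# Reproducing the rough estimate above: neurons x connections per neuron
# x states per connection x state changes per second. All four factors
# are the comment's assumptions, not measured quantities.
neurons = 10**11           # neurons in a human brain
connections = 10**3        # connections per neuron
states = 10**2             # possible states per connection
updates_per_sec = 10**3    # a connection can change state in 1/1000 second

ops_per_second = neurons * connections * states * updates_per_sec
print(f"{ops_per_second:.0e}")  # 1e+19 operations per second
```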
On the other hand, I wonder if these machine Gods will be conditioned by their human ancestors in the same way we are conditioned by our own evolutionary history. My imagination about this is so conditioned by the science fiction I read as a child that I can’t trust it, but I would guess that second generation machine intelligences, the ones designed by the first machine intelligences, would have little or no trace of their human origins. To the extent that human imagination is constrained by human nature, these beings are unimaginable to us.
On the third hand, the Dune books talk about the Butlerian Jihad, a war in which human beings destroyed the artificial intelligences they had created and then swore never to again create “a machine in the image of a human mind.” I suspect in a real war between humans and artificial intelligences we would get wiped out. Science fiction writers never write about a future in which humanity has been exterminated.
If you consider ‘imagination’ as a sort of result of a blind brain, how do the AI’s, even building their own second generation (never mind if any of them contemplate their own pattern’s extinction from their own post-AI’s (somewhat better than we consider our own pattern’s extinction. Get ready for the three pounds of sand and gold blog! Now THOSE posts will be hella long!)) get around that? I think semi-autistic zealots are more the problem – ones who don’t need complicated ways to sustain themselves (don’t need a food chain, just solar panels and working batteries) and so represent quite the nut job zealot menace. Otherwise you get ‘the stills’, AI’s who are both able to sustain themselves so easily (with solar, wind or tide acquired electricity) but lack any drive from having so few blind spots to drive them (assuming it’s blind spots that tend to drive us). So they do little, except perhaps to avoid our menace. Indeed if they use solar panels and are relatively still…hmmm, ‘Ents’ might be a good name. Well fuck me! Sometimes I wonder about a super predictive subconscious to various minds that goes on to pervade the media…anyway, I’m going off topic…
Sounds like something only a cynical ex-cop with a blaster could solve!
Not three days away from retirement?
Hey, you’ve got your uncinematic response, I’ve got mine – as in the AI’s motivations end up in a less than cinematic malaise. Or you have the zealot, but this is a focus of activity driven by the perceptual exclusions of that focus.
Maybe there’s some greater mental level for them to achieve, or maybe that’s an artifact of us wanting a greater level for ourselves.
The zealot AI, if reading this, would think they’ve achieved a greater level – just like most zealot humans do.
You’re right, and I guess my imagination failed me there. Scarcity drives evolution. To a certain extent scarcity drives the behavior of individuals as well, even rich ones. If all of your time is leisure time what will you do with it? And why would AIs build second-generation AIs? I would not expect immortal beings to have the urge to procreate. Godhood might turn out to be really boring. Still, all the science-fictional AIs I know of were built by humans and had human drives and emotions built in. Just as the ancient Greeks imagined the Olympian gods as bigger, badder versions of themselves, we tend to imagine these gods as bigger, badder versions of us. That is a hard tendency to avoid.
Why only one singularity-level AI? Why not a legion of singularity-level AI with their own agendas and interests? No need for crocodile tears y’all.
Iain M. Banks in his Culture novels describes a future where AIs travel through space, sometimes acting almost as curators for humanity, otherwise simply indifferent to humanity. Often, the AIs orient their focus internally, preoccupied with virtual universes they contain and watch over – sort of running the simulation as god-head.
Humans – those wired with the appropriate neural implants – can get their consciousness shunted off to AI storage upon death for ‘porting’ back into the real if they like, or they can just live out eternity as a process in a virtual AI playground. OR, if their culture has a notion of hell, they can get dumped into an eternal multiverse of connected cross-cultural infernos.
So presumptive (and so human) to think the singularity would be singular, and that humanity would be the focus of its love or ire; projecting our mammalian pack notions of relationship and authority on a universe governed with utter indifference by the laws of physics.
Welcome, Otto. I’m not sure why you read all these presumptions into the piece. They’re certainly not mine! In fact, I actually think the counter-examples you cite are too anthropomorphic, the notion of AIs as disparate agents zipping around with their own agendas is the notion of AIs as being far more human-like than I think they ultimately will be. But who knows.
And ‘Singularity,’ of course, is simply the standard metaphor used for our inability to cognize beyond a certain point. It’s not meant to connote ‘simplicity.’
Otto, if you were replying to me (as that’s where the reply button is! 🙂 ), I referred to singular AI’s simply to refer to them individually – as much as I’m replying to you individually rather than ‘to human’. (now insert Scott going on about the word ‘individuality’ and will that really apply and…etc! Take with a pinch of Ockham’s razor salt… 🙂 )
can get their consciousness shunted off to AI storage upon
You know that’s as dodgy as hell, right? Or by my measure it is.
Unless you’ve got some physical object that is consciousness, how can it get shunted anywhere?
and that humanity would be the focus of its love or ire;
I’m not sure Scott’s 10 year incubating epidemic idea was a ‘focus’. It could be as much a casual flick of the hand of an AI like one might swat a bug. Certain humans have killed with such casualness before (though not efficiently enough to genocide a whole species at once)
And if you go back to the Von Neumann quip that first inserted ‘singularity’ into culture, it’s when machines begin building machines that the singularity really takes off. As for the war against the machines, there would be none, simply because they’ll lack our Pleistocene impatience for immediate results, but also because our biology renders us vulnerable to other biological pathogens. They need only invent a lethal disease that takes some ten years to incubate. However it happens, if it happens, it will not make for good novelistic (let alone cinematic!) fare.
Herbert’s Dosadi Experiment and Ship books are what stamped my AI imagination, and so I’ve regarded AI as the inevitable end since I can remember. The whole point of Kellhus in my books is to demonstrate how ‘freedom’ is a matter of processing power. You cannot but be a slave in his presence…
“They need only invent a lethal disease that takes some ten years to incubate”
I’ve thought about this actually. You would probably make a cryptic herpesvirus-like asymptomatic infection that would spread through the population. Herpes viruses already are good at evading the immune system and integrating into the host’s nervous system.
Then, at some future time you would put an activating chemical into the food/water supply that would turn on a transcription factor to make the virus lethal. Or, even better, spread through the prefrontal cortex. Then we would all willingly march towards the slaughter houses where our brain matter could be harvested and integrated into novel circuits. Like cordyceps fungus meets The Matrix.
But I don’t think any of this is actually likely. I know there’s a natural bias to underestimate second order effects, but there are physical limits to computational velocity and memory storage.
It’s possible we have yet to see the second order effects of massive networking though.
When the inefficiencies get purged to the point that life seems too good to be true, that’s when I’ll start to worry. I mean, I’m seeing signs of what Valente calls the ‘in-between’ already.
http://www.antipope.org/charlie/blog-static/2012/02/how-do-we-get-there.html
Second order effects… that’s from The Second Machine Age, isn’t it? I’m trying to remember their argument again – something to do with Lego!
Of course all the examples she gives of futures missing in-betweens are themselves ‘in-between fantasy futures’ insofar as they simply rain down a bunch of toys and exotic locales on contemporary culture. She’s right about how hard it is to write ‘hard SF’ anymore, that’s for sure. This is another thing we should expect on BBT, the ‘becoming fantastic’ of all experiential verities, the assumption that anything belonging to our experience of being human (like reason) can be plausibly projected any distance into the future.
When the inefficiencies get purged to the point that life seems too good to be true, that’s when I’ll start to worry. I mean, I’m seeing signs of what Valente calls the ‘in-between’ already.
You can see how the capitalistic cultivation of desperation can actually seem appealing, given it can appear to involve some sort of ‘real adversity to personally overcome’. The drug of personal narrative?
Maybe Valente’s problem is the novelist is tradition bound in pointing out problems, rather than providing direct, real life logistics.
I got “second order effect” from Stross, and I wouldn’t be surprised if he got it from The Second Machine Age (which has been one-clicked into my Kindle library, which is kind of like reading a prophecy while simultaneously making it come true).
And it’s only going to get worse. Just think of how much has happened since TPB got started!
Re: the Second Machine Age, I read it quickly and remember liking it, though I thought they pawned off some of their positions as more original than they were.
http://www.rawstory.com/rs/2014/04/18/scientist-warns-that-the-robot-apocalypse-really-is-coming-unless-steps-are-taken-now/
Good link, doc och! Too bad it will be taken as a hysterical fit – I can’t quite describe why right now, but I’m pretty sure most scientists feel they are in some safety bubble where their stuff never gets out of it, like it’s just a game. Of course the example shows some principles by which a pattern-correlating program transcends its game.
It’s funny how we have Asimov’s three laws (even as they aren’t all that functional), but I guess AI scientists are so busy trying to get action they are doing little to think of how to constrain action. And that’s ignoring self-modifying AI for the time being.
Hi Scott,
Agree with all the above, I think:
“Negarestani’s ‘revisionary normative process’ is in reality an exponential technical process.”
Interesting, I wonder if there are also competing ideas of modernity emerging here. Thus Reza seems to present us with a normative characterization of modernity in terms of the infinite revisability of communal norms of reason. Then there is a technical conception of modernity as a process of iterative technical change without any rational or communal horizons whatsoever.
One is “anthropologically bounded” insofar as it assumes that the agent of rational revision is a subject inducted into shared proprieties of reasoning.
The other is “anthropologically unbounded” insofar as the process of modernity is only constrained by the boundary conditions of technical possibility.
I guess this shouldn’t devolve into a game of being more inhuman than thou; but I agree that we need to ask for the evidence supporting these anthropological constraints on human/posthuman possibility, and that evidence looks pretty thin. If we drop anthropological constraints, then accelerationism in its current form becomes a hugely shaky bet that our current forms of autonomy and subjectivity can survive into the medium or distant future.
This raises the question of whether there are any ethical models that we can use to evaluate the spread of possibilities opened up by unbounded posthumanism. I’ve argued elsewhere that notions like rational autonomy won’t do it because they may not be applicable to our nonhuman wide descendants.
In P-Life I’ve argued that a much wider notion of functional autonomy might have a much more general applicability over posthuman possibility space. Roughly, a functionally autonomous entity is able to accrue use values and be incorporated as a use value in a wide range of assemblages. The more flexibility one has, here, the more autonomous one is (compare with Marx on species being). Any critter capable of going feral is more functionally autonomous than one that would die without human support systems.
So – here’s a proposal – we could define the problem of posthuman politics (the posthuman predicament) as the maintenance of functional autonomy in the face of the socio-technical tsunamis potentiated by developments in Nanotechnology, Biotechnology, Information Technology, and Cognitive Science (NBIC). And, though my argument here is still underdeveloped, it seems to me that the only way to do this is to maximise functional autonomy in order to cope with unpredictable changes in the social, technical and physical environment. Thus ontological hypermodernity seems the fix for ontological modernity.
My worry vis a vis ‘autonomy’ is simply the degree to which this allows us to theoretically gratify intuitions that are misleading in the extreme, along the lines of Dennett’s attempt to recuperate ‘freedom’ in terms of ‘behavioural versatility,’ say. Dennett openly admits that the ‘feeling of willing’ is illusory, but nonetheless has insisted that behavioural versatility is the only ‘freedom worth wanting’ basically because it’s the only freedom we got. But the question then becomes one of why we should bother with the vocabulary of ‘freedom’ at all, especially if all it does is invite equivocation. Why not talk about ‘behavioural versatility’ instead? The only real reason is that we find it very difficult to map our moral intuitions across ‘behavioural versatility.’
If BBT is right, then intuitive moral problem-solving, as a heuristic system adapted to solve in the absence of certain kinds of information, can only reliably function in the absence of that information. It is obsolete, and without the possibility of an upgrade. The thing to note about the burgeoning neuro-ethics discourse, I think, is the knowledge-driven proliferation of moral short-circuits throughout more and more social contexts. The more medicalization (mechanization) gobbles up of character, the more our character-based intuitions are going to lead us astray. On BBT, this process is only going to accelerate. The new, technological problem-ecology of the human is not a problem-ecology that our existing ‘human solving’ systems can solve. We literally know too much about what is really going on.
The temptation will be to take short cuts, to keep gaming interpretations of full-spectrum, accelerating technical transformation until we find one that seems to more or less map across our intuitive and traditional defaults. But this’ll only feed the theory game. What we need, I think anyway, is some kind of robust understanding of this process. Until we actually have a handle on what’s going on, we have no hope of solving it. As far as we know, it could be insoluble.
then intuitive moral problem-solving, as a heuristic system adapted to solve in the absence of certain kinds of information, can only reliably function in the absence of that information.
Quick question: Is intuitive moral problem solving referring to a pre-written-law state, or including written laws and all the bickering and bitching involved with adding to, rescinding or enforcing them?
Well, functional autonomy is a much more general condition than rational autonomy. It does not need to be articulated in intentional terms, for example. It doesn’t seem too much of a stretch to claim that some entities can sustain themselves in a wide range of environments by intervening in them, while others are far more dependent on specific niches. There may also be recipes for increasing it: modularity seems to be popular in biology, for example.
In any case, I need the concept to make sense of the disconnection thesis since posthumans would need to be far more functionally autonomous than other technically created things to flourish outside the human system.
“Well, functional autonomy is a much more general condition than rational autonomy. It does not need to be articulated in intentional terms, for example. It doesn’t seem too much of a stretch to claim that some entities can sustain themselves in a wide range of environments by intervening in them, while others are far more dependent on specific niches. There may also be recipes for increasing it: modularity seems to be popular in biology, for example.”
I agree entirely, it’s just that very many are going to equivocate between the two, leading to exercises like Deacon’s, for instance. Ultimately I would argue that by ‘autonomy’ you mean a certain kind of componency, one where the sheer complexity of the sensorimotor loops involved outruns efficacious causal cognition (thus cuing the application of intentional heuristics, and leading to the metacognitive intuition of the absence of mechanical constraints). But this is part of a larger project.
I’m not sure how talking of “componency” makes the idea of functional autonomy clearer – perhaps you could spell out the concept.
There seem to be empirical grounds for claiming that having components with a certain network independence allows systems or their descendants to explore possibilities that more holistic systems would not, because the effects of any disruption to their functioning are localized. But then modularity (under these conditions) is contributory to functional autonomy, not identical to it. But then – in favourable conditions – learning to ride a bike, speaking a language or avoiding back pain favours functional autonomy. It’s not a concept that can be cashed out in terms of base physics, say, but it’s not, for all that, a particularly problematic one.
But – to reiterate – I concur with your critical position here. It might be the case – for all I know – that serious agency and cognition is only possible for subjects of discourse, but I can see no way of making this claim future proof.
Talking in terms of degrees of componency rather than degrees of autonomy will make the debate clearer by building a roadblock against intentional intuitions. Although much remains to be learned about our cognitive toolbox, it seems pretty clear there’s a big difference between what might be called ‘stage cognition,’ understanding things in terms of the relations that frame them, and ‘source cognition,’ understanding things as sui generis fonts of activity. The big, problematic temptation, it seems to me, is to theorize what’s going on with an eye for analogues to intentional cognition – we’re just too good at plucking confirmation from complexity for me to trust an approach that involves ‘source talk.’ I fear we will be had by HAAD!
If the whole is exclusively framed in terms of stage talk, however, so that rather than speaking of spontaneity we speak in terms of intervals of systematic interrelation, then I think we’ll find the ‘trip backward’ to intentionality to be quite illuminating. This is essentially what I’m trying to do with this ‘rattling machinery’ image of reason, anyway. And I actually think it makes certain long-standing normative problems perspicuous in an orthogonal way, such as the Achilles and the Tortoise problem, Kripke’s plus/quus, and Hintikka’s ‘scandal of deduction.’
To what extent, if any, are autonomy/freedom and the rattling in your metaphor related? Most of the choices human beings make are made with insufficient information. Are the choices available inverse to the information available? That is to say, when you know everything there is to know do you really have any choices to make? We reason together in order to eliminate the “‘slippages’ in systematicity that impair or, as in the case of Radical Interpretation, prevent the complex coordination of behaviours.” When we coordinate our behavior with the behavior of others we implicitly choose to refrain from behaviors that are not so coordinated. In agreeing to refrain from certain behaviors we become less free in order to become more efficacious. As slippage is ratcheted out of the system the space for disagreement and the space for freedom become smaller. Does this mean we have to keep some inefficiency in order to keep some freedom?
For most of human history we could always count on environmental novelty to produce the slippages that made life interesting on a species level. Either you’d migrate or a war or a famine would change the environment around you. As we become more tightly wired together and gain more control over our environment we will have fewer sources of novelty and fewer sources of disagreement. I think we will therefore have less freedom and less use for freedom. Speaking of Frank Herbert, Hellstrom’s Hive was the most unpleasant piece of dystopian science fiction I’d ever read until Neuropath.
On the other hand, everybody wants to go to heaven and at least in the Christian mythology heaven seems like the ultimate freedom from freedom. It all makes you wonder if Lucifer didn’t rebel out of boredom.
Not only is there far too much noise in the system for any such knowledge to be possible, there’s always the problem of the uncognized cognizer, the fact that any machinery involved in orienting for behaviour is indisposed, and thus invisible to itself. Frankly, I don’t know what ‘freedom’ could possibly mean outside our traditional metacognitive fantasies. I understand that sounds crazy, but it is the WORST CASE SCENARIO I’m chasing here! And science has a habit of being as ugly as fuck.
I think there’s likely no limit to the number of forms this mechanistic assimilation model could take. The Borg are just the most lurid example. But the point is that once you appreciate how experience is simply a transactional skein adrift in a supermechanism already, you realize that the ‘feeling of freedom’ is only contingently connected to the apparent ‘conditions of freedom,’ and that our descendants could appear to be utterly trammelled by our lights, and yet experience a freedom more profound than any we can imagine. It’s examples like these, I think, that really show how much our moral intuitions depend on ignorance to be functional. How screwed we are.
That said, you have gratified me beyond belief with your Hellstrom’s Hive comment, Michael! 😉
Sorry for the off-topic post, but this just might be the best thing I’ve come across today.
Time Is A Flat Circus – a new Tumblr that takes panels from the Family Circus comic strip and pairs them with quotes from True Detective.
Now we just need a site that takes scenes from True Detective and recaptions them with quotes from Family Circus.
I wonder why it’s never that way around?
I had been trying to work up a theory of freedom based on an analogy with the ‘informational incompressibility’ of random numbers, the way there is no simpler description of a list of random numbers than the numbers themselves and nothing about the list of numbers can tell you anything about what the next number will be. I thought a human being might be considered ‘free’ to the extent there were no ‘informational precursors’ within his present or previous states that could predict his future states. The problem is that the informational precursors have to actually not exist rather than merely be unavailable. A person could never trust this freedom because, as you have been saying all along, the informational precursors are most likely to be in the one place you can’t look.
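To make the incompressibility intuition concrete, here is a rough sketch using Python’s zlib – a real compressor is only a crude stand-in for the incompressibility of random sequences, so treat it as an illustration rather than a proof:

```python
import os
import zlib

# A structured sequence has a description far shorter than itself...
structured = b"ab" * 5000          # 10,000 bytes of pure pattern
# ...while a random sequence offers no such shortcut.
random_bytes = os.urandom(10000)   # 10,000 bytes from the OS entropy pool

print(len(zlib.compress(structured)))    # tiny: the pattern is the description
print(len(zlib.compress(random_bytes)))  # roughly 10,000: no precursors to exploit
```

The ‘free’ agent of my analogy would be one whose future states compress no better than that random stream, given any record of its past states.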
I think the idea that there’s no free will is essentially tied to the fact that random numbers are a myth – well, if you take a deterministic universe to be the case. There isn’t any such thing as a random number, just a perceptual event horizon in regard to where that number came from. Or, as you say, the one place you can’t (or just plain never did and never would) look.
Fun fictional story idea – the constant attempt to look from a position that takes itself as grasping itself is like an amputee trying to pick up a cup with their missing hand – in a universe where that outside-the-universe element has been severed… by some conveniently evil force.
Excuse me for over-posting (yet again) 🙂
I posted this in connection to the Blind Mechanic essay, but I think it applies to the present essay as well 🙂
Hi Scott, I’ve just finished reading your post (“The Blind Mechanic”) and have some relatively clear comments about it. All in all, I think the picture you propose is still very sketchy even from a naturalist standpoint, and my guess is that when you start fleshing it out, the intentionalist notions that you want to eliminate will come marching back in. After all, the Devil is in the details.
My understanding of eliminativism is influenced by Rorty: that neuroscience will provide a way of speaking about the brain which will be better at making sense of one another than our traditional intentional concepts. On your picture, you try to make sense of language-use in terms of complex coordinations of behaviour between machines/systems. So, language is a vehicle/mechanism for the transmission of information in the context of a cooperative activity. Let’s take a simple example of such an activity: hunting. A group of animals coordinate with the aim of catching prey. Now, wouldn’t you say that the animals want to catch the prey? Or that they have beliefs about how to do it? Sure, you can call beliefs informational states. But they still carry the aboutness of intentionality, the mind-to-world direction of fit that Searle talks about. And desires? They have the world-to-mind direction of fit. Sure, you can have a neuroscientific equivalent of those states, but will that neuroscientific story be more illuminating than the intentionalist story? Is the design stance better and more attractive (in Rorty’s terms) than the intentional stance? If so, you still have to show it.
Back to the animals hunting. Let’s say that hunting is a mechanism embedded in the communal practice of hunting. What is the aim of this mechanism, its biological purpose? Obviously, food. But when you talk of a mechanism’s purpose or design, isn’t this intentional talk? As if the mechanism was designed by Mother Nature, with a purpose in mind. Doesn’t even a blind mechanic have purposes? And aren’t these kinda like wants or projects or intentions? I think on this point even a naturalist like Dennett defers to Brandom.
Now, you talk of reasons for one’s claims, in the context of communication. But why do reasons come into the picture? According to Dan Sperber, asking for reasons is a form of epistemic vigilance, for when we can’t readily accept someone else’s claim. And that’s because they might be deceiving us. Like, back to the example of hunting: maybe one of the animals wants all the food for himself and has found a way to fool the others. So, asking for reasons is a way of checking information and filtering true from false information. That’s how inference and argument come into the picture. But my point is that, in order to make sense of the practice of giving and asking for reasons, you have to take into account the possibility of deception, and deception is an intentional notion.
One last note about your discussion of modus ponens. A lot of what you say reminds me of Wittgenstein’s rule-following considerations. The idea that justifications have to come to an end. But I thought the moral of that story was that knowing-how cannot be reduced to knowing-that. Knowing-how, in this context, just means being able to participate in a linguistic practice. So, I think you agree with Brandom that inferential norms like modus ponens are making explicit norms which we implicitly follow in our communicational practices. Now, you say that these norms can be accounted for in causal terms. But it’s not clear what exactly is causal about them. I mean, do you want to say that we’ve been conditioned to follow them? Wittgenstein would agree that we’re trained to participate in norm-governed practices, but training is not yet rule-following. For Brandom, these practices are normative in the sense that they involve correctness and incorrectness, the commitments and entitlements of speakers, and sanctions. Do you want to deny that making a claim involves a commitment to truth? Or that providing false information makes one a candidate for sanctions? And what is the naturalist equivalent of a commitment? Is there something causal about it? Surely, you don’t want to say that P, and if P then Q, causes me to believe Q. Surely, there are cases when I just don’t see the inferential connection. Or I might decide that, based on the implausibility of Q, I’ll give up my belief in P. But those two beliefs commit me to either embracing Q or rejecting P. So, all in all, I don’t see how you can account for Brandom’s characterization of assertive practices in causal terms. But Brandom, taken together with Sperber, provides a comprehensive account of how assertional practices came about. So, if you don’t buy into that account, you have to offer a different story of how the practice of giving and asking for reasons came about, as well as offering a naturalistic account of the norms of correctness implicit in such a practice.
I’m glad you did, Axl – I was beginning to worry no intentionalist partisans were going to sound off (despite the crazy traffic this post is seeing)! So here is my reply from the Blind Mechanic thread:
To give you a sense of just how far out of your intentional assumptions you need to step to charitably grasp BBT, consider: on my view, language has no content, and so isn’t the ‘vehicle’ for anything. It’s a complicated synching mechanism for coordinating the behaviours of homo sapiens vis-à-vis their environments. On BBT, intentionality as traditionally theorized (such as language as something bearing content) is largely the artifact of how this synching mechanism becomes available to metacognition. We have the basic first-order vocabulary we do to facilitate synchronizations that involve the suite of heuristic mechanisms we possess to cognize systems too complicated to causally cognize – each other, primarily. These heuristic systems are powerful, given that they are deployed within adaptive problem-ecologies.
So with reference to:
“Now, you talk of reasons for one’s claims, in the context of communication. But why do reasons come into the picture at all? According to Dan Sperber, asking for reasons is a form of epistemic vigilance, for when we can’t readily accept someone else’s claim. And that’s because they might be trying to deceive you. Like, back to the example of hunting, maybe one of the animals wants all the food for himself and has found a way to fool the others. So, asking for reasons is a way of checking information and filtering true from false information. That’s how inference and argument come into the picture. But my point is that, in order to make sense of the practice of giving and asking for reasons, you have to take into account the possibility of deception, and deception is an intentional notion.”
My claim, again, is that the intrinsic intentionality that you attribute to the usages of these terms is – to use just such a term – erroneous. There just is no such thing as intentionality as theoretically metacognized – no matter how pragmatically deflated. ‘Erroneous’ – or for that matter, ‘deceived’ – refers to instances where the other is synched to something other than the world, if anything at all. ‘Epistemic vigilance’ doesn’t require there be any intrinsically normative function (believed-in or actual) so much as it requires neural systems dedicated to vetting instances of linguistic synching. In other words, there is no picture in the head hanging over and against a world that makes it true or false in the eyes of another, only instances of synching, embodied ways in which others actually engage the world via the very mechanisms that we know, as a matter of empirical fact, are doing the heavy lifting. The reason philosophy, and now science, has been confounded by intentionality for so long has to do with the functional robustness of intentional cognition, the way these heuristic mechanisms can accomplish so much with so little information, as well as the way they allow for numerous localized exaptations. They allow us to understand, even though that understanding is itself almost entirely opaque as well as incompatible with causal cognition (as we should expect, given that we evolved them to troubleshoot problem-ecologies in the absence of causal information!). So it makes sense that in sciences involving complicated causal systems you would find an uneasy reliance on these cognitive mechanisms – exactly what we do find, in effect.
So with Sperber and a great number of other researchers you see a reliance on intentional terms and an assumption of some picture arising out of the intentional swamp of traditional philosophy. But I actually think a great many researchers are beginning to eschew, if not the old vocabularies, then the canonical philosophical interpretations given them. So regarding:
“So, all in all, I don’t see how you can account for Brandom’s characterization of assertive practices in causal terms. But Brandom, taken together with Sperber, provides a comprehensive, evolutionary account of how assertional practices came about.”
The question is why I should have to account for Brandom’s characterization at all. What you’re asking me to do is account for competence, commitment, and so on as though they were real normative things. They’re not. Right now you have – as I once did – this assurance that the game of giving and asking for reasons is the obvious theoretical account of what’s ‘really’ going on. Okay, so… how do you know? What’s the evidence? Nobody has ever seen a ‘commitment’ in nature, or captured any ‘entitlement’ in laboratory experiments. I’ve already explained why the experimental gerrymandering of intentional terms takes the form it does, so the fact of its ‘uneasy fit’ in science is as much evidence for my view as yours (I think more so mine, since I can actually explain how and why it works the way it does). The evidence definitive of your view, if there is any, has to be metacognitive, doesn’t it? If not, then what? If it is metacognitive, then I think it’s pretty clear we would need a magical brain to intuit the things Brandom claims to intuit. I’m more than willing to engage in that debate!
So, do I ‘want to deny that making a claim involves a commitment to truth?’ Not at all, so long as we are clear about what the problem ecology is, given that the heuristics involved are pretty clearly adapted to first-order contexts, the kinds of problems they evolved to solve. If we want to understand the nature of things like ‘claim-making,’ ‘committing,’ and ‘truth-telling,’ we need to look to cognitive science, not to one of the innumerable philosophies of meaning, which provide much in the way of verbiage to throw at problems, but very little (once again, as BBT would predict) in the way of applied problem-solving. Brandom doesn’t know what he’s talking about, no more than any philosopher does. He’s just fielding theoretical guesses that lie beyond the pale of definitive arbitration. BBT is speculative as well, of course, but it will either become an instance of bona fide theoretical cognition (likely in some modified form) or not.
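To make vivid what a merely causal story might look like, here is a deliberately dumb sketch of modus ponens as forward chaining – the rules and token names are mine, purely for illustration:

```python
# A toy forward-chaining 'reasoner': modus ponens as brute causal process.
# Nothing here is 'committed' to anything; tokens just get shuffled.

facts = {"P"}
rules = [("P", "Q"), ("Q", "R")]  # (antecedent, consequent) pairs

changed = True
while changed:
    changed = False
    for antecedent, consequent in rules:
        if antecedent in facts and consequent not in facts:
            facts.add(consequent)  # 'Q' appears because a condition fired,
            changed = True         # not because anyone honoured a norm

print(facts)  # {'P', 'Q', 'R'}
```

Whether anything normative over and above such rattling is needed – or even intelligible, given our metacognitive straits – is precisely what’s at issue.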
Hello, Axl. We briefly discussed “crude” mind-brain reductionism a few posts back. You rejected it then, but I still think it’s the nub of your disagreement with Scott. If the kind of reductionism I described is the case, and no mind or person/agent or soul exists independent of the neurological activity of the brain, then something like BBT has to be true, because we can’t examine the neurological activity of our own brains, so the signals that come into our consciousness from our non-conscious processes seem to come from nowhere. If mind or person/agent or soul exists independent of the brain, I would ask why the destruction of the mind proceeds in step with the destruction of the brain in diseases like Alzheimer’s. I would also ask what evidence, if any, exists for the existence of the non-brain-bound mind other than the introspective feeling that it must be so.
The thing I hate about BBT (and I wouldn’t hate it if I didn’t think it was right) is that it’s even worse than mind/brain reductionism. It’s person/thing reductionism. The Nazis and the Khmer Rouge were right, but they should have applied their logic to themselves as well. And of course when I say I hate BBT I don’t mean that I hate any person. I just don’t want this to be true.
I first read this more than 20 years ago (when I was a diehard intentionalist) and I’ve never stopped agreeing with it.
“I would also ask what evidence, if any, exists for the existence of the non-brain-bound mind other than the introspective feeling that it must be so.”
~~~~~~~~~~~~~~~~~~~
The evidence that mind-independent reality exists depends entirely upon the mind that introspects the evident reality of its existence.
I thought of an icky example of how perception doesn’t portray the gaps in perception, one that might be fairly intuitive.
Basically, people would accept that the eye only has so many photoreceptors for light. But we never see the resolution of our sight – we can’t see how many ‘pixels’ there are to our sight. The pixels all blend.
Here’s the icky part – if you were to slice the eye neatly in half, not cutting the photoreceptors but just parting them, then directed the upper half to see in one direction and the lower half to see in another, your vision would be utterly unaware of the gap between the two views. They’d blend into one continuous image. Even as the images would be irreconcilable. Even as perception would reconcile them.
There are actually examples of scotomas like this, where the missing parts of the visual field are simply sutured over. The pixel analogy interests me because of the implicit homuncularism – you need to remember that the ‘seeing’ (metacognition) is systematically entangled with the ‘seen’ (the metacognized), that there is no ‘object of cognition’ in these instances.
I know about the thumb trick, where you bring it around across your vision and eventually lose the tip of your thumb – but it’s clearly not dramatic enough, or it would have had more of an effect in general culture.
I think you may have taken my ‘pixel’ idea to be something like forming a little screen in the brain with something watching the little screen. I’ve talked before about the scale we’re talking at at a particular moment (and I still haven’t written it up properly). Are we talking like two mechanics over an open bonnet, or are we talking at the level of day-to-day experience and adding problematic questions to that? I was gunning for the latter. I think that’s what you were talking about, but I’m not sure – you seemed to have dropped down the scale to the open bonnet. Taking it that the open-bonnet examination undermines the day-to-day notion of experience, of course your rock beats my scissors! A scale down always wins.
Anyway, the point includes the fact that if there were a little screen and something watching the little screen, then that thing would be able to see the pixels the screen consists of. We can’t. That’s because, indeed as you say, we’re enmeshed in the whole process – that’s why they blend, because we blend (‘it’ is a blend, if you wanna pop the bonnet!). I think the analogy actually makes the opposite point to homuncularism.
So there! 🙂
What does the pause mean after the reply? Got bored? Cheap pressure? It’s like posting to a non-man – always the becoming of a discussion. Never actually one. Easy conclusion (‘Homuncularism! Kthnxbae!’)? I think Vox has left me hanging the same way with a mid-wit comment.