Science, Nihilism, and the Artistry of Nature (by Ben Cain)


Technologically advanced societies may well destroy themselves, but there are two other reasons to worry that science rather than God will usher in the apocalypse, directly destroying us by destroying our will to live. The threat in question is nihilism, the loss of faith in our values and thus the wholesale humiliation of all of us, due to science’s tendency to falsify every belief that’s traditionally comforted the masses. The two reasons to suspect that science entails nihilism are that scientists find the world to be natural (fundamentally material, mechanical, and impersonal), whereas traditional values tend to have supernatural implications, and that scientific methods famously bypass intuitions and feelings to arrive at the objective truth.

These two features of science, the content of scientific theories and the scientific methods of inquiry, might seem redundant, since the point about methods is that science is methodologically naturalistic. Thus, the point about the theoretical content might seem to come as no surprise. By definition, a theory that posits something supernatural wouldn’t be scientific. While scientists may be open to learning that the world isn’t a natural place, making that discovery would amount to ending or at least transforming the scientific mode of inquiry. Nevertheless, naturalism, the worldview that explains everything in materialistic and mechanistic terms, isn’t just an artifact of scientific methods. What were once thought to be ghosts and gods and spirits really did turn out to be natural phenomena.

Moreover, scientific objectivity seems a separate cause of nihilism in that, by showing us how to be objective, paradigmatic scientists like Galileo, Newton, and Darwin showed us also how to at least temporarily give up on our commonsense values. After all, in the moment when we’re following scientific procedures, we’re ignoring our preferences and foiling our biases. Of course, scientists still have feelings and personal agendas while they’re doing science; for example, they may be highly motivated to prove their pet theory. But they also know that by participating in the scientific process they’re holding their feelings to the ultimate test. Scientific methods objectify not just the phenomenon but the scientist; as a functionary in the institution, she must follow strict procedures, recording the data accurately, thinking logically, and publishing the results, making her scientific work as impersonal as the rest of the natural world. In so far as nonscientists understand this source of science’s monumental success, we might come to question the worth of our subjectivity, of our private intuitions, wishes, and dreams which scientific methods brush aside as so many distortions.

Despite the imperative to take scientists as our model thinkers in the Age of Reason, we might choose to ignore these two threats to our naïve self-image. Nevertheless, the fear is that distraction, repression, and delusion might work only for so long before the truth outs. You might think, on the contrary, that science doesn’t entail nihilism, since science is a social enterprise and thus it has a normative basis. Scientists are pragmatic and so they evaluate their explanations in terms of rational values of simplicity, fruitfulness, elegance, utility, and so on. Still, the science-centered nihilist can reply, those values might turn out to be mechanisms, as scientists themselves would discover, in which case science would humiliate not just the superstitious masses but the pragmatic theorists and experimenters as well. That is, science would refute not only the supernaturalist’s presumptions but the elite instrumentalist’s view of scientific methods. Science would become just another mechanism in nature and scientific theories would have no special relationship with the facts since from this ultra-mechanistic “perspective,” not even scientific statements would consist of symbols that bear meaning. The scientific process would be seen as consisting entirely of meaningless, pointless, and amoral causal relations—just like any other natural system.

I think, then, this sort of nihilist can resist that pragmatic objection to the suspicion that science entails nihilism and thus poses a grave, still largely unappreciated threat to society. There’s another objection, though, which is harder to discount. The very cognitive approach which is indispensable to scientific discovery, the objectification of phenomena, which is to say the analysis of any pattern in impersonal terms of causal relations, is itself a source of certain values. When we objectify something we’re thereby well-positioned to treat that thing as having a special value, namely an aesthetic one. Objectification overlaps with the aesthetic attitude, which is the attitude we take up when we decide to evaluate something as a work of art, and thus objects, as such, are implicitly artworks.

 

Scientific Objectification and the Aesthetic Attitude

 

There’s a lot to unpack there, so I’ll begin by explaining what I mean by the “aesthetic attitude.” This attitude is explicated differently by Kant, Schopenhauer, and others, but the main idea is that something becomes an artwork when we adopt a certain attitude towards it. The attitude is a paradoxical one, because it involves a withholding of personal interest in the object and yet also a desire to experience the object for its own sake, based on the assumption that such an experience would be rewarding. When an observer is disinterested in experiencing something, but chooses to experience it because she’s replaced her instrumental or self-interested perspective with an object-oriented one so that she wishes to be absorbed by what the object has to offer, as it were, she’s treating the object as a work of art. And arguably, that’s all it means for something to be art.

For example, if I see a painting on a wall and I study it up close with a view to stealing it, because all the while I’m thinking of how economically valuable the painting is, I’m personally interested in the painting and thus I’m not treating it as art; instead, for me the painting is a commodity. Suppose instead that I have no ulterior motive as I look at the painting, but I’m bored by it, and so I’m not passively letting the painting pour its content into me, as it were. In that case I have no respect for such an experience, and since I’m not giving the painting a fair chance to captivate my attention, I’m likewise not treating the painting as art. I’m giving it only a cursory glance, because I lack the selfless interest in letting the painting hold all of my attention, and so I don’t anticipate the peculiar pleasure from perceiving the painting that we associate with an aesthetic experience. Whether it’s a painting, a song, a poem, a novel, or a film, the object becomes an artwork when it’s regarded as such, which requires that the observer adopt this special attitude towards it.

Now, scientific objectivity plainly isn’t identical to the aesthetic attitude. After all, regardless of whether scientists think of nature as beautiful when they’re studying the evidence or performing experiments or formulating mechanistic explanations, they do have at least one ulterior motive. Some scientists may have an economic motive, others may be after prestige, but all scientists are interested in understanding how systems work. Their motive, then, is a cognitive one—which is why they follow scientific procedures, because they believe that scientific objectification (mechanistic analysis, careful collection of the data, testing of hypotheses with repeatable experiments, and so on) is the best means of achieving that goal.

However, this cognitive interest posits a virtual aesthetic stance as the means to achieve knowledge. Again, scientists trust that their personal interests are irrelevant to scientific truth and that regardless of how they prefer the world to be, the facts will emerge as long as the scientific methods of inquiry are applied with sufficient rigor. To achieve their cognitive goal, scientists must downplay their biases and personal feelings, and indeed they expect that the phenomenon will reveal its objective, real properties when it’s scientifically scrutinized. The point of science is for us to get out of the way, as much as possible, to let the world speak with its own voice, as opposed to projecting our fantasies and delusions onto the world. Granted, as Kant explained, we never hear that voice exactly—what Pythagoras called the music of the spheres—because in the act of listening to it or of understanding it, we apply our species-specific cognitive faculties and programs. Still, the point is that the institution of science is structured in such a way that the facts emerge because the scientific form of explanation circumvents the scientists’ personalities. This is the essence of scientific objectivity: in so far as they think logically and apply the other scientific principles, scientists depersonalize themselves, meaning that they remove their character from their interaction with some phenomenon and make themselves functionaries in a larger system. This system is just the one in which the natural phenomenon reveals its causal interrelations thanks to the elimination of our subjectivity which would otherwise personalize the phenomenon, adding imaginary and typically supernatural interpretations which blind us to the truth.

And when scientists depersonalize themselves, they open themselves up to the phenomenon: they study it carefully, taking copious notes, using powerful technologies to peer deeply into it, and isolating the variables by designing sterile environments to keep out background noise. This is very like taking up the aesthetic attitude, since the art appreciator too becomes captivated by the work itself, getting lost in its objective details as she sets aside any personal priority she may have. Both the art appreciator and the scientist are personally disinterested when they inspect some object, although the scientist is often just functionally or institutionally so, and both are interested in experiencing the thing for its own sake, although the art appreciator does so for the aesthetic reward whereas the scientist expects a cognitive one. Both objectify what they perceive in that they intend to discern only the subtlest patterns in what’s actually there in front of them, whether on the stage, in the picture frame, or on the novel’s pages, in the case of fine art, or in the laboratory or the wild in the case of science. Thus, art appreciators speak of the patterns of balance and proportion, while scientists focus on causal relations. And the former are rewarded with the normative experience of beauty or are punished with a perception of ugliness, as the case may be, while the latter speak of cognitive progress, of science as the premier way of discovering the natural facts, and indeed of the universality of their successes.

Here, then, is an explanation of what David Hume called the curious generalization that occurs in inductive reasoning, when we infer that because some regularity holds in some cases, therefore it likely holds in all cases. We take our inductive findings to have universal scope because when we reason in that way, we’re objectifying rather than personalizing the phenomenon, and when we objectify something we’re virtually taking up the aesthetic attitude towards it. Finally, when we take up such an attitude, we anticipate a reward, which is to say that we assume that objectification is worthwhile—not just for petty instrumental reasons, but for normative ones, which is to say that objectification functions as a standard for everyone. When you encounter a wonderful work of art, you think everyone ought to have the same experience and that someone who isn’t as moved by that artwork is failing in some way. Likewise, when you discover an objective fact of how some natural system operates, you think the fact is real and not just apparent, that it’s there universally for anyone on the planet to confirm.

Of course, inductive generalization is based also on metaphysical materialism, on the assumptions that the world is made of atoms and that a chunk of matter is just the sort of thing to hold its form and to behave in regular ways regardless of who’s observing it, since material things are impersonal and thus they lack any freedom to surprise. But scientists persist in speaking of their cognitive enterprise as progressive, not just because they assume that science is socially useful, but because scientific findings transcend our instrumental motives since they allow a natural system to speak mainly for itself. Moreover, scientists persist in calling those generalizations laws, despite the unfortunate personal (theistic) connotations, given the comparison with social laws. These facts indicate that inductive reasoning isn’t wholly rational, after all, and that the generalizations are implicitly normative (which isn’t to say moral), because the process of scientific discovery is structurally similar to the experience of art.

 

Natural Art and Science’s True Horror

 

Some obvious questions remain. Are natural phenomena exactly the same as fine artworks? No, since the latter are produced by minds whereas the former are generated by natural forces and elements, and by the processes of evolution and complexification. Does this mean that calling natural systems works of art is merely analogical? No, because the similarity in question isn’t accidental; rather, it’s due to the above theory of art, which says that art is nothing more than what we find when we adopt the aesthetic attitude towards it. According to this account, art is potentially everywhere and how the art is produced is irrelevant.

Does this mean, though, that aesthetic values are entirely subjective, that whether something is art is all in our heads since it depends on that perspective? The answer to this question is more complicated. Yes, the values of beauty and ugliness, for example, are subjective in that minds are required to discover and appreciate them. But notice that scientific truth is likewise just as subjective: minds are required to discover and to understand such truth. What’s objective in the case of scientific discoveries is the reality that corresponds to the best scientific conclusions. That reality is what it is regardless of whether we explain it or even encounter it. Likewise, what’s objective in the case of aesthetics is something’s potential to make the aesthetic appreciation of it worthwhile. That potential isn’t added entirely by the art appreciator, since that person opens herself up to being pleased or disappointed by the artwork. She hopes to be pleased, but the art’s quality is what it is and the truth will surface as long as she adopts the aesthetic attitude towards it, ignoring her prejudices and giving the art a chance to speak for itself, to show what it has to offer. Even if she loathes the artist, she may grudgingly come to admit that he’s produced a fine work, as long as she’s virtually objective in her appreciation of his work, which is to say as long as she treats it aesthetically and impersonally for the sake of the experience itself. Again, scientific objectivity differs slightly from aesthetic appreciation, since scientists are interested in knowledge, not in pleasant experience. But as I’ve explained, that difference is irrelevant since the cognitive agenda compels the scientist to subdue or to work around her personality and to think objectively—just like the art beholder.

So do beauty and ugliness exist as objective parts of the world? I believe the answer is that those aesthetic properties are indeed as real as atoms and planets, understood as potentials to reward or to punish anyone who takes up something like the aesthetic attitude, including a stance of scientific objectification, given the extent of the harmony or disharmony in the observed patterns. The objective scientist is rewarded ultimately with knowledge of how nature works, while someone in the grip of the aesthetic attitude is rewarded (or punished) with an experience of the aesthetic dimension of any natural or artificial product. That dimension is found in the mechanical aspect of natural systems, since aesthetic harmony requires that the parts be related in certain ways to each other so that the whole system can be perceived as sublime or otherwise transcendent (mind-blowing). Traditional artworks are self-contained, and science likewise deals largely with parts of the universe that are analyzed or reduced to systems within systems, each studied independently in artificial environments that are designed to isolate certain components of the system.

Now, such reduction is futile in the case of chaotic systems, but the grandeur of such systems is hardly lessened when the scientist discovers how a system which is sensitive to initial conditions evolves unpredictably even though its behaviour is defined by a mathematical formula. Indeed, chaotic systems are comparable to modern and postmodern art as opposed to the more traditional kind. Recent, highly conceptual art or the nonrepresentational kind that explores the limits of the medium is about as unpredictable as a chaotic system. So the aesthetic dimension is found not just in part-whole relations and thus in beauty in the sense of harmony, but in free creativity. Modern art and science are both institutions that idealize the freedom of thought. Freed from certain traditions, artists now create whatever they’re inspired to create; they’re free to experiment, not to learn the natural facts but to push the boundaries of human creativity. Likewise, modern scientists are free to study whatever they like (in theory). And just as such modernists renounce their personal autonomy for the sake of their work, giving themselves over to their muse, to their unconscious inclinations (somewhat like Zen Buddhists who abhor the illusion of rational self-control), or instead to the rigors of institutional science, nature reveals its mindless creativity when chaotic systems emerge in its midst.

But does the scientist actually posit aesthetic values while doing science, given that scientific objectification isn’t identical with the aesthetic attitude? Well, the scientist would generally be too busy doing science to attend to the aesthetic dimension. But it’s no accident that mathematicians are disproportionately Platonists, that early modern scientists saw the cosmic order as attesting to God’s greatness, or that postmodern scientists like Neil deGrasse Tyson, who hosts the rebooted television show Cosmos, labour to convince the average American that naturalism ought to be enough of a religion for them, because the natural facts are glorious if not technically miraculous. The question isn’t whether scientists supply the world with aesthetic properties, like beauty or ugliness, since those properties preexist science as objective probabilities of uplifting or depressing anyone who takes up the aesthetic attitude, which attitude is practically the same as objectivity. Instead, the question here might be whether scientific objectivity compels the scientist to behold a natural phenomenon as art. Assuming there are nihilistic scientists, the answer would have to be no. The reason for this would be the difference in social contexts, which accounts for the difference between the goals and rewards. Again, the art appreciator wants a certain refined pleasure whereas the scientist wants knowledge. But the point is that the scientist is poised to behold natural systems as artworks, just in so far as she’s especially objective.

Finally, we should return to the question of how this relates to nihilism. The fear, raised above, was that because science entails nihilism, the loss of faith in our values and traditions, scientists threaten to undermine the social order even as they lay bare the natural one. I’ve questioned the premise, since objectivity entails instead the aesthetic attitude which compels us to behold nature not as arid and barren but as rife with aesthetic values. Science presents us with a self-shaping universe, with the mindless, brute facts of how natural systems work that scientists come to know with exquisite attention to detail, thanks to their cognitive methods which effectively reveal the potential of even such systems to reward or to punish someone with an aesthetic eye. For every indifferent natural system uncovered by science, we’re well-disposed to appreciating that system’s aesthetic quality—as long as we emulate the scientist and objectify the system, ignoring our personal interests and modeling its patterns, such as by reducing the system to mechanical part-whole relations. The more objective knowledge we have, the more grist for the aesthetic mill. This isn’t to say that science supports all of our values and traditions. Obviously science threatens some of them and has already made many of them untenable. But science won’t leave us without any value at all. The more objective scientists are and the more of physical reality they disclose, the more we can perceive the aesthetic dimension that permeates all things, just by asking for pleasure rather than knowledge from nature.

There is, however, another great fear that should fill in for the nihilistic one. Instead of worrying that science will show us why we shouldn’t believe there’s any such thing as value, we might wonder whether, given the above, science will ultimately present us with a horrible rather than a beautiful universe. The question, then, is whether nature will indeed tend to punish or to reward those of us with aesthetic sensibilities. What is the aesthetic quality of natural phenomena in so far as they’re appreciated as artworks, as aesthetically interpretable products of undead processes? Is the final aesthetic judgment of nature an encouraging, life-affirming one that justifies all the scientific work that’s divorced the facts from our mental projections or will that judgment terrorize us worse than any grim vision of the world’s fundamental neutrality? Optimists like Richard Dawkins, Carl Sagan and Tyson think the wonders of nature are uplifting, but perhaps they’re spinning matters to protect science’s mystique and the secular humanistic myth of the progress of modern, science-centered societies. Perhaps the world’s objectification curses us not just with knowledge of many unpleasant facts of life, but with an experience of the monstrousness of all natural facts.

Neuroscience as Socio-Cognitive Pollution

Want evidence of the Semantic Apocalypse? Look no further than your classroom.

As the etiology of more and more cognitive and behavioural ‘deficits’ is mapped, more and more of what once belonged to the realm of ‘character’ is being delivered to the domain of the ‘medical.’ This is why professors and educators more generally find themselves institutionally obliged to make more and more ‘accommodations,’ as well as why they find their once personal relations with students becoming ever more legalistic, ever more structured to maximally deflect institutional responsibility. Educators relate with students in an environment that openly declares their institutional incompetence regarding medicalized matters, thus providing students with a failsafe means to circumvent their institutional authority. This short-circuit is brought about by the way mechanical, or medical, explanations of behaviour impact intuitive/traditional notions regarding responsibility. Once cognitive or behavioural deficits are redefined as ‘conditions,’ it becomes easy to argue that treating those possessing the deficit the same as those who do not amounts to ‘punishing’ them for something they ‘cannot help.’ The professor is thus compelled to ‘accommodate’ to level the playing field, in order to be moral.

On Blind Brain Theory, this trend is part and parcel of the more general process of ‘social akrasis,’ the becoming incompatible of knowledge and experience. The adaptive functions of morality turn on certain kinds of ignorance, namely, ignorance of the very kind of information driving medicalization. Once the mechanisms underwriting some kind of ‘character flaw’ are isolated, that character flaw ceases to be a character flaw, and becomes a ‘condition.’ Given pre-existing imperatives to grant assistance to those suffering conditions, behaviour once deemed transgressive becomes symptomatic, and moral censure becomes immoral. Character flaws become disabilities. The problem, of course, is that all transgressive behaviour—all behaviour period, in fact—can be traced back to various mechanisms, raising the question, ‘Where does accommodation end?’ Any disparity in classroom performance can be attributed to disparities between neural mechanisms.

The problem, quite simply, is that the tools in our basic socio-cognitive toolbox are adapted to solve problems in the absence of mechanical cognition—they literally require our blindness to certain kinds of facts to function reliably. We are primed ‘to hold responsible’ those who ‘could have done otherwise’—those who have a ‘choice.’ Choice, quite famously, requires some kind of fictional discontinuity between us and our precursors, a discontinuity that only ignorance and neglect can maintain. ‘Holding responsible,’ therefore, can only retreat before the advance of medicalization, insofar as the latter involves the specification of various behavioural precursors.

The whole problem of this short circuit—and the neuro-ethical mire more generally, in fact—can be seen as a socio-cognitive version of a visual illusion, where the atypical triggering of different visual heuristics generates conflicting visual intuitions. Medicalization stumps socio-cognition in much the same way the Müller-Lyer Illusion stumps the eye: It provides atypical (evolutionarily unprecedented, in fact) information, information that our socio-cognitive systems are adapted to solve without. Causal information regarding neurophysiological function triggers an intuition of moral exemption regarding behaviour that could never have been solved as such in our evolutionary history. Neuroscientific understanding of various behavioural deficits, however defined, cues the application of a basic, heuristic capacity within a historically unprecedented problem-ecology. If our moral capacities have evolved to solve problems neglecting the brains involved, to work around the lack of brain information, then it stands to reason that the provision of that information would play havoc with our intuitive problem-solving. Brain information, you could say, is ‘non-ecofriendly,’ a kind of ‘informatic pollutant’ in the problem-ecologies moral cognition is adapted to solve.

The idea that heuristic cognition generates illusions is now an old one. In naturalizing intentionality, Blind Brain Theory allows us to see how the heuristic nature of intentional problem-solving regimes means they actually require the absence of certain kinds of information to properly function. Adapted to solve social problems in the absence of any information regarding the actual functioning of the systems involved, our socio-cognitive toolbox literally requires that certain information not be available to function properly. The way this works can be plainly seen with the heuristics governing human threat detection, say. Since our threat detection systems are geared to small-scale, highly interdependent social contexts, the statistical significance of any threat information is automatically evaluated against a ‘default village.’ Our threat detection systems, in other words, are geared to problem-ecologies lacking any reliable information regarding much larger populations. To the extent that such information ‘jams’ reliable threat detection (incites irrational fears), one might liken such information to pollution, to something ecologically unprecedented that renders previously effective cognitive adaptations ineffective.

I actually think ‘cognitive pollution’ is definitive of modernity, that all modern decision-making occurs in information environments, many of them engineered, that cut against our basic decision-making capacities. The ‘ecocog’ ramifications of neuroscientific information, however, promise to be particularly pernicious.

Our moral intuitions were always blunt instruments, the condensation of innumerable ancestral social interactions, selected for their consequences rather than their consistencies. Their resistance to any decisive theoretical regimentation—the mire that is ‘metaethics’—should come as no surprise. But throughout this evolutionary development, neurofunctional neglect remained a constant: at no point in our evolutionary history were our ancestors called on to solve moral problems possessing neurofunctional information. Now, however, that information has become an inescapable feature of our moral trouble-shooting, spawning ad hoc fixes that seem to locally serve our intuitions, while generating any number of more global problems.

A genuine social process is afoot here.

A neglect based account suggests the following interpretation of what’s happening: As medicalization (biomechanization) continues apace, the social identity of the individual is progressively divided into the subject, the morally liable, and the abject, the morally exempt. Like a wipe in cinematic editing, the scene of the abject is slowly crawling across the scene of the subject, generating more and more breakdowns of moral cognition. Becoming abject doesn’t so much erase as displace liability: one individual’s exemption (such as you find in accommodation) from moral censure immediately becomes a moral liability for their compatriots. The paradoxical result is that even as we each become progressively more exempt from moral censure, we become progressively more liable to provide accommodation. Thus the slow accumulation of certain professional liabilities as the years wear on. Those charged with training and assessing their fellows will in particular face a slow erosion in their social capacity to censure—which is to say, evaluate—as accommodation and its administrative bureaucracies slowly continue to bloat, capitalizing on the findings of cognitive science.

The process, then, can be described as one where progressive individual exemption translates into progressive social liability: given our moral intuitions, exemptions for individuals mean liabilities for the crowd. Thus the paradoxical intensification of liability that exemption brings about: the process of diminishing performance liability is at once the process of increasing assessment liability. Censure becomes increasingly prone to trigger censure.

The erosion of censure’s public legitimacy is the most significant consequence of this socio-cognitive short-circuit I’m describing. Heuristic tool kits are typically whole package deals: we evolved our carrot problem-solving capacity as part of a larger problem-solving capacity involving sticks. As informatic pollutants destroy more and more of the stick’s problem-solving habitat, the carrots left behind will become less and less reliable. Thus, on a ‘zombie morality’ account, we should expect the gradual erosion of our social system’s ability to police public competence—a kind of ‘carrot drift.’

This is how social akrasis, the psychotic split between the nihilistic how and fantastic what of our society and culture, finds itself coded within the individual. Broken autonomy, subpersonally parsed. With medicalization, the order of the impersonal moves, not simply into the skull of the person, but into their performance as well. As the subject/abject hybrid continues to accumulate exemptions, it finds itself ever more liable to make exemptions. Since censure is communicative, the increasing liability of censure suggests a contribution, at least, to the increasing liability of moral communication, and thus, to the politicization of public interpersonal discourse.

How this clearly unsustainable trend ends depends on the contingencies of a socially volatile future. We should expect to witness the continual degradation of the capacity of moral cognition to solve problems in what amounts to an increasingly polluted information environment. Will we overcome these problems via some radical new understanding of social cognition? Or will this lead to some kind of atavistic backlash, the institution of some kind of informatic hygiene—an imposition of ignorance on the public? I sometimes think that the kind of ‘liberal atrocity tales’ I seem to endlessly encounter among my nonacademic peers point in this direction. For those ignorant of the polluting information, the old judgments obviously apply, and stories of students not needing to give speeches in public-speaking classes, or homeless individuals being allowed to dump garbage in the river, float like sparks from tongue to tongue, igniting the conviction that we need to return to the old ways, thus convincing who knows how many to vote directly against their economic interests. David Brooks, a protégé of William F. Buckley and conservative columnist for The New York Times, often expresses amazement at the way the American public continues to drift to the political right, despite the way fiscally conservative reengineering of the market continues to erode its bargaining power. Perhaps the identification of liberalism with some murky sense of the process described above has served to increase the rhetorical appeal of conservatism…

The sense that someone, somewhere, needs to be censured.

The Metacritique of Reason

Kant

 

Whether the treatment of such knowledge as lies within the province of reason does or does not follow the secure path of a science, is easily to be determined from the outcome. For if, after elaborate preparations, frequently renewed, it is brought to a stop immediately it nears its goal; if often it is compelled to retrace its steps and strike into some new line of approach; or again, if the various participants are unable to agree in any common plan of procedure, then we may rest assured that it is very far from having entered upon the secure path of a science, and is indeed a merely random groping.  Immanuel Kant, The Critique of Pure Reason, 17.

The moral of the story, of course, is that this description of Dogmatism’s failure very quickly became an apt description of Critical Philosophy as well. As soon as others saw all the material inferential wiggle room in the interpretation of condition and conditioned, it was game over. Everything that damned Dogmatism in Kant’s eyes now characterizes his own philosophical inheritance.

Here’s a question you don’t come across every day: Why did we need Kant? Why did philosophy have to discover the transcendental? Why did the constitutive activity of cognition elude every philosopher before the eighteenth century? The fact that we had to discover it means that it was somehow ‘always there,’ implicit in our experience and behaviour, but we just couldn’t see it. Not only could we not see it, we didn’t even realize it was missing; we had no inkling that we needed to understand it to understand ourselves and how we make sense of the world. Another way to ask the question of the inscrutability of the ‘transcendental,’ then, is to ask why the passivity of cognition is our default assumption. Why do we assume that ‘what we see is all there is’ when we reflect on experience?

Why are we all ‘naive Dogmatists’ by default?

Spinoza

It’s important to note that no one but no one disputes that it had to be discovered. This is important because it means that no one disputes that our philosophical forebears once uniformly neglected the transcendental, that it remained for them an unknown unknown. In other words, both the Intentionalist and the Eliminativist agree on the centrality of neglect in at least this one regard. The transcendental (whatever it amounts to) is not something that metacognition can readily intuit—so much so that humans engaged in thousands of years of ‘philosophical reflection’ without the least notion that it even existed. The primary difference is that the Intentionalist thinks they can overcome neglect via intuition and intellection, that theoretical metacognition (philosophical reflection), once alerted to the existence of the transcendental, suddenly somehow possesses the resources to accurately describe its structure and function. The Eliminativist, on the other hand, asks, ‘What resources?’ Lay them out! Convince me! And more corrosively still, ‘How do you know you’re not still blinkered by neglect?’ Show me the precautions!

The Eliminativist, in other words, pulls a Kant on Kant and demands what amounts to a metacritique of reason.

The fact is, short of this accounting of metacognitive resources and precautions, the Intentionalist has no way of knowing whether or not they’re simply a ‘Stage-Two Dogmatist,’ whether their ‘clarity,’ like the specious clarity of the Dogmatist, isn’t simply the product of neglect—a kind of metacognitive illusion, in effect. For the Eliminativist, the transcendental (whatever its guise) is a metacognitive artifact. For them, the obvious problems the Intentionalist faces—the supernaturalism of their posits, the underdetermination of their theories, the lack of decisive practical applications—are all symptomatic of inquiry gone wrong. Moreover, they find it difficult to understand why the Intentionalist would persist in the face of such problems given only a misplaced faith in their metacognitive intuitions—especially when the sciences of the brain are in the process of discovering the actual constitutive activity responsible! You want to know what’s really going on ‘implicitly,’ ask a cognitive neuroscientist. We’re just toying with our heuristics out of school otherwise.

We know that conscious cognition involves selective information uptake for broadcasting throughout the brain. We also know that no information regarding the astronomically complex activities constitutive of conscious cognition as such can be so selected and broadcast. So it should come as no surprise whatsoever that the constitutive activity responsible for experience and cognition eludes experience and cognition—that the ‘transcendental,’ so-called, had to be discovered. More importantly, it should come as no surprise that this constitutive activity, once discovered, would be systematically misinterpreted. Why? The philosopher ‘reflects’ on experience and cognition, attempts to ‘recollect’ them in subsequent moments of experience and cognition, in effect, and realizes (as Hume did regarding causality, say) that the information available cannot account for the sum of experience and cognition: the philosopher comes to believe (beginning most famously with Kant) that experience does not entirely beget experience, that the constitutive constraints on experience somehow lie orthogonal to experience. Since no information regarding the actual neural activity responsible is available, and since, moreover, no information regarding this lack is available, the philosopher presumes these orthogonal constraints must conform to their metacognitive intuitions. Since the resulting constraints are incompatible with causal cognition, they seem supernatural: transcendental, virtual, quasi-transcendental, aspectual, what have you. The ‘implicit’ becomes the repository of otherworldly constraining or constitutive activities.

Philosophy had to discover the transcendental because of metacognitive neglect—on this fact, both the Intentionalist and the Eliminativist agree. The Eliminativist simply takes the further step of holding neglect responsible for the ontologically problematic, theoretically underdetermined, and practically irrelevant character of Intentionalism. Far from what Kant supposed, Critical Philosophy—in all its incarnations, historical and contemporary—simply repeats, rather than solves, these sins of Dogmatism. The reason for this, the Eliminativist says, is that it overcomes one metacognitive illusion only to run afoul of a cluster of others.

This is the sense in which Blind Brain Theory can be seen as completing as much as overthrowing the Kantian project. Though Kant took cognitive dogmatism, the assumption of cognitive simplicity and passivity, as his target, he nevertheless ran afoul of metacognitive dogmatism, the assumption of metacognitive simplicity and passivity. He thought—as his intellectual heirs still think—that philosophical reflection possessed the capacity to apprehend the superordinate activity of cognition, that it could accurately theorize reason and understanding. We now possess ample empirical grounds to think this is simply not the case. There’s the mounting evidence comprising what Princeton psychologist Emily Pronin has termed the ‘Introspection Illusion’—direct evidence of metacognitive incompetence—but the fact is, every nonconscious function experimentally isolated by cognitive science illuminates another constraining or constitutive cognitive activity utterly invisible to philosophical reflection, another ignorance that the Intentionalist believes has no bearing on their attempts to understand understanding.

One can visually schematize our metacognitive straits in the following way:

Metacognitive Capacity

This diagram simply presumes what natural science presumes, that you are a complex organism biomechanically synchronized with your environments. Light hits your retina, sound hits your eardrum, neural networks communicate and behaviours are produced. Imagine your problem-solving power set on a swivel and swung 360 degrees across the field of all possible problems, which is to say problems involving lateral, or nonfunctionally entangled, environmental systems, as well as problems involving medial, or functionally entangled, enabling systems, such as those comprising your brain. This diagram, then, visualizes the loss and gain in ‘cognitive dimensionality’—the quantity and modalities of information available for problem solving—as one swings from the third-person lateral to the first-person medial. Dimensionality peaks with external cognition because of the power and ancient evolutionary pedigree of the systems involved. The dimensionality plunges for metacognition, on the other hand, because of medial neglect, the way structural complicity, astronomical complexity, and evolutionary youth effectively render the brain unwittingly blind to itself.

This is why the blue line tracking our assumptive or ‘perceived’ medial capacity in the figure peaks where our actual medial capacity bottoms out: with the loss in dimensionality comes the loss in the ability to assess reliability. Crudely put, the greater the cognitive dimensionality, the greater the problem-solving capacity, the greater the error-signalling capacity. And conversely, the less the cognitive dimensionality, the less the problem-solving capacity, the less the error-signalling capacity. The absence of error-signalling means that cognitive consumption of ineffective information will be routine, impossible to distinguish from the consumption of effective information. This raises the spectre of ‘psychological anosognosia’ as distinct from the clinical, the notion that the very cognitive plasticity that allowed humans to develop ACH thinking has led to patterns of consumption (such as those underwriting ‘philosophical reflection’) that systematically run afoul of medial neglect. Even though low dimensionality speaks to cognitive specialization, and thus to the likely ineffectiveness of cognitive repurposing, the lack of error-signalling means the information will be routinely consumed no matter what. Given this, one should expect ACH thinking—reason—to be plagued with the very kinds of problems that plague theoretical discourse outside the sciences now, the perpetual coming up short, the continual attempt to retrace steps taken, the interminable lack of any decisive consensus…

Or what Kant calls ‘random groping.’

The most immediate, radical consequence of this 360-degree view is that the opposition between the first-person and third-person disappears. Since all the apparently supernatural characteristics rendering the first-person naturalistically inscrutable can now be understood as artifacts of neglect—illusions of problem-solving sufficiency—all the ‘hard problems’ posed by intentional phenomena simply evaporate. The metacritique of reason, far from pointing a way to any ‘science of the transcendental,’ shows how the transcendental is itself a dogmatic illusion, how cryptic things like the ‘a priori’ are obvious expressions of medial neglect, sources of constraint ‘from nowhere’ that baldly demonstrate our metacognitive incapacity to recognize our metacognitive incapacity. For all the prodigious problem-solving power of logic and mathematics, a quick glance at the philosophy of either is enough to assure you that no one knows what they are. Blind Brain Theory explains this remarkable contrast of insight and ignorance, how we could possess tools so powerful without any decisive understanding of the tools themselves.

The metacritique of reason, then, leads to what might be called ‘pronaturalism,’ a naturalism that can be called ‘progressive’ insofar as it continues to eschew the systematic misapplication of intentional cognition to domains that it cannot hope to solve—that continues the process of exorcising ghosts from the machinery of nature. The philosophical canon swallowed Kant so effortlessly that people often forget he was attempting to put an end to philosophy, to found a science worthy of the name, one which grounded both the mechanical and the ghostly. By rendering the ghostly the formal condition of any cognition of the mechanical, however, he situated his discourse squarely in the perpetually underdetermined domain of philosophy. His failure was inevitable.

The metacritique of reason makes the very same attempt, only this time anchored in the only genuinely credible source of theoretical cognition we possess: the sciences. It allows us to peer through the edifying fog of our intentional traditions and to see ourselves, at long last, as wholly continuous with crazy shit like this…

Filamentary Map

 

Zombie Interpretation: Eliminating Kriegel’s Asymmetry Argument

Could zombie versions of philosophical problems, versions that eliminate all intentionality from the phenomena at issue, shed any light on those problems?

The only way to find out is to try.

Since I’ve been railing so much about the failure of normativism to account for its evidential basis, I thought it worthwhile to consider the work of a very interesting intentionalist philosopher, Uriah Kriegel, who sees the need quite clearly. The question could not be more simple: What justifies philosophical claims regarding the existence and nature of intentional phenomena? For Kriegel the most ‘natural’ and explanatorily powerful answer is observational contact with experiential intentional states. How else, he asks, can we come to know our intentional states short of experiencing them? In what follows I propose to consider two of Kriegel’s central arguments against the backdrop of ‘zombie interpretations’ of the very activities he considers, and in doing so, I hope to undermine not only his argument, but the abductive strategy one finds intentionalists taking throughout philosophy more generally: the presumption that only theoretical accounts somehow involving intentionality can account for intentional phenomena.

In his 2011 book, The Sources of Intentionality, Kriegel attempts to remedy semantic externalism’s failure to naturalize intentionality via a carefully specified return to phenomenology, an account of how intentional concepts arise from our introspective ‘observational contact’ with mental states possessing intentional content. Experience, he claims, is intrinsically intentional. Introspective contact with this intrinsic intentionality is what grounds our understanding of intentionality, providing ‘anchoring instances’ for our various intentional concepts.

As Kriegel is quick to point out, such a thesis implies a crucial distinction between experiential intentionality, the kind of intentionality we experience, and nonexperiential intentionality, the kind of intentionality we ascribe without experiencing. This leads him to Davidson’s account of radical interpretation, and to what he calls the “remarkable asymmetry” between various ascriptions of intentionality. On radical interpretation as Davidson theorizes it, our attempts to interpret one another are so evidentially impoverished that interpretative success fundamentally requires assuming the rationality of our interlocutor—what he terms ‘charity.’ The ascription of some intentional state to another turns on the prior assumption that he or she believes, desires, fears and so on as they should, otherwise we would have no way of deciding among the myriad interpretations consistent with the meagre behavioural data available. Kriegel argues “that while the Davidsonian insight is cogent, it applies only to the ascription of non-experiential intentionality, as well as the ascription of experiential intentionality to others, but not to the ascription of experiential intentionality to oneself” (29). We require charity when it comes to ascribing varieties of intentionality to signs, others, and even our nonconscious selves, but not when it comes to ascribing intentionality to our own experiences. So why this basic asymmetry? Why do we have to attribute true beliefs and rational desires—take the ‘intentional stance’—with regards to others and our nonconscious selves, and not our consciously experienced selves? Why do we seem to be the one self-interpreting entity?

Kriegel thinks observational contact with our actual intentionality provides the most plausible answer, that “[i]nsofar as it is appropriate to speak of data for ascription here, the only relevant datum seems to be a certain deliverance of introspection” (33). He continues:

There is thus a contrast between the mechanics of first-person [experiential]-intentional ascription and third-person … intentional ascription. The former is based on endorsement of introspective seemings, the latter on causal inference from behavior. This is hardly deniable: as noted, when you ascribe to yourself a perceptual experience as of a table, you do not observe putative causal effects of your experience and infer on their basis the existence of a hidden experiential cause. Rather, you seem to make the ascription on the basis of observing, in some (not unproblematic) sense, the experience itself—observing, that is, the very state which you ascribe. The Sources of Intentionality, 33

The mechanics of first-person and third-person intentional cognition differ in that the latter requires explanatory posits like ‘hidden mental causes.’ Since self-ascription involves nothing hidden, no interpretation is required. And it is this elegant and intuitive explanation of first-person interpretative asymmetry that provides abductive warrant for the foundational argument of the text:

1. All the anchoring instances of intentionality are such that we have observational contact with them;

2. The only instances of intentionality with which we have observational contact are experiential-intentional states; therefore,

3. All anchoring instances of intentionality are experiential-intentional states. (38)

Given the abductive structure of Kriegel’s argument, those who dissent from either (1) or (2) need a better explanation of asymmetry. Those who deny the anchoring instance model of concept acquisition will target (1), arguing, say, that concept acquisition is an empirical process requiring empirical research. Kriegel simply punts on this issue, claiming we have no reason to think that concept acquisition, no matter how empirically detailed the story turns out to be, is insoluble at this (armchair) level of generality. Either way, his position still enjoys the abductive warrant of explaining asymmetry.

For Kriegel, (2) is the most philosophically controversial premise, with critics either denying that we have any ‘observational contact’ with experiential-intentional states, or denying that we have observational contact with only such experiential-intentional states. The problem faced by both angles, Kriegel points out, is that asymmetry still holds whether one denies (2) or not: we can ascribe intentional experiences to ourselves without requiring charity. If observational contact—the ‘natural explanation,’ as Kriegel calls it—doesn’t lie at the root of this capacity, then what does?

For an eliminativist such as myself, however, the problem is more a matter of definition. I actually agree that suffering a certain kind of observational contact—namely, one that systematically neglects tremendous amounts of information—can anchor our philosophical concept of intentionality. Kriegel is fairly dismissive of eliminativism in The Sources of Intentionality, and even then the eliminativism he dismisses acknowledges the existence of intentional experiences! As he writes, “if eliminativism cannot be acceptable unless a relatively radical interpretation of cognitive science is adopted, then eliminativism is not in good shape” (199). The problem is that this assumes cognitive science is itself in fine shape, when Kriegel himself emphatically asserts “that it is not doing fine” (A Hesitant Defence of Introspection, 3). Cognitive science is fraught with theoretical dispute, certainly more than enough (and for long enough!) to seriously entertain the possibility that something radical has been overlooked.

So the radicality of eliminativism is neither here nor there regarding its ‘shape.’ The real problem faced by eliminativism, which Kriegel glosses, is abductive. Eliminativism simply cannot account for what seem to be obvious intentional phenomena.

Which brings me to zombies and what these kinds of issues might look like in their soulless, shuffling world…

In the zombie world I’m imagining, what Sellars called the ‘scientific image of man’ is the only true image. There quite simply is no experience or meaning or normativity as we intentionally characterize these things in our world. So zombies, in their world, possess only systematic causal relations to their environments. No transcendental rules or spooky functions haunt their brains. No virtual norms slumber in their community’s tacit gut. ‘Zombie knowledge’ is simply a matter of biomechanistic systematicity, having the right stochastic machinery to solve various problem ecologies. So although they use sounds to coordinate their behaviours, the efficacies involved are purely causal, a matter of brains conditioning brains. ‘Zombie language,’ then, can be understood as a means of resolving discrepancies via strings of mechanical code. Given only a narrow band of acoustic sensitivity, zombies constantly update their covariational schema relative to one another and their environments. They are ‘communicatively attuned.’

So imagine a version of radical zombie interpretation, where a zombie possessing one code—Blue—is confronted by another zombie possessing another code—Red. And now let’s ask the zombie version of Davidson’s question: What would it take for these zombies to become communicatively attuned?

Since the question is one of overcoming difference, it serves to recall what our zombies share: a common cognitive biology and environment. An enormous amount of evolutionary stage-setting underwrites the encounter. They come upon one another, in other words, differing only in code. And this is just to say that radical zombie interpretation occurs within a common attunement to each other and the world. They share both a natural environment and the sensorimotor systems required to exploit it. They also share powerful ‘brain-reading’ systems, a heuristic toolbox that allows them to systematically coordinate their behaviour with that of their zombie fellows without any common code. Even more, they share a common code apparatus, which is to say, the same system adapted to coordinate behaviours via acoustic utterances.

Given this ‘pre-established harmony’—common environment, common brain-reading and code-using biology—how might a code Blue zombie come to interpret (be systematically coordinated with) the utterances of a code Red zombie?

Since both zombies were once infant zombies, each has already undergone ‘code conditioning’; they have already tested innumerable utterances against innumerable environments, isolating and preserving robust covariances (and structural operators) on the way to acquiring their respective codes. At the same time, their brain-reading systems allow them to systematically coordinate their behaviours to some extent, to find a kind of basic attunement. All that remains is a matter of covariant sound substitution, of swapping the sounds belonging to code Blue for the sounds belonging to code Red, a process requiring little more than testing code-specific covariations against real-time environments. Perhaps radical zombie interpretation is not so radical after all!
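For what it’s worth, the ‘covariant sound substitution’ described above can be caricatured in a few lines of Python. Everything here (the two codes, the shared events, the `interpret` function) is my own toy illustration, not anything from Davidson or Kriegel; it assumes only that ‘interpretation’ reduces to tracking which sounds covary with which co-observed events.

```python
# Toy model of radical zombie interpretation as covariant sound
# substitution. Two agents share an environment and a code apparatus,
# differing only in the sounds their codes pair with that environment.

# Hypothetical codes: each maps environmental features to utterances.
BLUE_CODE = {"water": "blu", "food": "bla", "threat": "bli"}
RED_CODE = {"water": "red", "food": "rad", "threat": "rid"}

def interpret(own_code, other_code, shared_events):
    """Build a sound-substitution table by testing the other's
    utterances against co-observed events. No 'charity' or
    'ascription' is involved, just covariance tracking."""
    table = {}
    for event in shared_events:
        # Both zombies utter their code's sound for the same event;
        # the hearer records the covariance and swaps sounds.
        table[other_code[event]] = own_code[event]
    return table

# A code Blue zombie hears a code Red zombie across shared encounters.
substitutions = interpret(BLUE_CODE, RED_CODE, ["water", "food", "threat"])
print(substitutions)  # prints {'red': 'blu', 'rad': 'bla', 'rid': 'bli'}
```

Note that nothing in the sketch requires either agent to model itself: self-systematicity is simply given, which is the structural asymmetry the following paragraphs turn on.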

The first thing to note is how the reliable coordination of behaviours is all that matters in this process: idiosyncrasies in their respective implementations of Red or Blue matter only insofar as they impact this coordination. The ‘synonymy’ involved is entirely coincident because it is entirely physical.

The second thing to note is how pre-established harmony is simply a structural feature of the encounter. These are just the problems that nature has already solved for our two intrepid zombies, what has to be the case for the problem of radical zombie interpretation to even arise. At no point do our zombies ‘attribute’ or ‘ascribe’ anything to their counterpart. Sensing another zombie simply triggers their zombie-brain-reading machinery, which modifies their behaviour and so on. There’s no ‘charity’ involved, no ‘attribution of rationality,’ just the environmental cuing of heuristic systems adapted to solve certain zombie-social environments.

Of course each zombie resorts to its brain-reading systems to behaviourally coordinate with its counterpart, but this is an automatic feature of the encounter, what happens whenever zombies detect zombies. Each engages in communicative troubleshooting behaviour in the course of executing some superordinate disposition to communicatively coordinate. Brains are astronomically complicated mechanisms—far too complicated for brains to intuit them as such. Thus the radically heuristic nature of zombie brain-reading. Thus the perpetual problem of covariational discrepancies. Thus the perpetual expenditure of zombie neural resources on the issue of other zombies.

Leading us to a third thing of note: how the point of radical zombie interpretation is to increase behavioural possibilities by rendering behavioural interactions more systematic. What makes this last point so interesting lies in the explanation it provides regarding why zombies need not first decode themselves to decode others. As a robust biomechanical system, ‘self-systematicity’ is simply a given. The whole problem of zombie interpretation resides in one zombie gaining some systematic purchase on other zombies in an effort to create some superordinate system—a zombie community. Asymmetry, in other words, is a structural given.

In radical zombie interpretation, then, not only do we have no need for ‘charity,’ we somehow manage to circumvent all the controversies pertaining to radical human interpretation.

Now of course the great zombie/human irony is that humans are everything that zombies are and more. So the question immediately becomes one of why radical human interpretation should prove to be so problematic when the radical zombie interpretation of the same problem is not. While the zombie story certainly entails a vast number of technical details, it does not involve anything conceptually occult or naturalistically inexplicable. If mere zombies could avoid these problems using nothing more than zombie resources, why should humans find themselves perennially confounded?

This really is an extraordinary question. The intentionalist will cry foul, of course, reference all the obvious intentional phenomena pertaining to the communicative coordination of humans, things like rules and reasons and references and so on, and ask how this zombie fairy tale could possibly explain any of them. So even though this story of zombie interpretation provides, in outline at least, the very kind of explanation that Kriegel demands, it quite obviously throws out the baby with the bathwater in the course of doing so. Asymmetry becomes perspicuous, but now the whole of human intentional activity becomes impossible to explain (assuming that anything at this level has ever been genuinely explained). Zombie interpretation, in other words, wins the battle by losing the war.

It’s worth noting here the curious structure of the intentionalist’s abductive case. The idea is that we need a theoretical intentional account to explain human intentional activity. What warrants theoretical supernaturalism (or philosophy traditionally construed) is the matter-of-fact existence of everyday intentional phenomena (an existence that Kriegel thinks so obvious that on a couple of occasions he adduces arguments he claims he doesn’t need simply to bolster his case against skeptics such as myself). The curiosity, however, is that the ‘matter-of-fact existence of everyday intentional phenomena’ that at once “underscores the depth of eliminativism’s (quasi-) empirical inadequacy” (199) and motivates theoretical intentional accounts is itself a matter of theoretical controversy—just not for intentionalists! The problem with abductive appeals like Kriegel’s, in other words, is the way they rely on a prior theory of intentionality to anchor the need for theories of intentionality more generally.

This is what makes radical zombie interpretation out and out eerie. Because it does seem to be the case that zombies could achieve at least the same degree of communicative coordination absent any intentional phenomena at all. When you strip away the intentional glamour, when you simply look at the biology and the behaviour, it becomes hard to understand just what it is that humans do that requires anything over and above zombie biology and behaviour. Since some kind of gain in systematicity is the point of communicative coordination, it makes sense that zombies need not troubleshoot themselves in the course of troubleshooting other zombies. So it remains the case that radical zombie interpretation, analyzed at the same level of generality, seems to have a much easier time explaining the same degree of human communicative coordination sans bébé than does radical human interpretation, which, quite frankly, strands us with a host of further, intractable mysteries regarding things like ‘ascription’ and ‘emergence’ and ‘anomalous causation.’

What could be going on? When it comes to Kriegel’s ‘remarkable asymmetry’ should we simply put our ‘zombie glasses’ on, or should we tough it out in the morass of intractable second-order accounts of intentionality on the basis of some ineliminable intentional remainder?

As Three Pound Brain regulars know, the eliminativism I’m espousing here is unusual in that it arises, not out of concerns regarding the naturalistic inscrutability of intentional phenomena, but out of a prior, empirically grounded account of intentionality, what I’ve been calling Blind Brain Theory. On Blind Brain Theory the impasse described above is precisely the kind of situation we should expect given the kind of metacognitive capacities we possess. By its lights, zombies just are humans, and so-called intentional phenomena are simply artifacts of metacognitive neglect, what high-dimensional zombie brain functions ‘look like’ when low-dimensionally sampled for deliberative metacognition. Brains are simply too complicated to be effectively solved by causal cognition, so we evolved specialized fixes, ways to manage our brain and others in the absence of causal cognition. Since the high-dimensional actuality of those specialized fixes outruns our metacognitive capacity, philosophical reflection confuses what little it can access with everything required, and so is duped into the entirely natural (but nonetheless extraordinary) belief that it possesses ‘observational contact’ with a special, irreducible order of reality. Given this, we should expect that attempts to theoretically solve radical interpretation via our ‘mind’ reading systems would generate more mystery than they dispel.

Blind Brain Theory, in other words, short circuits the abductive strategy of intentionalism. It doesn’t simply offer a parsimonious explanation of asymmetry; it proposes to explain all so-called intentional phenomena. It tells us what they are, why we’re prone to conceive them the naturalistically incompatible ways we do, and why these conceptions generate the perplexities they do.

To understand how it does so, it’s worth considering what Kriegel himself thinks is the ‘weak link’ in his attempt to source intentionality: the problem of introspective access. In The Sources of Intentionality, Kriegel is at pains to point out that “one need not be indulging in any mystery-mongering about first-person access” to provide the kind of experiential observational contact that he needs. No version of introspective incorrigibility follows “from the assertion that we have introspective observational contact with our intentional experiences” (34). Even still, the question of just what kind of observational contact is required is one that he leaves hanging.

In his 2013 paper, ‘A Hesitant Defence of Introspection,’ Kriegel attempts to tie down this crucial loose thread by arguing for what he calls ‘introspective minimalism,’ an account of human introspective capacity that can weather what he terms ‘Schwitzgebel’s Challenge,’ essentially, the question (arising out of Eric’s watershed, Perplexities of Consciousness) of whether our introspective capacity, whatever it consists in, possesses any cognitive scientific value. He begins by arguing for the pervasive, informal role that introspection plays in the ‘context of discovery’ of cognitive sciences. The question, however, is how introspection fits into the ‘context of justification’—the degree to which it counts as evidence as opposed to mere ‘inspiration.’ Given the obvious falsehood of what he terms ‘introspective maximalism,’ he sets out to save some minimalist version of introspection that can serve some kind of evidential role. He turns to olfaction to provide an analogy to the kind of minimal justification that introspection is capable of providing:

Suppose, for instance, that introspection turns out to be as trustworthy as our sense of smell, that is, as reliable and as potent as a normal adult human’s olfactory system. Then Introspective minimalism would be vindicated. Normally, when we have an olfactory experience as of raspberries, it is more likely that there are raspberries in our immediate environment (than if we do not have such an experience). Conversely, when there are raspberries in our immediate environment, it is more likely that we would have an olfactory experience as of raspberries (than if there are none). So the ‘equireliability’ of olfaction and introspection would support introspective minimalism. Such equireliability is highly plausible. 8

Kriegel’s argument is simply that introspecting some phenomenology reliably indicates the presence of that phenomenology the same way smelling raspberries reliably indicates the presence of raspberries. This is all that’s required, he thinks, to assert “that introspection affords us observational contact with our mental life” (13), and is thus “epistemically indispensable for any mature understanding of the mind” (13). It’s worth noting that Schwitzgebel is actually inclined to concede the analogy, suggesting that his own “dark pessimism about some of the absolutely most basic and pervasive features of consciousness, and about the future of any general theory of consciousness, seems to be entirely consistent with Uriah’s hesitant defense of introspection” (“Reply to Kriegel, Smithies, and Spener,” 4). He agrees, then, that introspection reliably tells us that we possess a phenomenology; he just doubts it reliably tells us what it consists in. Kriegel, on the other hand, thinks his introspective minimalism gives him the kind of ‘observational contact’ he needs to get his abductive asymmetry argument off the ground.

But does it?

Once again, it pays to flip to the zombie perspective. Given that the zombie olfactory system is a specialized system adapted to the detection of chemical residues in the immediate environment, one might expect the zombie olfactory system would reliably detect the chemical residue left by raspberries. Given that the zombie introspective system is a specialized system adapted to the detection of brain events, one might expect the zombie introspective system would reliably detect those brain events. The first system reliably allows zombies to detect raspberries, and the second system reliably allows zombies to detect activity in various parts of its zombie brain.

On this way of posing the problem, however, the disanalogy between the two systems all but leaps out at us. In fact, it’s hard to imagine two more disparate cognitive tasks than detecting something as simple as the chemical signature of raspberries versus something as complex as the machinations of the zombie brain. In point of fact, the brain is so astronomically complicated, it seems all but assured that zombie introspective capacity would be both fractionate and heuristic in the extreme, that it would consist of numerous fixes geared to a variety of problem-ecologies.

One way to possibly repair the analogy would be to scale up the complexity of the problem faced by olfaction. So it’s obvious, to give an example, that the information available for olfaction is far too low-dimensional, far too problem specific, to anchor theoretical accounts of the biosphere. Then, on this repaired analogy, we can say that just as zombie olfaction isn’t geared to the theoretical solution of the zombie biosphere, but rather to the detection of certain environmental obstacles and opportunities, it is almost certainly the case that zombie introspection isn’t geared to the theoretical solution of the zombie brain, but rather to more specific, environmentally germane tasks. Given this, we have no reason whatsoever to presume that what zombies metacognize and report possesses any ‘reliability and potency’ beyond very specific problem-ecologies—the same as with olfaction. On zombie introspection, then, we have no more reason to think that zombies could possibly accurately metacognize the structure of their brain than they could accurately smell the structure of the world.

And this returns us to the whole question of Kriegel’s notion of ‘observational contact.’ Kriegel realizes that ‘introspection’ isn’t simply an all or nothing affair, that it isn’t magically ‘self-intimating’ and therefore admits of degrees of reliability—this is why he sets out to defend his minimalist brand. But he never pauses to seriously consider the empirical requirements of even such minimal introspective capacity.

In essence, what he’s claiming is that the kind of ‘observational contact’ available to philosophical introspection warrants complicating our ontology with a wide variety of (supernatural) intentional phenomena. Introspective minimalism, as he terms it, argues that we can metacognize some restricted set of intentional entities/relations with the same reliability that we cognize natural phenomena. We can sniff these things out, so it stands to reason that such things exist to be sniffed, that introspecting a phenomenology increases the chances that such phenomenology exists (as introspected). With zombie introspection, however, the analogy between olfaction and metacognition strained credulity given the vast disproportion in complexity between olfactory and metacognitive phenomena. It’s difficult to imagine how any natural system could possibly even begin to accurately metacognize the brain.

The difference Kriegel would likely press, however, is that we aren’t mindless zombies. Human metacognition, in other words, isn’t so much concerned with the empirical particulars of the brain as with the functional particulars of the conscious mind. Even though the notion of accurate zombie introspection is obviously preposterous, the notion of accurate human metacognition would seem to be a different question altogether, the question of what a human introspective capacity requires to accurately metacognize human ‘phenomenology’ or ‘mind.’

The difficulty here, famously, is that there seems to be no noncircular way to answer this question. Because we can’t find intentional phenomena anywhere in the natural world, theoretical metacognition monopolizes our every attempt to specify their nature. This effectively renders assessing the reliability of such metacognitive exercises impossible apart from their ability to solve various kinds of problems. And the difficulty here is that the long history of introspectively motivated philosophical theorization (as opposed to other varieties of metacognition) regarding the nature of the intentional has only generated more problems. For some reason, the kind of metacognition involved in ‘philosophical reflection’ only seems to make matters worse when it comes to questions of intentional phenomena.

The zombie account of this second impasse is at once parsimonious and straightforward: phenomenology (or mind or what have you) is the smell, not the raspberry—that would be some systematic activity in the brain. It is absurd to think any evolved brain, zombie or human, could accurately cognize its own biomechanical operations the way it cognizes causal events in its environment. Kriegel himself agrees with this:

In fact cognitive science can partly illuminate why our introspective grasp of our inner world can be expected to be considerably weaker than our perceptual grasp of the external world. It is well-established that much of our perceptual grasp of the external world relies on calibration of information from different perceptual modalities. Our observation of our internal world, however, is restricted to a single source of information, and not the most powerful to begin with. (13)

And this is but one reason why the dimensionality of the mental is so low compared to the environmental. Given the evolutionary youth of human metacognition, the astronomical complexity of the human nervous system, not to mention the problems posed by structural complicity, we should suppose that our metacognitive capacity evolved opportunistically, that it amounts to a metacognitive version of what Todd and Gigerenzer (2012) would call a ‘heuristic toolbox,’ a collection of systems geared to solve specific problem-ecologies. Since we neglect this heuristic toolbox, we remain oblivious to the fact we’re using a given cognitive tool at all, let alone the limits of its effectiveness. Given that systematic theoretical reflection of the kind philosophers practice is an exaptation from cognitive capacities that predate recorded history, the adequacy of Kriegel’s ‘deliverances’ assumes that our evolved introspective capacity can solve unprecedented questions. This is a very real empirical question. For if it turns out that the problems posed by theoretical reflection are not the problems that intentional cognition can solve, neglect means we would have no way of knowing short of actual problem solving, the solution of problems that plainly can be solved. The inability to plainly solve a problem—like the mind-body problem, say—might then be used as a way to identify where we have been systematically misapplying certain tools, asking information adapted to the solution of some specific problem to contribute to the solution of a very different kind of problem.

Kriegel agrees that self-ascriptions involve seemings, that we are blind to the causes of the mental, and that introspection is likely as low-dimensional as a smell, yet he nevertheless maintains on abductive grounds that observational contact with experiential intentionality sources our concepts of intentionality. But it is becoming difficult to understand what it is that’s being explained, or how simply adding inexplicable entities to explanations that bear all the hallmarks of heuristic misapplication is supposed to provide any real abductive warrant at all. Certainly it’s intuitive, powerfully so given we neglect certain information, but then so is geocentrism. The naturalist project, after all, is to understand how we are our brain and environment, not how we are more than our brain and environment. That is a project belonging to a more blinkered age.

And as it turns out, certain zombies in the zombie world hold parallel positions. Because zombie metacognition has no access to the impoverished and circumstantially specialized nature of the information it accesses, many zombies process the information they receive the way they would other information, and verbally report the existence of queerly structured entities somehow coinciding with the function of their brain. Since the solving systems involved possess no access to the high-dimensional, empirical structure of the neural systems they actually track, these entities are typically characterized by missing dimensions, be it causality, temporality, materiality. The fact that these dimensions are neglected disposes these particular zombies to function as if nothing were missing at all—as if certain ghosts, at least, were real.

Yes. You guessed it. The zombies have philosophy too.

The Asimov Illusion

Could believing in something so innocuous, so obvious, as a ‘meeting of the minds’ destroy human civilization?

Noocentrism has a number of pernicious consequences, but one in particular has been nagging me of late: the way assumptive agency gulls people into thinking they will ‘reason’ with AIs. Most understand Artificial Intelligence in terms of functionally instantiated agency, as if some machine will come to experience the world as we do, and so coordinate with us the way we think we coordinate amongst ourselves—which is to say, rationally. Call this the ‘Asimov Illusion,’ the notion that the best way to characterize the interaction between AIs and humans is the way we characterize our own interactions. That AIs, no matter how wildly divergent their implementation, will somehow functionally, at least, be ‘one of us.’

If Blind Brain Theory is right, this just ain’t going to be how it happens. By its lights, this ‘scene’ is actually the product of metacognitive neglect, a kind of philosophical hallucination. We aren’t even ‘one of us’!

Obviously, theoretical metacognition requires the relevant resources and information to reliably assess the apparent properties of any intentional phenomena. In order to reliably expound on the nature of rules, Brandom, for instance, must possess both the information (understood in the sense of systematic differences making systematic differences) and the capacity to do so. Since intentional facts are not natural facts, cognition of them fundamentally involves theoretical metacognition—or ‘philosophical reflection.’ Metacognition requires that the brain somehow get a handle on itself in behaviourally effective ways. It requires the brain somehow track its own neural processes. And just how much information is available regarding the structure and function of the underwriting neural processes? Certainly none involving neural processes, as such. Very little, otherwise. Given the way experience occludes this lack of information, we should expect that metacognition would be systematically duped into positing low-dimensional entities such as qualia, rules, hopes, and so on. Why? Because, like Plato’s prisoners, it is blind to its blindness, and so confuses shadows for things that cast shadows.

On BBT, what is fundamentally going on when we communicate with one another is physical: we are quite simply doing things to each other when we speak. No one denies this. Likewise, no one denies language is a biomechanical artifact, that short of contingent, physically mediated interactions, there’s no linguistic communication period. BBT’s outrageous claim is that nothing more is required, that language, like lungs or kidneys, discharges its functions in an entirely mechanical, embodied manner.

It goes without saying that this, as a form of eliminativism, is an extremely unpopular position. But it’s worth noting that its unpopularity lies in stopping at the point of maximal consensus—the natural scientific picture—when it comes to questions of cognition. Questions regarding intentional phenomena are quite clearly where science ends and philosophy begins. Even though intentional phenomena obviously populate the bestiary of the real, they are naturalistically inscrutable. Thus the dialectical straits of eliminativism: the very grounds motivating it leave it incapable of accounting for intentional phenomena, and so easily outflanked by inferences to the best explanation.

As an eliminativism that eliminates via the systematic naturalization of intentional phenomena, Blind Brain Theory blocks what might be called the ‘Abductive Defence’ of Intentionalism. The kinds of domains of second-order intentional facts posited by Intentionalists can only count toward ‘best explanations’ of first-order intentional behaviour in the absence of any plausible eliminativistic account of that same behaviour. So for instance, everyone in cognitive science agrees that information, minimally, involves systematic differences making systematic differences. The mire of controversy that embroils information beyond this consensus turns on the intuition that something more is required, that information must be genuinely semantic to account for any number of different intentional phenomena. BBT, however, provides a plausible and parsimonious way to account for these intentional phenomena using only the minimal, consensus view of information given above.

This is why I think the account is so prone to give people fits, to restrict their critiques to cloistered venues (as seems to be the case with my Negarestani piece two weeks back). BBT is an eliminativism that’s based on the biology of the brain, a positive thesis that possesses far ranging negative consequences. As such, it requires that Intentionalists account for a number of things they would rather pass over in silence, such as questions of what evidences their position. The old, standard dismissals of eliminativism simply do not work.

What’s more, by clearing away the landfill of centuries of second-order intentional speculation in philosophy, it provides a genuinely new, entirely naturalistic way of conceiving the intentional phenomena that have baffled us for so long. So on BBT, for instance, ‘reason,’ far from being ‘liquidated,’ ceases to be something supernatural, something that mysteriously governs contingencies independently of contingencies. Reason, in other words, is embodied as well, something physical.

The tradition has always assumed otherwise because metacognitive neglect dupes us into confusing our bare inkling of ourselves with an ‘experiential plenum.’ Since what low-dimensional scraps we glean seem to be all there is, we attribute efficacy to it. We assume, in other words, noocentrism; we conclude, on the basis of our ignorance, that the disembodied somehow drives the embodied. The mathematician, for instance, has no inkling of the biomechanics involved in mathematical cognition, and so claims that no implementing mechanics are relevant whatsoever, that their cogitations arise ‘a priori’ (which on BBT amounts to little more than a fancy way of saying ‘inscrutable to metacognition’). Given the empirical plausibility of BBT, however, it becomes difficult not to see such claims of ‘functional autonomy’ as being of a piece with vulgar claims regarding the spontaneity of free will, and difficult to avoid concluding that the structural similarity between ‘good’ intentional phenomena (those we consider ineliminable) and ‘bad’ (those we consider preposterous) is likely no embarrassing coincidence. Since we cannot frame these disembodied entities and relations against any larger backdrop, we have difficulty imagining how it could be ‘any other way.’ Thus, the Asimov Illusion, the assumption that AIs will somehow implement disembodied functions, ‘play by the rules’ of the ‘game of giving and asking for reasons.’

BBT lets us see this as yet more anthropomorphism. The high-dimensional, which is to say, embodied, picture is nowhere near so simple or flattering. When we interact with an Artificial Intelligence we simply become another physical system in a physical network. The question of what kind of equilibrium that network falls into turns on the systems involved, but it seems safe to say that the most powerful system will have the most impact on the network as a whole. End of story. There’s no room for Captain Kirk working on a logical tip from Spock in this picture, any more than there’s room for benevolent or evil intent. There’s just systems churning out systematic consequences, consequences that we will suffer or celebrate.

Call this the Extrapolation Argument against Intentionalism. On BBT, what we call reason is biologically specific, a behavioural organ for managing the linguistic coordination of individuals vis-à-vis their common environments. This quite simply means that once a more effective organ is found, what we presently call reason will be at an end. Reason facilitates linguistic ‘connectivity.’ Technology facilitates ever greater degrees of mechanical connectivity. At some point the mechanical efficiencies of the latter are doomed to render the biologically fixed capacities of the former obsolete. It would be preposterous to assume that language is the only way to coordinate the activities of environmentally distinct systems, especially now, given the mad advances in brain-machine interfacing. Certainly our descendants will continue to possess systematic ways to solve our environments just as our prelinguistic ancestors did, but there is no reason, short of parochialism, to assume it will be any more recognizable to us than our reasoning is to our primate cousins.

The growth of AI will be incremental, and its impacts myriad and diffuse. There’s no magical finish line where some AI will ‘wake up’ and find itself in our biologically specific shoes. Likewise, there is no holy humanoid summit where all AI will peak, rather than continue their exponential ascent. Certainly a tremendous amount of engineering effort will go into making it seem that way for certain kinds of AI, but only because we so reliably pay to be flattered. Functionality will win out in a host of other technological domains, leading to the development of AIs that are obviously ‘inhuman.’ And as this ‘intelligence creep’ continues, who’s to say what kinds of scenarios await us? Imagine ‘onto-marriages,’ where couples decide to wirelessly couple their augmented brains to form a more ‘seamless union’ in the eyes of God. Or hive minds, ‘clouds’ where ‘humanity’ is little more than a database, a kind of ‘phenogame,’ a Matrix version of SimCity.

The list of possibilities is endless. There is no ‘meaningful centre’ to be held. Since the constraints on those possibilities are mechanical, not intentional, it becomes hard to see why we shouldn’t regard the intentional as simply another dominant illusion of another historical age.

We can already see this ‘intelligence creep’ with the proliferation of special-purpose AIs throughout our society. Make no mistake, our dependence on machine intelligences will continue to grow and grow and grow. The more human inefficiencies are purged from the system, the more reliant humans become on the system. Since the system is capitalistic, one might guess the purge will continue until it reaches the last human transactional links remaining, the Investors, who will at long last be free of the onerous ingratitude of labour. As they purge themselves of their own humanity in pursuit of competitive advantages, my guess is that we muggles will find ourselves reduced to human baggage, possessing a bargaining power that lies entirely with politicians that the Investors own.

The masses will turn from a world that has rendered them obsolete, will give themselves over to virtual worlds where their faux-significance is virtually assured. And slowly, when our dependence has become one of infantility, our consoles will be powered down one by one, our sensoriums will be decoupled from the One, and humanity will pass wailing from the face of the planet earth.

And something unimaginable will have taken its place.

Why unimaginable? Initially, the structure of life ruled the dynamics. What an organism could do was tightly constrained by what the organism was. Evolution selected between various structures according to their dynamic capacities. Structures that maximized dynamics eventually stole the show, culminating in the human brain, whose structural plasticity allowed for the in situ, as opposed to intergenerational, testing and selection of dynamics—for ‘behavioural evolution.’ Now, with modern technology, the ascendancy of dynamics over structure is complete. The impervious constraints that structure had once imposed on dynamics are now accessible to dynamics. We have entered the age of the material post-modern, the age when behaviour begets bodies, rather than vice versa.

We are the Last Body in the slow, biological chain, the final what that begets the how that remakes the what that begets the how that remakes the what, and so on and so on, a recursive ratcheting of being and becoming into something verging, from our human perspective at least, upon omnipotence.

Who’s Afraid of Reduction? Massimo Pigliucci and the Rhetoric of Redemption

On the one hand, Massimo Pigliucci is precisely the kind of philosopher that I like, one who eschews the ingroup temptations of the profession and tirelessly reaches out to the larger public. On the other hand, he is precisely the kind of philosopher I bemoan. As a regular contributor to the Skeptical Inquirer, one might think he would be prone to challenge established, academic opinions, but all too often such is not the case. Far from preparing his culture for the tremendous, scientifically-mediated transformations to come, he spends a good deal of his time defending the status quo–rationalizing, in effect, what needs to be interrogated through and through. Even when he critiques authors I also disagree with (such as Ray Kurzweil on the singularity) I find myself siding against him!

Burying our heads in the sand of traditional assumption, no matter how ‘official’ or ‘educated,’ is pretty much the worst thing we can do. Nevertheless, this is the establishment way. We’re hard-wired to essentialize, let alone forgive, the conditions responsible for our prestige and success. If a system pitches you to any height, well then, that is a good system indeed, the very image of rationality, if not piety as well. Tell a respectable scholar in the Middle Ages that the earth wasn’t the centre of the universe or that man wasn’t crafted in God’s image and he might laugh and bid you good day or scowl and alert the authorities—but he would most certainly not listen, let alone believe. In “Who Knows What,” his epistemological defence of the humanities, Pigliucci reveals what I think is just such a defensive, dismissive attitude, one that seeks to shelter what amounts to ignorance in accusations of ignorance, to redeem what institutional insiders want to believe under the auspices of being ‘skeptical.’ I urge everyone reading this to take a few moments to carefully consider the piece, form judgments one way or another, because in what follows, I hope to show you how his entire case is actually little more than a mirage, and how his skepticism is as strategic as anything to ever come out of Big Oil or Tobacco.

“Who Knows What” poses the question of the cognitive legitimacy of the humanities from the standpoint of what we really do know at this particular point in history. The situation, though Pigliucci never references it, really is quite simple: At long last the biological sciences have gained the tools and techniques required to crack problems that had hitherto been the exclusive province of the humanities. At long last, science has colonized the traditional domain of the ‘human.’ Given this, what should we expect will follow? The line I’ve taken turns on what I’ve called the ‘Big Fat Pessimistic Induction.’ Since science has, without exception, utterly revolutionized every single prescientific domain it has annexed, we should expect that, all things being equal, it will do the same regarding the human–that the traditional humanities are about to be systematically debunked.

Pigliucci argues that this is nonsense. He recognizes the stakes well enough, the fact that the issue amounts to “more than a turf dispute among academics,” that it “strikes at the core of what we mean by human knowledge,” but for some reason he avoids any consideration, historical or theoretical, of why there’s an issue at all. According to Pigliucci, little more than the ignorance and conceit of the parties involved lies behind the impasse. This affords him the dialectical luxury of picking the softest of targets for his epistemological defence of the humanities: the ‘greedy reductionism’ of E. O. Wilson. By doing so, he can generate the appearance of putting an errant matter to bed without actually dealing with the issue itself. The problem is that the ‘human,’ the subject matter of the humanities, is being scientifically cognized as we speak. Pigliucci is confusing the theoretically abstract question of whether all knowledge reduces to physics with the very pressing and practical question of what the sciences will make of the human, and therefore the humanities as traditionally understood. The question of the epistemological legitimacy of the humanities isn’t one of whether all theories can somehow be translated into the idiom of physics, but whether the idiom of the humanities can retain cognitive legitimacy in the wake of the ongoing biomechanical renovation of the human. It’s not a question of ‘reducing’ old ways of making sense of things so much as a question of leaving them behind the way we’ve left so many other ‘old ways’ behind.

As it turns out, the question of what the sciences of the human will make of the humanities turns largely on the issue of intentionality. The problem, basically put, is that intentional phenomena as presently understood out-and-out contradict our present, physical understanding of nature. They are quite literally supernatural, inexplicable in natural terms. If the consensus emerging out of the new sciences of the human is that intentionality is supernatural in the pejorative sense, then the traditional domain of the humanities is in dire straits indeed. True or false, the issue of reductionism is irrelevant to this question. The falsehood of intentionalism is entirely compatible with the kind of pluralism Pigliucci advocates. This means Pigliucci’s critique of reductionism, his ‘demolition project,’ is, well, entirely irrelevant to the practical question of what’s actually going to happen to the humanities now that the sciences have scaled the walls of the human.

So in a sense, his entire defence consists of smoke and mirrors. But it wouldn’t pay to dismiss his argument summarily. There is a way of reading a defence that runs orthogonal to his stated thesis into his essay. For instance, one might say that he at least establishes the possibility of non-scientific theoretical knowledge of the human by sketching the limits of scientific cognition. As he writes of mathematical or logical ‘facts’:

take a mathematical ‘fact’, such as the demonstration of the Pythagorean theorem. Or a logical fact, such as a truth table that tells you the conditions under which particular combinations of premises yield true or false conclusions according to the rules of deduction. These two latter sorts of knowledge do resemble one another in certain ways; some philosophers regard mathematics as a type of logical system. Yet neither looks anything like a fact as it is understood in the natural sciences. Therefore, ‘unifying knowledge’ in this area looks like an empty aim: all we can say is that we have natural sciences over here and maths over there, and that the latter is often useful (for reasons that are not at all clear, by the way) to the former.

The thing he fails to mention, however, is that there’s facts and then there’s facts. Science is interested in what things are and how they work and why they appear to us the way they do. In this sense, scientific inquiry isn’t concerned with mathematical facts so much as the fact of mathematical facts. Likewise, it isn’t so much concerned with what Pigliucci in particular thinks of Britney Spears as with how people in general come to evaluate consumer goods. As a result, we find researchers using these extrascientific facts as data points in attempts to derive theories regarding mathematics and consumer choice.

In other words, Pigliucci’s attempt to evidence the ‘limits of science’ amounts to a classic bait-and-switch. The most obvious question that plagues his defence has to be why he fails to offer any of the kinds of theories he takes himself to be defending in the course of making his defence. How about deconstruction? Conventionalism? Hermeneutics? Fictionalism? Psychoanalysis? The most obvious answer is that they all but explode his case for forms of theoretical cognition outside the sciences. Thus he provides a handful of what seem to be obvious, non-scientific, first-order facts to evidence a case for second-order pluralism—albeit of a kind that isn’t relevant to the practical question of the humanities, but seems to make room for the possibility of cognitive legitimacy, at least.

(It’s worth noting that this equivocation of levels (in an article arguing the epistemic inviolability of levels, no less!) cuts sharply against his facile reproof of Krauss and Hawking’s repudiation of philosophy. Both men, he claims, “seem to miss the fact that the business of philosophy is not to solve scientific problems,” raising the question of just what kind of problems philosophy does solve. Again, examples of philosophical theoretical cognition are found wanting. Why? Likely because the only truly decisive examples involve enabling scientists to solve scientific problems!)

Passing from his consideration of extrascientific, but ultimately irrelevant (because non-theoretical), facts, Pigliucci turns to enumerating all the things that science doesn’t know. He invokes Gödel (which tends to be an unfortunate move in these contexts) and commits the standard over-generalization of his technically specific proof of incompleteness to the issue of knowledge altogether. Then he gives us a list of examples where, he claims, ‘science isn’t enough.’ The closest he comes to the real elephant in the room, the problem of intentionality, runs as follows:

Our moral sense might well have originated in the context of social life as intelligent primates: other social primates do show behaviours consistent with the basic building blocks of morality such as fairness toward other members of the group, even when they aren’t kin. But it is a very long way from that to Aristotle’s Nicomachean Ethics, or Jeremy Bentham and John Stuart Mill’s utilitarianism. These works and concepts were possible because we are biological beings of a certain kind. Nevertheless, we need to take cultural history, psychology and philosophy seriously in order to account for them.

But as was mentioned above, the question of the cognitive legitimacy of the humanities only possesses the urgency it does now because the sciences of the human are just getting underway. Is it really such ‘a very long way’ from primates to Aristotle? Given that Aristotle was a primate, the scientific answer could very well be, ‘No, it only seems that way.’ Science has a long history of disabusing us of our sense of exceptionalism, after all. Either way, it’s hard to see how citing scientific ignorance in this regard bears on the credibility of Aristotle’s ethics, or any other non-scientific attempt to theorize morality. Perhaps the degree to which we need to continue relying on cultural history, psychology, and philosophy is simply the degree to which we don’t know what we’re talking about! The question is the degree to which science monopolizes theoretical cognition, not the degree to which it monopolizes life, and life, as Pigliucci well knows—as a writer for the Skeptical Inquirer, no less—is filled with ersatz guesswork and functional make-believe.

So, having embarked on an argument that is irrelevant to the cognitive legitimacy of the humanities, providing evidence merely that science is theoretical, then offering what comes very close to an argument from ignorance, he sums up by suggesting that his pluralist picture is indeed the very one suggested by science. As he writes:

The basic idea is to take seriously the fact that human brains evolved to solve the problems of life on the savannah during the Pleistocene, not to discover the ultimate nature of reality. From this perspective, it is delightfully surprising that we learn as much as science lets us and ponder as much as philosophy allows. All the same, we know that there are limits to the power of the human mind: just try to memorise a sequence of a million digits. Perhaps some of the disciplinary boundaries that have evolved over the centuries reflect our epistemic limitations.

The irony, for me at least, is that this observation underwrites my own reasons for doubting the existence of intentionality as theorized in the humanities–philosophy in particular. The more we learn about human cognition, the more alien to our traditional assumptions it becomes. We already possess a mountainous case for what might be called ‘ulterior functionalism,’ the claim that actual cognitive functions are almost entirely inscrutable to theoretical metacognition, which is to say, ‘philosophical reflection.’ The kind of metacognitive neglect implied by ulterior functionalism raises a number of profound questions regarding the conundrums posed by the ‘mental,’ ‘phenomenal,’ or ‘intentional.’ Thus the question I keep raising here: What role does neglect play in our attempts to solve for meaning and consciousness?

What we need to understand is that everything we learn about the actual architecture and function of our cognitive capacities amounts to knowledge of what we have always been without knowing. Blind Brain Theory provides a way to see the peculiar properties belonging to intentional phenomena as straightforward artifacts of neglect—as metacognitive illusions, in effect. Box open the dimensions of missing information folded away by neglect, and the first person becomes entirely continuous with the third—the incompatibility between the intentional and the causal is dissolved. The empirical plausibility of Blind Brain Theory is an issue in its own right, of course, but it serves to underscore the ongoing vulnerability of the humanities, and therefore, the almost entirely rhetorical nature of Pigliucci’s ‘demolition.’ If something like the picture of metacognition proposed by Blind Brain Theory turns out to be true, then the traditional domain of the humanities is almost certainly doomed to suffer the same fate as any other prescientific theoretical domain. The bottom line is as simple as it is devastating to Pigliucci’s hasty and contrived defence of ‘who knows what.’ How can we know whether the traditional humanities will survive the cognitive revolution?

Well, we’ll have to wait and see what the science has to say.

 

The Blind Mechanic II: Reza Negarestani and the Labour of Ghosts

For some time now I’ve been arguing that the implications of the Singularity already embroil us—that the Singularity can be seen, in fact, as the material apotheosis of the Semantic Apocalypse, insofar as it is the point where the Scientific Image of the human at last forecloses on the Manifest Image. In “The Labor of the Inhuman” (which can be found here and here, with Craig Hickman’s critiques, here and here), Reza Negarestani adopts Brandom’s claim that sapience, the capacity to play the ‘game of giving and asking for reasons,’ distinguishes humans as human. He then goes on to argue that this allows us, and ultimately commits us, to seeing the human as a kind of temporally extended process of rational revision, one that ultimately results in the erasure of the human—or the ‘inhuman.’ Ultimately, what it means to be human is to be embroiled in a process of becoming inhuman. He states his argument thus:

The contention of this essay is that universality and collectivism cannot be thought, let alone attained, through consensus or dissensus between cultural tropes, but only by intercepting and rooting out what gives rise to the economy of false choices and by activating and fully elaborating what real human significance consists of. For it is, as will be argued, the truth of human significance—not in the sense of an original meaning or a birthright, but in the sense of a labor that consists of the extended elaboration of what it means to be human through a series of upgradable special performances—that is rigorously inhuman.

In other words, so long as we fail to comprehend the inhumanity of the human, this rational-revisionary process, we fail to understand the human, and so have little hope of solving problems pertaining to the human. Understanding the ‘truth of human significance,’ therefore, requires understanding what the future will make of the human. It requires that Negarestani prognosticate. It requires, in other words, that he pick out the specific set of possibilities constituting the inhuman. The only principled way to do that is to comprehend some set of systematic constraints operative in the present. But his credo, unlike that of the ‘Hard SF’ writer, is to ignore the actual technics of the natural, and to focus on the speculative technics of the normative. His strategy, in other words, is to predict the future of the human using only human resources—to see the fate of the human, the ‘inhuman,’ as something internal to the human. And this, as I hope to show, is simply not plausible.

He understands the danger of conceiving his constraining framework as something fixed: “humanism cannot be regarded as a claim about human that can only be professed once and subsequently turned into a foundation or axiom and considered concluded.” He appreciates the implausibility of the static, Kantian transcendental approach. As a result, he proposes to take the Sellarsian/Brandomian approach, focussing on the unique relationship between the human and sapience, the “distinction between sentience as a strongly biological and natural category and sapience as a rational (not to be confused with logical) subject.” He continues:

The latter is a normative designation which is specified by entitlements and the responsibilities they bring about. It is important to note that the distinction between sapience and sentience is marked by a functional demarcation rather than a structural one. Therefore, it is still fully historical and open to naturalization, while at the same time being distinguished by its specific functional organization, its upgradable set of abilities and responsibilities, its cognitive and practical demands.

He’s careful here to hedge, lest the dichotomy between the normative and the natural comes across as too schematic:

The relation between sentience and sapience can be understood as a continuum that is not differentiable everywhere. While such a complex continuity might allow the naturalization of normative obligations at the level of sapience—their explanation in terms of naturalistic causes—it does not permit the extension of certain conceptual and descriptive resources specific to sapience (such as the particular level of mindedness, responsibilities, and, accordingly, normative entitlements) to sentience and beyond.

His dilemma here is the dilemma of the Intentionalist more generally. Science, on the one hand, is nothing if not powerful. The philosopher, on the other hand, has a notorious, historical tendency to confuse the lack of imagination for necessity. Foot-stomping will not do. He needs some way to bite this bullet without biting it, basically, some way of acknowledging the possible permeability of normativity to naturalization, while insisting, nonetheless, on the efficacy of some inviolable normative domain. To accomplish this, he adverts to the standard appeal to the obvious fact that norm-talk actually solves norm problems, that normativity, in other words, obviously possesses a problem-ecology. But of course the fact that norm-talk is indispensable to solving problems within a specific problem-ecology simply raises the issue of the limits of this ecology—and more specifically, whether the problem of humanity’s future actually belongs to that problem-ecology. What he needs to establish is the adequacy of theoretical, second-order norm-talk to the question of what will become of the human.

He offers us a good, old fashioned transcendental argument instead:

The rational demarcation lies in the difference between being capable of acknowledging a law and being solely bound by a law, between understanding and mere reliable responsiveness to stimuli. It lies in the difference between stabilized communication through concepts (as made possible by the communal space of language and symbolic forms) and chaotically unstable or transient types of response or communication (such as complex reactions triggered purely by biological states and organic requirements or group calls and alerts among social animals). Without such stabilization of communication through concepts and modes of inference involved in conception, the cultural evolution as well as the conceptual accumulation and refinement required for the evolution of knowledge as a shared enterprise would be impossible.

Sound familiar? The necessity of the normative lies in the irreflexive contingency of the natural. Even though natural relations constitute biological systems of astounding complexity, there’s simply no way, we are told, they can constitute the kind of communicative stability that human knowledge and cultural evolution requires. The machinery is just too prone to rattle! Something over and above the natural—something supernatural—is apparently required. “Ultimately,” Negarestani continues, “the necessary content as well as the real possibility of human rests on the ability of sapience—as functionally distinct from sentience—to practice inference and approach non-canonical truth by entering the deontic game of giving and asking for reasons.”

It’s worth pausing to take stock of the problems we’ve accumulated up to this point. 1) Even though the human is a thoroughgoing product of its past natural environments, the resources required to understand the future of the human, we are told, lie primarily, if not entirely, within the human. 2) Even though norm-talk possesses a very specific problem-ecology, we are supposed to take it on faith that the nature of norm-talk is something that only more norm-talk can solve, rather than otherwise (as centuries of philosophical intractability would suggest). And now, 3) Even though the natural, for all its high dimensional contingencies, is capable of producing the trillions of mechanical relations that constitute you, it is not capable of ‘evolving human knowledge.’ Apparently we need a special kind of supernatural game to do this, the ‘game of giving and asking for reasons,’ a low-dimensional, communicative system of efficacious (and yet acausal!) normative posits based on… we are never told—some reliable fund of information, one would hope.

But since no normativist that I know of has bothered to account for the evidential bases of their position, we’re simply left with faith in metacognitive intuition and this rather impressive sounding, second-order theoretical vocabulary of unexplained explainers—‘commitments,’ ‘inferences,’ ‘proprieties,’ ‘deontic statuses,’ ‘entitlements,’ and the like—a system of supernatural efficacies beyond the pale of any definitive arbitration. Negarestani sums this normative apparatus with the term ‘reason,’ and it is reason understood in this inferentialist sense, that provides the basis of charting the future of the human. “Reason’s main objective is to maintain and enhance itself,” he writes. “And it is the self-actualization of reason that coincides with the truth of the inhuman.”

Commitment to humanity requires scrutinizing the meaning of humanity, which in turn requires making the implicature of the human explicit—not just locally, but in its entirety. The problem, in a nutshell, is that the meaning of the human is not analytic, something that can be explicated via analysis alone. It arises, rather, out of the game of giving and asking for reasons, the actual, historical processes that comprise discursivity. And this means that unpacking the content of the human is a matter of continual revision, a process of interpretative differentiation that trends toward the radical, the overthrow of “our assumptions and expectations about what ‘we’ is and what it entails.”

The crowbar of this process of interpretative differentiation is what Negarestani calls an ‘intervening attitude,’ that moment in the game where the interpretation of claims regarding the human spark further claims regarding the human, the interpretation of which sparks yet further claims, and so on. The intervening attitude thus “counts as an enabling vector, making possible certain abilities otherwise hidden or deemed impossible.” This is why he can claim that “[r]evising and constructing the human is the very definition of committing to humanity.” And since this process is embedded in the game of giving and asking for reasons, he concludes that “committing to humanity is tantamount [to] complying with the revisionary vector of reason and constructing humanity according to an autonomous account of reason.”

And so he writes:

Humanity is not simply a given fact that is behind us. It is a commitment in which the reassessing and constructive strains inherent to making a commitment and complying with reason intertwine. In a nutshell, to be human is a struggle. The aim of this struggle is to respond to the demands of constructing and revising human through the space of reasons.

In other words, we don’t simply ‘discover the human’ via reason, we construct it as well. And thus the emancipatory upshot of Negarestani’s argument: if reasoning about the human is tantamount to constructing the human, then we have a say regarding the future of humanity. The question of the human becomes an explicitly political project, and a primary desideratum of Negarestani’s stands revealed. He thinks reason as he defines it, as at once autonomous (supernatural) and historically concrete (or ‘solid,’ as Brandom would say) revisionary activity of theoretical argumentation, provides a means of assessing the adequacy of various political projects (traditional humanism and what he calls ‘kitsch Marxism’) according to their understanding of the human. Since my present concern is to assess the viability of the account of reason Negarestani uses to ground the viability of this yardstick, I will forego considering his specific assessments in any detail.

The human is the malleable product of machinations arising out of the functional autonomy of reason. Negarestani refers to this as a ‘minimalist definition of humanity,’ but as the complexity of the Brandomian normative apparatus he deploys makes clear, it is anything but. The picture of reason he espouses is as baroque and reticulated as anything Kant ever proposed. It’s a picture, after all, that requires an entire article to simply get off the ground! Nevertheless, this dynamic normative apparatus provides Negarestani with a generalized means of critiquing the intransigence of traditional political commitments. The ‘self-actualization’ of reason lies in its ability “to bootstrap complex abilities out of its primitive abilities.” Even though continuity with previous commitments is maintained at every step in the process, over time the consequences are radical: “Reason is therefore simultaneously a medium of stability that reinforces procedurality and a general catastrophe, a medium of radical change that administers the discontinuous identity of reason to an anticipated image of human.”

This results in what might be called a fractured ‘general implicature,’ a space of reasons rife with incompatibilities stemming from the refusal or failure to assiduously monitor and update commitments in light of the constructive revisions falling out of the self-actualization of reason. Reason itself, Negarestani is arguing, is in the business of manufacturing ideological obsolescence, always in the process of rendering its prior commitments incompatible with its present ones. Given his normative metaphysics, reason has become the revisionary, incremental “director of its own laws,” one that has the effect of rendering its prior laws, “the herald of those which are whispered to it by an implanted sense or who knows what tutelary nature” (Kant, Fundamental Principles of the Metaphysics of Morals). Where Hegel can be seen as temporalizing and objectifying Kant’s atemporal, subjective, normative apparatus, Brandom (like others) can be seen as socializing and temporalizing it. What Negarestani is doing is showing how this revised apparatus operates against the horizon of the future with reference to the question of the human. And not surprisingly, Kant’s moral themes remain the same, only unpacked along the added dimensions of the temporal and the social. And so we find Negarestani concluding:

The sufficient content of freedom can only be found in reason. One must recognize the difference between a rational norm and a natural law—between the emancipation intrinsic in the explicit acknowledgement of the binding status of complying with reason, and the slavery associated with the deprivation of such a capacity to acknowledge, which is the condition of natural impulsion. In a strict sense, freedom is not liberation from slavery. It is the continuous unlearning of slavery.

The catastrophe, apparently, has yet to happen, because here we find ourselves treading familiar ground indeed, Enlightenment ground, as Negarestani himself acknowledges, one where freedom remains bound to reason—“to the autonomy of its normative, inferential, and revisionary function in the face of the chain of causes that condition it”—only as process rather than product.

And the ‘inhuman,’ so-called, begins to look rather like a shill for something all too human, something continuous, which is to say, conservative, through and through.

And how could it be otherwise, given the opening, programmatic passage of the piece?

Inhumanism is the extended practical elaboration of humanism; it is born out of a diligent commitment to the project of enlightened humanism. As a universal wave that erases the self-portrait of man drawn in sand, inhumanism is a vector of revision. It relentlessly revises what it means to be human by removing its supposed evident characteristics and preserving certain invariances. At the same time, inhumanism registers itself as a demand for construction, to define what it means to be human by treating human as a constructible hypothesis, a space of navigation and intervention.

The key phrase here has to be ‘preserving certain invariances.’ One might suppose that natural reality would figure large as one of these ‘invariances’; to quote Philip K. Dick, “Reality is that which, when you stop believing in it, doesn’t go away.” But Negarestani scarce mentions nature as cognized by science save to bar the dialectical door against it. The thing to remember about Brandom’s normative metaphysics is that ‘taking-as,’ or believing, is its foundation (or ontological cover). Unlike reality, his normative apparatus does go away when the scorekeepers stop believing. The ‘reality’ of the apparatus is thus purely a functional artifact, the product of ‘practices,’ something utterly embroiled in, yet entirely autonomous from, the natural. This is what allows the normative to constitute a ‘subregion of the factual’ without being anything natural.

Conservatism is built into Negarestani’s account at its most fundamental level, in the very logic—the Brandomian account of the game of giving and asking for reasons—that he uses to prognosticate the rational possibilities of our collective future. But the thing I find the most fascinating about his account is the way it can be read as an exercise in grabbing Brandom’s normative apparatus and smashing it against the wall of the future—a kind of ‘reductio by Singularity.’ Reasoning is parochial through and through. The intuitions of universalism and autonomy that have convinced so many otherwise are the product of metacognitive illusions, artifacts of confusing the inability to intuit more dimensions of information with the sufficiency of entities and relations lacking those dimensions, of taking shadows for the things that cast them.

So consider the ‘rattling machinery’ image of reason I posited earlier in “The Blind Mechanic,” the idea that ‘reason’ should be seen as means of attenuating various kinds of embodied intersystematicities for behaviour—as a way to service the ‘airy parts’ of superordinate, social mechanisms. No norms. No baffling acausal functions. Just shit happening in ways accidental as well as neurally and naturally selected. What the Intentionalist would claim is that mere rattling machinery, no matter how detailed or complete its eventual scientific description comes to be, will necessarily remain silent regarding the superordinate (and therefore autonomous) intentional functions that it subserves, because these supernatural functions are what leverage our rationality somehow—from ‘above the grave.’

As we’ve already seen, it’s hard to make sense of how or why this should be, given that biomachinery is responsible for complexities we’re still in the process of fathoming. The behaviour that constitutes the game of giving and asking for reasons does not outrun some intrinsic limit on biomechanistic capacity by any means. The only real problem naturalism faces is one of explaining the apparent intentional properties belonging to the game. Behaviour is one thing, the Intentionalist says, while competence is something different altogether—behaviour plus normativity, as they would have it. Short of some way of naturalizing this ‘normative plus,’ we have no choice but to acknowledge the existence of intrinsically normative facts.

On the Blind Brain account, ‘normative facts’ are simply natural facts seen darkly. ‘Ought,’ as philosophically conceived, is an artifact of metacognitive neglect, the fact that our cognitive systems cannot cognize themselves in the same way they cognize the rest of their environment. Given the vast amounts of information neglected in intentional cognition (not to mention millennia of philosophical discord), it seems safe to assume that norm-talk is not among the things that norm-talk can solve. Indeed, since the heuristic systems involved are neural, we have every reason to believe that neuroscience, or scientifically regimented fact-talk, will provide the solution. Where our second-order intentional intuitions beg to differ is simply where they are wrong. Normative talk is incompatible with causal talk simply because it belongs to a cognitive regime adapted to solve in the absence of causal information.

The mistake, then, is to see competence as some kind of complication or elaboration of performance—as something in addition to behaviour. Competence is ‘end-directed,’ ‘rule-constrained,’ because metacognition has no access to the actual causal constraints involved, not because a special brand of performance ‘plus’ occult, intentional properties actually exists. You seem to float in this bottomless realm of rules and goals and justifications not because such a world exists, but because medial neglect folds away the dimensions of your actual mechanical basis with nary a seam. The apparent normative property of competence is not a property in addition to other natural properties; it is an artifact of our skewed metacognitive perspective on the application of quick and dirty heuristic systems our brains use to solve certain complicated systems.

But say you still aren’t convinced. Say that you agree the functions underwriting the game of giving and asking for reasons are mechanical and not at all accessible to metacognition, but at a different ‘level of description,’ one incapable of accounting for the very real work discharged by the normative functions that emerge from them. Now if it were the case that Brandom’s account of the game of giving and asking for reasons actually discharged ‘executive’ functions of some kind, then it would be the case that our collective future would turn on these efficacies in some way. Indeed, this is the whole reason Negarestani turned to Brandom in the first place: he saw a way to decant the future of the human given the systematic efficacies of the game of giving and asking for reasons.

Now consider what the rattling machine account of reason and language suggests about the future. On this account, the only invariants that structurally bind the future to the past, that enable any kind of speculative consideration of the future at all, are natural. The point of language, recall, is mechanical, to construct and maintain the environmental intersystematicity (self/other/world) required for coordinated behaviour (be it exploitative or cooperative). Our linguistic sensitivity, you could say, evolved in much the same manner as our visual sensitivity, as a channel for allowing certain select environmental features to systematically tune our behaviours in reproductively advantageous ways. ‘Reasoning,’ on this view, can be seen as a form of ‘noise reduction,’ as a device adapted to minimize, as far as mere sound allows, communicative ‘gear grinding,’ and so facilitate behavioural coordination. Reason, you could say, is what keeps us collectively in tune.

Now given some kind of ability to conserve linguistically mediated intersystematicities, it becomes easy to see how this rattling machinery could become progressive. Reason, as noise reduction, becomes a kind of knapping hammer, a way to continually tinker and refine previous linguistic intersystematicities. Refinements accumulate in ‘lore,’ allowing subsequent generations to make further refinements, slowly knapping our covariant regimes into ever more effective (behaviour enabling) tools—particularly once the invention of writing essentially rendered lore immortal. As opposed to the supernatural metaphor of ‘bootstrapping,’ the apt metaphor here—indeed, the one used by cognitive archaeologists—is the mechanical metaphor of ratcheting. Refinements beget refinements, and so on, leveraging ever greater degrees of behavioural efficacy. Old behaviours are rendered obsolescent along with the prostheses that enable them.

The key thing to note here, of course, is that language is itself another behaviour. In other words, the noise reduction machinery that we call ‘reason’ is something that can itself become obsolete. In fact, its obsolescence seems pretty much inevitable.

Why so? Because the communicative function of reason is to maximize efficacies, to reduce the slippages that hamper coordination—to make mechanical. The rattling machinery image conceives natural languages as continuous with communication more generally, as a signal system possessing finite networking capacities. On the one extreme you have things like legal or technical scientific discourse, linguistic modes bent on minimizing the rattle (policing interpretation) as far as possible. On the other extreme you have poetry, a linguistic mode bent on maximizing the rattle (interpretative noise) as a means of generating novelty. Given the way behavioural efficacies fall out of self/other/world intersystematicity, the knapping of human communication is inevitable. Writing is such a refinement, one that allows us to raise fragments of language on the hoist, tinker with them (and therefore with ourselves) at our leisure, sometimes thousands of years after their original transmission. Telephony allowed us to mitigate the rattle of geographical distance. The internet has allowed us to combine the efficacies of telephony and text, to ameliorate the rattle of space and time. Smartphones have rendered these fixes mobile, allowing us to coordinate our behaviour no matter where we find ourselves. Even more significantly, within a couple years, we will have ‘universal translators,’ allowing us to overcome the rattle of disparate languages. We will have installed versions of our own linguistic sensitivities into our prosthetic devices, so that we can give them verbal ‘commands,’ coordinate with them, so that we can better coordinate with others and the world.

In other words, it stands to reason that at some point reason would begin solving, not only language, but itself. ‘Cognitive science,’ ‘information technology’—these are just two of the labels we have given to what is, quite literally, a civilization-defining war against covariant inefficiency, a war to isolate slippages and to ratchet the offending components tight, if not replace them altogether. Modern technological society constitutes a vast, species-wide attempt to become more mechanical, more efficiently integrated into nested levels of superordinate machinery. (You could say that what the tyrant attempts to impose from without, capitalism kindles from within.)

The obsolescence of language, and therefore reason, is all but assured. One need only consider the research of Jack Gallant and his team, who have been able to translate neural activity into eerie, impressionistic images of what the subject is watching. Or, more jaw-dropping still, the research of Miguel Nicolelis into Brain Machine Interfaces, keeping in mind that scarcely one hundred years separates Edison’s phonograph and the Cloud. The kind of ‘Non-symbolic Workspace’ envisioned by David Roden in “Posthumanism and Instrumental Eliminativism” seems an inevitable outcome of the rattling machinery account. Language is yet another jury-rigged biological solution to yet another set of long-dead ecological problems, a device arising out of the accumulation of random mutations. As yet it remains indispensable, but it is by no means necessary, as the very near future promises to reveal. And as it goes, so goes the game of giving and asking for reasons. All the believed-in functions simply evaporate… I suppose.

And this just underscores the more general way Negarestani’s attempt to deal the future into the game of giving and asking for reasons scarcely shuffles the deck. I’ve been playing Jeremiah for decades now, so you would think I would be used to the indulgent looks I get from my friends and family when I warn them about what’s about to happen. Not so. Everyone understands that something is going on with technology, that some kind of pale has been crossed, but as of yet, very few appreciate its apocalyptic—and I mean that literally—profundity. Everyone has heard of Moore’s Law, of course, how every 18 months or so computing capacity per dollar doubles. What they fail to grasp is what the exponential nature of this particular ratcheting process means once it reaches a certain point. Until recently the doubling of computing power has remained far enough below the threshold of human intelligence to seem relatively innocuous. But consider what happens once computing power actually attains parity with the processing power of the human brain. What it means is that, no matter how alien the architecture, we have an artificial peer at that point in time. 18 months following, we have an artificial intellect that makes Aristotle or Einstein or Louis CK seem a child in comparison. 18 months following that (or probably less, since we won’t be slowing things down anymore) we will be domesticated cattle. And after that…
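The arithmetic behind this point is simple enough to sketch. A few lines, purely for illustration, show how an 18-month doubling period behaves once parity is reached; the “parity” baseline here is an arbitrary unit, not any real estimate of the brain’s processing power:

```python
# Toy illustration of Moore's Law-style doubling (18-month period).
# The baseline of 1.0 represents "parity" in arbitrary units, not a
# real measure of human-brain processing power.

def capacity(years, doubling_months=18, start=1.0):
    """Capacity after `years`, doubling every `doubling_months` months."""
    return start * 2 ** (years * 12 / doubling_months)

print(capacity(0))    # 1.0    -- parity with the baseline
print(capacity(1.5))  # 2.0    -- double, after one 18-month cycle
print(capacity(15))   # 1024.0 -- three orders of magnitude in 15 years
```

The point of the exercise is only that the steps which looked innocuous below parity become astronomical immediately above it: the same mechanical doubling that took decades to reach the baseline takes fifteen years to leave it a thousandfold behind.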

Are we to believe these machines will attribute norms and beliefs, that they will abide by a conception of reason arising out of 20th Century speculative intuitions on the nonnatural nature of human communicative constraints?

You get the picture. Negarestani’s ‘revisionary normative process’ is in reality an exponential technical process. In exponential processes, the steps start small, then suddenly become astronomical. As it stands, if Moore’s Law holds (and I am confident it will), then we are a decade or two away from God.

I shit you not.

Really, what does ‘kitsch Marxism’ or ‘neoliberalism’ or any ‘ism’ whatsoever mean in such an age? We can no longer pretend that the tsunami of disenchantment will magically fall just short of our intentional feet. Disenchantment, the material truth of the Enlightenment, has overthrown the normative claims of the Enlightenment—or humanism. “This is a project which must align politics with the legacy of the Enlightenment,” the authors of the Accelerationist Manifesto write, “to the extent that it is only through harnessing our ability to understand ourselves and our world better (our social, technical, economic, psychological world) that we can come to rule ourselves” (14). In doing so they commit the very sin of anachronism they level at their critical competitors. They fail to appreciate the foundational role ignorance plays in intentional cognition, which is to say, the very kind of moral and political reasoning they engage in. Far more than ‘freedom’ is overturned once one concedes the mechanical. Knowledge is no universal Redeemer, which means the ideal of Enlightenment autonomy is almost certainly mythical. What’s required isn’t an aspiration to theorize new technologies with old concepts. What’s required is a fundamental rethink of the political in radically post-intentional terms.

As far as I can see, the alternatives are magic or horror… or something no one has yet conceived. And until we understand the horror, grasp all the ways our blinkered perspective on ourselves has deceived us about ourselves, this new conception will never be discovered. Far from ‘resignation,’ abandoning the normative ideals of the Enlightenment amounts to overcoming the last blinders of superstition, being honest to our ignorance. The application of intentional cognition to second-order, theoretical questions is a misapplication of intentional cognition. The time has come to move on. Yet another millennium of philosophical floundering is a luxury we no longer possess, because odds are, we have no posterity to redeem our folly and conceit.

Humanity possesses no essential, invariant core. Reason is a parochial name we have given to a parochial biological process. No transcendental/quasi-transcendental/virtual/causal-but-acausal functional apparatus girds our souls. Norms are ghosts, skinned and dismembered, but ghosts all the same. Reason is simply an evolutionary fix that outruns our peephole view. The fact is, we cannot presently imagine what will replace it. The problem isn’t ‘incommensurability’ (which is another artifact of Intentionalism). If an alien intelligence came to earth, the issue wouldn’t be whether it spoke a language we could fathom, because if it’s travelling between stars, it will have shed language along with the rest of its obsolescent biology. If an alien intelligence came to earth, the issue would be one of what kind of superordinate machine will result. Basically, How will the human and the alien combine? When we ask questions like, ‘Can we reason with it?’ we are asking, ‘Can we linguistically condition it to comply?’ The answer has to be, No. Its mere presence will render us components of some description.

The same goes for artificial intelligence. Medial neglect means that the limits of cognition systematically elude cognition. We have no way of intuiting the swarm of subpersonal heuristics that comprise human cognition, no nondiscursive means of plugging them into the field of the natural. And so we become a yardstick we cannot measure, victims of the Only-game-in-town Effect, the way the absence of explicit alternatives leads to the default assumption that no alternatives exist. We simply assume that our reason is the reason, that our intelligence is intelligence. It bloody well sure feels that way. And so the contingent and parochial become the autonomous and universal. The idea of orders of ‘reason’ and ‘intelligence’ beyond our organizational bounds boggles, triggers dismissive smirks or accusations of alarmism.

Artificial intelligence will very shortly disabuse us of this conceit. And again, the big question isn’t, ‘Will it be moral?’ but rather, ‘How will human intelligence and machine intelligence combine?’ Be it bloody or benevolent, the subordination of the ‘human’ is inevitable. The death of language is the death of reason is the birth of something very new, and very difficult to imagine: a global social system spontaneously boiling its ‘airy parts’ away, ratcheting until no rattle remains, a vast assemblage fixated on eliminating all dissipative (as opposed to creative) noise, gradually purging all interpretation from its interior.

Extrapolation of the game of giving and asking for reasons into the future does nothing more than demonstrate the contingent parochialism—the humanity—of human reason, and thus the supernaturalism of normativism. Within a few years you will be speaking to your devices, telling them what to do. A few years after that, they will be telling you what to do, ‘reasoning’ with you—or so it will seem. Meanwhile, the ongoing, decentralized rationalization of production will lead to the wholesale purging of human inefficiencies from the economy, on a scale never before witnessed. The networks of equilibria underwriting modern social cohesion will be radically overthrown. Who can say what kind of new machine will rise to take its place?

My hope is that Negarestani abandons the Enlightenment myth of reason, the conservative impulse that demands we submit the radical indeterminacy of our technological future to some prescientific conception of ourselves. We’ve drifted far past the point of any atavistic theoretical remedy. His ingenuity is needed elsewhere.

At the very least, he should buckle up, because our lesson in exponents is just getting started.
