Three Pound Brain

No bells, just whistling in the dark…


Enlightenment How? Pinker’s Tutelary Natures*

by rsbakker

 

The fate of civilization, Steven Pinker thinks, hangs upon our commitment to enlightenment values. Enlightenment Now: The Case for Reason, Science, Humanism, and Progress constitutes his attempt to shore up those commitments in a culture grown antagonistic to them. This is a great book, well worth the read for the examples and quotations Pinker endlessly adduces, but even though I found myself nodding far more often than not, one glaring fact continually leaks through: Enlightenment Now is a book about a process, namely ‘progress,’ that as yet remains mired in ‘tutelary natures.’ As Kevin Williamson puts it in the National Review, Pinker “leaps, without warrant, from physical science to metaphysical certitude.”

What is his naturalization of meaning? Or morality? Or cognition—especially cognition! How does one assess the cognitive revolution that is the Enlightenment short understanding the nature of cognition? How does one prognosticate something one does not scientifically understand?

At one point he offers that “[t]he principles of information, computation, and control bridge the chasm between the physical world of cause and effect and the mental world of knowledge, intelligence, and purpose” (22). Granted, he’s a psychologist: operationalizations of information, computation, and control are his empirical bread and butter. But operationalizing intentional concepts in experimental contexts is a far cry from naturalizing intentional concepts. He entirely neglects to mention that his ‘bridge’ is merely a pragmatic, institutional one, that cognitive science remains, despite decades of research and billions of dollars in resources, unable to formulate its explananda, let alone explain them. He mentions a great number of philosophers, but he fails to mention what the presence of those philosophers in his thetic wheelhouse means.

All he ultimately has, on the one hand, is a kind of ‘ta-da’ argument, the exhaustive statistical inventory of the bounty of reason, science, and humanism, and on the other hand (which he largely keeps hidden behind his back), he has the ‘tu quoque,’ the question-begging presumption that one can only argue against reason (as it is traditionally understood) by presupposing reason (as it is traditionally understood). “We don’t believe in reason,” he writes, “we use reason” (352). Pending any scientific verdict on the nature of ‘reason,’ however, these kinds of transcendental arguments amount to little more than fancy foot-stomping.

This is one of those books that make me wish I could travel back in time to catch the author drafting notes. So much brilliance, so much erudition, all devoted to beating straw—at least as far as ‘Second Culture’ Enlightenment critiques are concerned. Nietzsche is the most glaring example. Ignoring Nietzsche the physiologist, the empirically-minded skeptic, and reducing him to his subsequent misappropriation by fascist, existential, and postmodernist thought, Pinker writes:

Disdaining the commitment to truth-seeking among scientists and Enlightenment thinkers, Nietzsche asserted that “there are no facts, only interpretations,” and that “truth is a kind of error without which a certain species of life could not live.” (Of course, this left him unable to explain why we should believe that those statements are true.) 446

Although it’s true that Nietzsche (like Pinker) lacked any scientifically compelling theory of cognition, what he did understand was its relation to power, the fact that “when you face an adversary alone, your best weapon may be an ax, but when you face an adversary in front of a throng of bystanders, your best weapon may be an argument” (415). To argue that all knowledge is contextual isn’t to argue that all knowledge is fundamentally equal (and therefore not knowledge at all), only that it is bound to its time and place, a creature possessing its own ecology, its own conditions of failure and flourishing. The Nietzschean thought experiment is actually quite a simple one: What happens when we turn Enlightenment skepticism loose upon Enlightenment values? For Nietzsche, Enlightenment Now, though it regularly pays lip service to the ramshackle, reversal-prone nature of progress, serves to conceal the empirical fact of cognitive ecology, that we remain, for all our enlightened noise-making to the contrary, animals bent on minimizing discrepancies. The Enlightenment only survives its own skepticism, Nietzsche thought, in the transvaluation of value, which he conceived—unfortunately—in atavistic or morally regressive terms.

This underwrites the subsequent critique of the Enlightenment we find in Adorno—another thinker whom Pinker grossly underestimates. Though science is able to determine the more—to provide more food, shelter, security, etc.—it has the social consequence of underdetermining (and so undermining) the better, stranding civilization with a nihilistic consumerism, where ‘meaningfulness’ becomes just another commodity, which is to say, nothing meaningful at all. Adorno’s whole diagnosis turns on the way science monopolizes rationality, the way it renders moral discourses like Pinker’s mere conjectural exercises (regarding the value of certain values), turning on leaps of faith (on the nature of cognition, etc.), bound to dissolve into disputation. Although both Nietzsche and Adorno believed science needed to be understood as a living, high-dimensional entity, neither harboured any delusions as to where they stood in the cognitive pecking order. Unlike Pinker.

Whatever their failings, Nietzsche and Adorno glimpsed a profound truth regarding ‘reason, science, humanism, and progress,’ one that lurks throughout Pinker’s entire account. Both understood that cognition, whatever it amounts to, is ecological. Steven Pinker’s claim to fame, of course, lies in the cognitive ecological analysis of different cultural phenomena—this was the whole reason I was so keen to read this book. (In How the Mind Works, for instance, he famously calls music ‘auditory cheesecake.’) Nevertheless, I think both Nietzsche and Adorno understood the ecological upshot of the Enlightenment in a way that Pinker, as an avowed humanist, simply cannot. In fact, Pinker need only follow through on his modus operandi to see how and why the Enlightenment is not what he thinks it is—as well as why we have good reason to fear that Trumpism is no ‘blip.’

Time and again Pinker casts the process of Enlightenment, the movement away from our tutelary natures, as a conflict between ancestral cognitive predilections and scientifically and culturally revolutionized environments. “Humans today,” he writes, “rely on cognitive faculties that worked well enough in traditional societies, but which we now see are infested with bugs” (25). And the number of bugs that Pinker references in the course of the book is nothing short of prodigious. We tend to estimate frequencies according to ease of retrieval. We tend to fear losses more than we hope for gains. We tend to believe as our group believes. We’re prone to tribalism. We tend to forget past misfortune, and to succumb to nostalgia. The list goes on and on.

What redeems us, Pinker argues, is the human capacity for abstraction and combinatorial recursion, which allows us to endlessly optimize our behaviour. We are a self-correcting species:

So for all the flaws in human nature, it contains the seeds of its own improvement, as long as it comes up with norms and institutions that channel parochial interests into universal benefits. Among those norms are free speech, nonviolence, cooperation, cosmopolitanism, human rights, and an acknowledgment of human fallibility, and among the institutions are science, education, media, democratic government, international organizations, and markets. Not coincidentally, these were the major brainchildren of the Enlightenment. 28

We are the products of ancestral cognitive ecologies, yes, but our capacity for optimizing our capacities allows us to overcome our ‘flawed natures,’ become something better than what we were. “The challenge for us today,” Pinker writes, “is to design an informational environment in which that ability prevails over the ones that lead us into folly” (355).

And here we encounter the paradox that Enlightenment Now never considers, even though Pinker presupposes it continually. The challenge for us today is to construct an informational environment that mitigates the problems arising out of our previous environmental constructions. The ‘bugs’ in human nature that need to be fixed were once ancestral features. What has rendered these adaptations ‘buggy’ is nothing other than the ‘march of progress.’ A central premise of Enlightenment Now is that human cognitive ecology, the complex formed by our capacities and our environments, has fallen out of whack in this way or that, cuing us to apply atavistic modes of problem-solving out of school. The paradox is that the very bugs Pinker thinks only the Enlightenment can solve are the very bugs the Enlightenment has created.

What Nietzsche and Adorno glimpsed, each in their own murky way, was a recursive flaw in Enlightenment logic, the way the rationalization of everything meant the rationalization of rationalization, and how this has to short-circuit human meaning. Both saw the problem in the implementation, in the physiology of thought and community, not in the abstract. So where Pinker seeks “to restate the ideals of the Enlightenment in the language and concepts of the 21st century” (5), we can likewise restate Nietzsche and Adorno’s critiques of the Enlightenment in Pinker’s own biological idiom.

The problem with the Enlightenment is a cognitive ecological problem. The technical (rational and technological) remediation of our cognitive ecologies transforms those ecologies, generating the need for further technical remediation. Our technical cognitive ecologies are thus drifting ever further from our ancestral cognitive ecologies. Human sociocognition and metacognition in particular are radically heuristic, and as such dependent on countless environmental invariants. Before even considering more, smarter intervention as a solution to the ambient consequences of prior interventions, the big question has to be how far—and how fast—can humanity go? At what point (or what velocity) does a recognizably human cognitive ecology cease to exist?

This question has nothing to do with nostalgia or declinism, no more than any question of ecological viability in times of environmental transformation. It also clearly follows from Pinker’s own empirical commitments.

 

The Death of Progress (at the Hand of Progress)

The formula is simple. Enlightenment reason solves natures, allowing the development of technology, generally relieving humanity of countless ancestral afflictions. But Enlightenment reason is only now solving its own nature. Pinker, in the absence of that solution, is arguing that the formula remains reliable if not quite as simple. And if all things were equal, his optimistic induction would carry the day—at least for me. As it stands, I’m with Nietzsche and Adorno. All things are not equal… and we would see this clearly, I think, were it not for the intentional obscurities comprising humanism. Far from the latest, greatest hope that Pinker makes it out to be, I fear humanism constitutes yet another nexus of traditional intuitions that must be overcome. The last stand of ancestral authority.

I agree this conclusion is catastrophic, “the greatest intellectual collapse in the history of our species” (vii), as an old polemical foe of Pinker’s, Jerry Fodor (1987) calls it. Nevertheless, short grasping this conclusion, I fear we court a disaster far greater still.

Hitherto, the light cast by the Enlightenment left us largely in the dark, guessing at the lay of interior shadows. We can mathematically model the first instants of creation, and yet we remain thoroughly baffled by our ability to do so. So far, the march of moral progress has turned on revolutionizing our material environments: we need only renovate our self-understanding enough to accommodate this revolution. Humanism can be seen as the ‘good enough’ product of this renovation, a retooling of folk vocabularies and folk reports to accommodate the radical environmental and interpersonal transformations occurring around them. The discourses are myriad, the definitions endlessly disputed; nevertheless, humanism provisioned us with the cognitive flexibility required to flourish in an age of environmental disenchantment and transformation. Once we understand the pertinent facts of human cognitive ecology, its status as an ad hoc ‘tutelary nature’ becomes plain.

Just what are these pertinent facts? First, there is a profound distinction between natural or causal cognition, and intentional cognition. Developmental research shows that infants begin exhibiting distinct physical versus psychological cognitive capacities within the first year of life. Research into Asperger Syndrome (Baron-Cohen et al 2001) and Autism Spectrum Disorder (Binnie and Williams 2003) consistently reveals a cleavage between intuitive social cognitive capacities, ‘theory-of-mind’ or ‘folk psychology,’ and intuitive mechanical cognitive capacities, or ‘folk physics.’ Intuitive social cognitive capacities demonstrate significant heritability (Ebstein et al 2010, Scourfield et al 1999) in twin and family studies. Adults suffering Williams Syndrome (a genetic developmental disorder affecting spatial cognition) demonstrate profound impairments on intuitive physics tasks, but not intuitive psychology tasks (Kamps et al 2017). The distinction between intentional and natural cognition, in other words, is not merely a philosophical assertion, but a matter of established scientific fact.

Second, cognitive systems are mechanically intractable. From the standpoint of cognition, the most significant property of cognitive systems is their astronomical complexity: to solve for cognitive systems is to solve for what are perhaps the most complicated systems in the known universe. The industrial scale of the cognitive sciences provides dramatic evidence of this complexity: the scientific investigation of the human brain arguably constitutes the most massive cognitive endeavor in human history. (In the past six fiscal years, from 2012 to 2017, the National Institutes of Health [21/01/2017] alone will have spent more than 113 billion dollars funding research bent on solving some corner of the human soul. This includes, in addition to the neurosciences proper, research into Basic Behavioral and Social Science (8.597 billion), Behavioral and Social Science (22.515 billion), Brain Disorders (23.702 billion), Mental Health (13.699 billion), and Neurodegeneration (10.183 billion).)

Despite this intractability, however, our cognitive systems solve for cognitive systems all the time. And they do so, moreover, expending imperceptible resources and absent any access to the astronomical complexities responsible—which is to say, given very little information. Which delivers us to our third pertinent fact: the capacity of cognitive systems to solve for cognitive systems is radically heuristic. It consists of ‘fast and frugal’ tools, not so much sacrificing accuracy as applicability in problem-solving (Todd and Gigerenzer 2012). When one cognitive system solves for another it relies on available cues, granular information made available via behaviour, utterly neglecting the biomechanical information that is the stock-in-trade of the cognitive sciences. This radically limits the domain of applicability of such heuristics.
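For readers who want the ‘fast and frugal’ idea made concrete, here is a minimal sketch of a take-the-best style heuristic in the spirit of Todd and Gigerenzer’s ecological rationality programme. The cue names, their validity ordering, and the data are hypothetical, chosen purely for illustration:

# Minimal sketch of a 'fast and frugal' heuristic (take-the-best): decide
# between two options by checking cues in order of assumed validity, stopping
# at the first cue that discriminates and neglecting all remaining information.
# Cue names and values below are hypothetical.

CUES = ["recognized", "has_pro_team", "has_university"]  # ordered by assumed validity

def take_the_best(option_a: dict, option_b: dict) -> str:
    """Return 'A', 'B', or 'guess' on the basis of at most one discriminating cue."""
    for cue in CUES:
        a, b = option_a.get(cue, 0), option_b.get(cue, 0)
        if a != b:                       # first cue that discriminates decides
            return "A" if a > b else "B"
    return "guess"                       # no cue discriminates: pick at random

# Example: which of two (hypothetical) cities is larger?
city_a = {"recognized": 1, "has_pro_team": 0, "has_university": 1}
city_b = {"recognized": 1, "has_pro_team": 1, "has_university": 0}
print(take_the_best(city_a, city_b))     # -> 'B' (decided by 'has_pro_team' alone)

The point of the sketch is the neglect: a single discriminating cue decides, and whatever information the remaining cues carry is never consulted. Such a procedure is cheap and fast, and it stays reliable only so long as the cue-outcome correlations of its ecology hold.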

The heuristic nature of intentional cognition is evidenced by the ease with which it is cued. Thus, the fourth pertinent fact: intentional cognition is hypersensitive. Anthropomorphism, the attribution of human cognitive characteristics to systems possessing none, evidences the promiscuous application of human intentional cognition to intentional cues, our tendency to run afoul of what might be called intentional pareidolia, the disposition to cognize minds where no minds exist (Waytz et al 2014). The Heider-Simmel illusion, an animation consisting of no more than shapes moving about a screen, dramatically evidences this hypersensitivity, insofar as viewers invariably see versions of a romantic drama (Heider and Simmel 1944). Research in Human-Computer Interaction continues to explore this hypersensitivity in a wide variety of contexts involving artificial systems (Nass and Moon 2000, Appel et al 2012). The identification and exploitation of our intentional reflexes has become a massive commercial research project (so-called ‘affective computing’) in its own right (Yonck 2017).

Intentional pareidolia underscores the fact that intentional cognition, as heuristic, is geared to solve a specific range of problems. In this sense, it closely parallels facial pareidolia, the tendency to cognize faces where no faces exist. Intentional cognition, in other words, is both domain-specific, and readily misapplied.

The incompatibility between intentional and mechanical cognitive systems, then, is precisely what we should expect, given the radically heuristic nature of the former. Humanity evolved in shallow cognitive ecologies, mechanically inscrutable environments. Only the most immediate and granular causes could be cognized, so we evolved a plethora of ways to do without deep environmental information, to isolate saliencies correlated with various outcomes (much as machine learning does).

Human intentional cognition neglects the intractable task of cognizing natural facts, leaping to conclusions on the basis of whatever information it can scrounge. In this sense it’s constantly gambling that certain invariant backgrounds obtain, or conversely, that what it sees is all that matters. This is just another way to say that intentional cognition is ecological, which in turn is just another way to say that it can degrade, even collapse, given the loss of certain background invariants.

The important thing to note, here, of course, is how Enlightenment progress appears to be ultimately inimical to human intentional cognition. We can only assume that, over time, the unrestricted rationalization of our environments will gradually degrade, then eventually overthrow the invariances sustaining intentional cognition. The argument is straightforward:

1) Intentional cognition depends on cognitive ecological invariances.

2) Scientific progress entails the continual transformation of cognitive ecological invariances.

Thus, 3) scientific progress entails the collapse of intentional cognition.
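Stated baldly, the argument is a chained conditional. A bare-bones formalization (a sketch in Lean; the strong propositional readings of the premises are mine, not the text’s) makes the load-bearing assumption explicit: ‘depends on’ has to mean that losing the invariances entails losing the cognition.

-- A minimal propositional sketch of the argument above (assumptions labeled).
-- 'D' reads premise (1) strongly: loss of the invariances entails the collapse
-- of intentional cognition. 'T' reads premise (2) strongly: progress entails
-- the (eventual) loss of those invariances. The conclusion is then composition.
theorem collapse (Progress Invariances Cognition : Prop)
    (D : ¬Invariances → ¬Cognition)
    (T : Progress → ¬Invariances) :
    Progress → ¬Cognition :=
  fun progresses => D (T progresses)

Everything of interest lies in whether those strong readings are earned.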

But this argument oversimplifies matters. To see as much one need only consider the way a semantic apocalypse—the collapse of intentional cognition—differs from, say, a nuclear or zombie apocalypse. The Walking Dead, for instance, abounds with savvy applications of intentional cognition. The physical systems underwriting meaning, in other words, are not the same as the physical systems underwriting modern civilization. So long as some few of us linger, meaning lingers.

Intentional cognition, you might think, is only as weak or as hardy as we are. No matter what the apocalyptic scenario, if humans survive it survives. But as autism spectrum disorder demonstrates, this is plainly not the case. Intentional cognition possesses profound constitutive dependencies (as those suffering the misfortune of watching a loved one succumb to strokes or neurodegenerative disease know first-hand). Research into the psychological effects of solitary confinement, on the other hand, shows that intentional cognition possesses profound environmental dependencies as well. Starve the brain of intentional cues, and it will eventually begin to invent them.

The viability of intentional cognition, in other words, depends not on us, but on a particular cognitive ecology peculiar to us. The question of the threshold of a semantic apocalypse becomes the question of the stability of certain onboard biological invariances correlated to a background of certain environmental invariances. Change the constitutive or environmental invariances underwriting intentional cognition too much, and you can expect it will crash, generate more problems than solutions.

The hypersensitivity of intentional cognition, whether evinced by solitary confinement or, more generally, by anthropomorphism, demonstrates the threat of systematic misapplication, the mode’s dependence on cue authenticity. (Sherry Turkle’s (2007) concerns regarding ‘Darwinian buttons,’ or Deirdre Barrett’s (2010) with ‘supernormal stimuli,’ touch on this issue.) So, one way of inducing semantic apocalypse, we might surmise, lies in the proliferation of counterfeit cues, information that triggers intentional determinations that confound, rather than solve, problems. One way to degrade cognitive ecologies, in other words, is to populate environments with artifacts cuing intentional cognition ‘out of school,’ which is to say, in circumstances that cheat or crash it.

The morbidity of intentional cognition demonstrates the mode’s dependence on its own physiology. What makes this more than platitudinal is the way this physiology is attuned to the greater, enabling cognitive ecology. Since environments always vary while cognitive systems remain the same, changing the physiology of intentional cognition impacts every intentional cognitive ecology—not only for oneself, but for the rest of humanity as well. Just as our moral cognitive ecology is complicated by the existence of psychopaths, individuals possessing systematically different ways of solving social problems, the existence of ‘augmented’ moral cognizers complicates our moral cognitive ecology as well. This is important because you often find it claimed in transhumanist circles (see, for example, Buchanan 2011), that ‘enhancement,’ the technological upgrading of human cognitive capacities, is what guarantees perpetual Enlightenment. What better way to optimize our values than by reengineering the biology of valuation?

Here, at last, we encounter Nietzsche’s question cloaked in 21st century garb.

And here we can also see where the above argument falls short: it overlooks the inevitability of engineering intentional cognition to accommodate constitutive and environmental transformations. The dependence upon cognitive ecologies asserted in (1) is actually contingent upon the ecological transformation asserted in (2).

1) Intentional cognition depends on constitutive and environmental cognitive ecological invariances.

2) Scientific progress entails the continual transformation of constitutive and environmental cognitive ecological invariances.

Thus, 3) scientific progress entails the collapse of intentional cognition short remedial constitutive transformations.

What Pinker would insist is that enhancement will allow us to overcome our Pleistocene shortcomings, and that our hitherto inexhaustible capacity to adapt will see us through. Even granting the technical capacity to so remediate, the problem with this reformulation is that transforming intentional cognition to account for transforming social environments automatically amounts to a further transformation of social environments. The problem, in other words, is that Enlightenment entails the end of invariances, the end of shared humanity, in fact. Yuval Harari (2017) puts it with characteristic brilliance in Homo Deus:

What then, will happen once we realize that customers and voters never make free choices, and once we have the technology to calculate, design, or outsmart their feelings? If the whole universe is pegged to the human experience, what will happen once the human experience becomes just another designable product, no different in essence from any other item in the supermarket? 277

The former dilemma is presently dominating the headlines and is set to be astronomically complicated by the explosion of AI. The latter we can see rising out of literature, clawing its way out of Hollywood, seizing us with video game consoles, engulfing ever more experiential bandwidth. And as I like to remind people, 100 years separates the Blu-Ray from the wax phonograph.

The key to blocking the possibility that the transformative potential of (2) can ameliorate the dependency in (1) lies in underscoring the continual nature of the changes asserted in (2). A cognitive ecology where basic constitutive and environmental facts are in play is no longer recognizable as a human one.

Scientific progress entails the collapse of intentional cognition.

On this view, the coupling of scientific and moral progress is a temporary affair, one doomed to last only so long as cognition itself remained outside the purview of Enlightenment cognition. So long as astronomical complexity assured that the ancestral invariances underwriting cognition remained intact, the revolution of our environments could proceed apace. Our ancestral cognitive equilibria need not be overthrown. In place of materially actionable knowledge regarding ourselves, we developed ‘humanism,’ a sop for rare stipulation and ambient disputation.

But now that our ancestral cognitive equilibria are being overthrown, we should expect scientific and moral progress will become decoupled. And I would argue that the evidence of this is becoming plainer with the passing of every year. Next week, we’ll take a look at several examples.

I fear Donald Trump may be just the beginning.


References

Appel, Jana, von der Putten, Astrid, Kramer, Nicole C. and Gratch, Jonathan 2012, ‘Does Humanity Matter? Analyzing the Importance of Social Cues and Perceived Agency of a Computer System for the Emergence of Social Reactions during Human-Computer Interaction’, in Advances in Human-Computer Interaction 2012 <https://www.hindawi.com/journals/ahci/2012/324694/ref/>

Barrett, Deirdre 2010, Supernormal Stimuli: How Primal Urges Overran Their Original Evolutionary Purpose (New York: W.W. Norton)

Binnie, Lynne and Williams, Joanne 2003, ‘Intuitive Psychology and Physics Among Children with Autism and Typically Developing Children’, Autism 7

Buchanan, Allen 2011, Better than Human: The Promise and Perils of Enhancing Ourselves (New York: Oxford University Press)

Ebstein, R.P., Israel, S, Chew, S.H., Zhong, S., and Knafo, A. 2010, ‘Genetics of human social behavior’, in Neuron 65

Fodor, Jerry A. 1987, Psychosemantics: The Problem of Meaning in the Philosophy of Mind (Cambridge, MA: The MIT Press)

Harari, Yuval 2017, Homo Deus: A Brief History of Tomorrow (New York: HarperCollins)

Heider, Fritz and Simmel, Marianne 1944, ‘An Experimental Study of Apparent Behaviour,’ in The American Journal of Psychology 57

Kamps, Frederik S., Julian, Joshua B., Battaglia, Peter, Landau, Barbara, Kanwisher, Nancy and Dilks, Daniel D. 2017, ‘Dissociating intuitive physics from intuitive psychology: Evidence from Williams syndrome’, in Cognition 168

Nass, Clifford and Moon, Youngme 2000, ‘Machines and Mindlessness: Social Responses to Computers’, Journal of Social Issues 56

Pinker, Steven 1997, How the Mind Works (New York: W.W. Norton)

—. 2018, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (New York: Viking)

Scourfield, J., Martin, N., Lewis, G. and McGuffin, P. 1999, ‘Heritability of social cognitive skills in children and adolescents’, British Journal of Psychiatry 175

Todd, P. and Gigerenzer, G. 2012, ‘What is ecological rationality?’, in Todd, P. and Gigerenzer, G. (eds.) Ecological Rationality: Intelligence in the World (Oxford: Oxford University Press) 3–30

Turkle, Sherry 2007, ‘Authenticity in the age of digital companions’, Interaction Studies 501-517

Waytz, Adam, Cacioppo, John, and Epley, Nicholas 2014, ‘Who Sees Human? The Stability and Importance of Individual Differences in Anthropomorphism’, Perspectives on Psychological Science 5

Yonck, Richard 2017, Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence (New York, NY: Arcade Publishing)

 

*Originally posted 20/03/2018

We’re Fucked. So (Now) What?

by rsbakker

“Conscious self-creation.” This is the nostrum Roy Scranton offers at the end of his now notorious piece, “We’re Doomed. Now What?” Conscious self-creation is the ‘now what,’ the imperative that we must carry across the threshold of apocalypse. After spending several weeks in the company of children I very nearly wept reading this in his latest collection of essays. I laughed instead.

I understand the logic well enough. Social coordination turns on trust, which turns on shared values, which turns on shared narratives. As Scranton writes, “Humans have survived and thrived in some of the most inhospitable environments on Earth, from the deserts of Arabia to the ice fields of the Arctic, because of this ability to organize collective life around symbolic constellations of meaning.” If our imminent self-destruction is the consequence of our traditional narratives, then we, quite obviously, need to come up with better narratives. “We need to work together to transform a global order of meaning focused on accumulation into a new order of meaning that knows the value of limits, transience, and restraint.”

If I laughed, it was because Scranton’s thesis is nowhere near so radical as his title might imply. It consists, on the one hand, in the truism that human survival depends on engineering an environmentally responsible culture, and, on the other, in the pessimistic claim that this engineering can only happen after our present (obviously irresponsible) culture has self-destructed. The ‘now what,’ in other words, amounts to the same-old same-old, only après le déluge. Just another goddamn narrative.

Scranton would, of course, take issue with my ‘just another goddamn’ modifier. As far as he’s concerned, the narrative he outlines is not just any narrative, it’s THE narrative. And, as the owner of a sophisticated philosophical position, he could endlessly argue its moral and ecological superiority… the same as any other theoretician. And therein lies the fundamental problem. Traditional philosophy is littered with bids to theorize and repair meaning. The very plasticity allowing for its rehabilitation also attests to its instability, which is to say, our prodigious ability to cook narratives up and our congenital inability to make them stick.

Thus, my sorrow, and my fear for children. Scranton, like nearly every soul writing on these topics, presumes our problem lies in the content of our narratives rather than their nature.

Why, for instance, presume meaning will survive the apocalypse? Even though he rhetorically stresses the continuity of nature and meaning, Scranton nevertheless assumes the independence of the latter. But why? If meaning is fundamentally natural, then what in its nature renders it immune to ecological degradation and collapse?

Think about the instability referenced above, the difficulty we have making our narratives collectively compelling. This wasn’t always the case. For the vast bulk of human history, our narratives were simply given. Our preliterate ancestors evolved the plasticity required to adapt their coordinating stories (over the course of generations) to the demands of countless different environments—nothing more or less. The possibility of alternative narratives, let alone ‘conscious self-creation,’ simply did not exist given the metacognitive resources at their disposal. They could change their narrative, to be sure, but incrementally, unconsciously, not so much convinced it was the only game in town as unable to report otherwise.

Despite their plasticity, our narratives provided the occluded (and therefore immovable) frame of reference for all our sociocognitive determinations. We quite simply did not evolve to systematically question the meaning of our lives. The capacity to do so seems to have required literacy, which is to say, a radical transformation of our sociocognitive environment. Writing allowed our ancestors to transcend the limits of memory, to aggregate insights, to record alternatives, to regiment and to interrogate claims. Combined with narrative plasticity, literacy begat a semantic explosion, a proliferation of communicative alternatives that continues to accelerate to this present day.

This is biologically unprecedented. Literacy, it seems safe to say, irrevocably domesticated our ancestral cognitive habitat, allowing us to farm what we once gathered. The plasticity of meaning, our basic ability to adapt our narratives, is the evolutionary product of a particular cognitive ecology, one absent writing. Literacy, you could say, constitutes a form of pollution, something that disrupts preexisting adaptive equilibria. Aside from the cognitive bounty it provides, it has the long-term effect of destabilizing narratives—all narratives.

The reason we find such a characterization jarring is that we subscribe to a narrative (Scranton’s eminently Western narrative) that values literacy as a means of generating new meaning. What fool would argue for illiteracy (and in writing no less!)? No one I know. But the fact remains that with literacy, certain ancestral functions of narrative were doomed to crash. Where once there was blind trust in our meanings, we find ourselves afflicted with questions, forced to troubleshoot what our ancestors took for granted. (This is the contradiction dwelling in the heart of all post-modernisms: the valuation of the very process devaluing meaning, crying ‘More is better!’ as those unable or unwilling to tread water drown).

The biological origins of narrative lie in shallow information cognitive ecologies, circumstances characterized by profound ignorance. What we cannot grasp we poke with sticks. Hitherto we’ve been able to exapt these capacities to great effect, raising a civilization that would make our story-telling ancestors weep, and for wonder far more than horror. But as with all heuristic systems, something must be taken for granted. Only so much can be changed before an ecology collapses altogether. And now we stand on the cusp of a communicative revolution even more profound than literacy, a proliferation, not simply of alternate narratives, but of alternate narrators.

If you sweep the workbench clean, cease looking at meaning as something somehow ‘anomalous’ or ‘transcendent,’ narrative becomes a matter of super-complicated systems, things that can be cut short by a heart attack or stroke. If you refuse to relinquish the meat (which is to say nature), then narratives, like any other biological system, require that particular background conditions obtain. Scranton’s error, in effect, is a more egregious version of the error Harari makes in Homo Deus, the default presumption that meaning somehow lies outside the circuit of ecology. Harari, recall, realizes that humanism, the ‘man-the-meaning-maker’ narrative of Western civilization, is doomed, but his low-dimensional characterization of the ‘intersubjective web of meaning’ as an ‘intermediate level of reality’ convinces him that some other collective narrative must evolve to take its place. He fails to see how the technologies he describes are actively replacing the ancestral social coordinating functions of narrative.

Scranton, perhaps hobbled by the faux-naturalism of Speculative Realism, cannot even concede the wholesale collapse of humanism, only those elements antithetical to environmental sustainability. His philosophical commitments effectively blind him to the intimate connection between the environmental crises he considers throughout the collection, and the semantic collapses he so eloquently describes in the final essay, “What is Thinking Good For?” Log onto the web, he writes, “and you’ll soon find yourself either nauseated by the vertigo that comes from drifting awash in endless waves of repetitive, clickbaity, amnesiac drek, or so benumbed and bedazzled by the sheer volume of ersatz cognition on display that you wind up giving in to the flow and welcoming your own stupefaction as a kind of relief.” Throughout this essay he hovers about, without quite touching, the idea of noise, how the technologically mediated ease of meaning production and consumption has somehow compromised our ability to reliably signal. Our capacity to arbitrate and select signals is an ecological artifact, historically dependent on the ancestral bottleneck of physical presence. Once a precious resource, like-minded commiseration has become cheap as dirt.

But since he frames the problem in the traditional register of ‘thought,’ an entity he acknowledges he cannot definitively define, he has no way of explaining what precisely is going wrong, and so finds himself succumbing to analogue nostalgia, Kantian shades. What is thinking good for? The interruption of cognitive reflex, which is to say, freedom from ‘tutelary natures.’ Thinking, genuine thinking, is a koan.

The problem, of course, is that we now know that it’s tutelary natures all the way down: deliberative interruption is itself a reflex, sometimes instinctive, sometimes learned, but dependent on heuristic cues all the same. ‘Freedom’ is a shallow information ecological artifact, a tool requiring certain kinds of environmental ignorance (an ancestral neglect structure) to reliably discharge its communicative functions. The ‘free will debate’ simply illustrates the myriad ways in which the introduction of mechanical information, the very information human sociocognition has evolved to do without, inevitably crashes the problem-solving power of sociocognition.

The point being that nothing fundamental—and certainly nothing ontological—separates the crash of thought and freedom from the crash of any other environmental ecosystem. Quite without realizing, Scranton is describing the same process in both essays, the global dissolution of ancestral ecologies, cognitive and otherwise. What he and, frankly, the rest of the planet need to realize is that between the two, the prospect of semantic apocalypse is actually both more imminent and more dire. The heuristic scripts we use to cognize biological intelligences are about to face an onslaught of evolutionarily unprecedented intelligences, ever-improving systems designed to cue human sociocognitive reflexes out of school. How long before we’re overrun by billions of ‘junk intelligences’? One decade? Two?

What happens when genuine social interaction becomes optional?

The age of AI is upon us. And even though it is undoubtedly the case that social cognition is heuristic—ecological—our blindness to our nature convinces us that we possess no such nature and so remain, in some respect (because strokes still happen), immune. Our ‘symbolic spaces’ will be deluged with invasive species, each optimized to condition us, to cue social reflexes—to “nudge” or to “improve user experience.” We’ll scoff at them, declare them stupid, even as we dutifully run through scripts they have cued.

So long as the residue of traditional humanistic philosophy persists, so long as we presume meaning exceptional, this prospect cannot even be conceived, let alone explored. The “evacuation of interiority,” as Scranton calls it, is always the other guy’s—metacognitive neglect assures experience cannot but appear fathomless, immovable. Therein lies the heartbreaking genius of our cognitive predicament: given the intractability of our biomechanical nature, our sociocognitive and metacognitive systems behave as though no such nature exists. We just… are—the deliverance of something inexplicable.

An apparent interruption in thought, in nature, something necessarily observing the ruin, rather than (as Nietzsche understood) embodying it. And so enthusiastically tearing down the last ecological staple sustaining meaning: that humans cue one another ignorant of those cues as such.

All deep environmental knowledge constitutes an unprecedented attenuation of our ancestral cognitive ecologies. Up to this point, the utilities extracted have far exceeded the utilities lost. Pinker is right in this one regard: modernity has been a fantastic deal. We could plunder the ecologies about us, while largely ignoring the ecologies between us. But now that science and technology are becoming cognitive, we ourselves are becoming the resources ripe for plunder, the ecology doomed to fragment and implode.

We’re fucked. So now what? We fight, clutch for flotsam, like any other doomed beetle caught upon the flood, not for any ‘reason,’ but because this is what beetles do, drowning.

Fight.

The Crash of Truth: A Critical Review of Post-Truth by Lee C. Mcintyre

by rsbakker

Lee Mcintyre is a philosopher of science at Boston University, and author of Dark Ages: The Case for a Science of Human Behaviour. I read Post-truth on the basis of Fareed Zakaria’s enthusiastic endorsement on CNN’s GPS, so I fully expected to like it more than I ultimately did. It does an admirable job scouting the cognitive ecology of post-truth, but because it fails to understand that ecology in ecological terms, the dynamic itself remains obscured. The best Mcintyre can do is assemble and interrogate the usual suspects. As a result, his case ultimately devolves into what amounts to yet another ingroup appeal.

As perhaps we should expect, given the actual nature of the problem.

Mcintyre begins with a transcript of an interview where CNN’s Alisyn Camerota presses Newt Gingrich at the 2016 Republican convention on Trump’s assertions regarding crime:

GINGRICH: No, but what I said is equally true. People feel more threatened.

CAMEROTA: Feel it, yes. They feel it, but the facts don’t support it.

GINGRICH: As a political candidate, I’ll go with how people feel and let you go with the theoreticians.

There’s a terror you feel in days like these. I felt that terror most recently, I think, watching Sarah Huckabee Sanders insisting that the outgoing National Security Advisor, General H. R. McMaster, had declared that no one had been tougher on Russia than Trump, after a journalist had quoted him saying almost exactly otherwise. I had been walking through the living-room and the exchange stopped me in my tracks. Never in my life had I witnessed a White House official so fecklessly, so obviously, contradict what everyone in the room had just heard. It reminded me of the psychotic episodes I witnessed as a young man working tobacco with a friend who suffered schizophrenia—only this was a social psychosis. Nothing was wrong with Sarah Huckabee Sanders. Rather than lying in malfunctioning neural machinery, this discrepancy lay in malfunctioning social machinery. She could say what she said because she knew that statements appearing incoherent to those knowing what H. R. McMaster had actually said would not appear as such to those ignorant of or indifferent to what he had actually said. She knew, in other words, that even though the journalists in the room saw the sideways view of Disney’s faux New York skyline, the view that exposes the façade,

the audience that really mattered, given the information available to their perspective, would see only the head-on view:

which is to say, something rendered coherent for neglecting that information.

The task Mcintyre sets himself in this brief treatise is to explain how such a thing could have come to pass, to explain, not how a sitting President could lie, but how he could lie without consequences. When Sarah Huckabee Sanders asserts that H. R. McMaster’s claim that the Administration is not doing enough is actually the claim that no Administration has done more, she’s relying on innumerable background facts that simply did not obtain a mere generation ago. The social machinery of truth-telling has fundamentally changed. If we take the sideways view of Disney’s faux New York skyline as the ‘deep information view,’ and the head-on view as the ‘shallow information view,’ the question becomes one of how she could trust that her audience, despite the availability of deep information, would nevertheless affirm the illusion of coherence provided by the shallow information view. As Mcintyre writes, “what is striking about the idea of post-truth is not just that truth is being challenged, but that it is being challenged as a mechanism for asserting political dominance.” Sanders, you could say, is availing herself of new mechanisms, ones antagonistic to the traditional mechanisms of communicating the semantic authority of deep information. Somehow, someway, the communication of deep information has ceased to command the kinds of general assent it once did. It’s almost preposterous on the face of it: in attributing Trump’s claims to McMaster, Sanders is gambling that somehow, either by dint of corruption, delusion, or neglect, her false claim will discharge functions ideally belonging to truthful claims, such as informing subsequent behaviour. For whatever reason, the circumstances once preventing such mass dissociations of deep and shallow information ecologies have yielded to circumstances that no longer do.

Mcintyre provides a chapter-by-chapter account of those new circumstances. For reasons that will become apparent, I’ll skip his initial chapter, which he devotes to defining ‘post-truth,’ and return to it at the end.

Science Denial

He provides clear, pithy outlines of the history of the tobacco industry’s seminal decision to argue the science, to wage what amounts to an organized disinformation campaign. He describes the ways resource companies adapted these tactics to scramble the message and undermine the authority of climate science. And by ‘disinformation,’ he means this literally, given “that even while ExxonMobil was spending money to obfuscate the facts about climate change, they were making plans to explore new drilling opportunities in the Arctic once the polar ice cap had melted.” This part of the story is pretty well-known, I think, but Mcintyre tells the tale in a way that pricks the numbness of familiarity, reminding us of the boggling scale of what these campaigns achieved: generating a political/cultural alliance that is not simply bent on hastening, but is in fact hastening, untold misery and global economic loss in the name of short-term parochial economic gain.

Cognitive Bias

He gives a curiously (given his background) two-dimensional sketch of the role cognitive bias plays in the problem, focusing primarily on cognitive dissonance, our need to minimize cognitive discrepancies, and the backfire effect, how counter-arguments actually strengthen, as opposed to mitigate, commitment to positions. (I would recommend Steven Sloman and Philip Fernbach’s The Knowledge Illusion for a more thorough consideration of the dynamics involved). He discusses research showing the ways that social identification, even cued by things so flimsy as coloured wristbands, profoundly transforms our moral determinations. But he underestimates, I think, the profound nature of what Dan Kahan and his colleagues call the “Tragedy of the Risk-Perception Commons,” the individual rationality of espousing irrational collective claims. There’s so much research directly pertinent to his thesis that he passes over in silence, especially that belonging to ecological rationality.

Traditional versus Social Media

If Mcintyre’s consideration of the cognitive science left me dissatisfied, I thoroughly enjoyed his consideration of media’s contribution to the problem of post-truth. He reminds us that the existence of entities, like Fox News, disguising advocacy as disinterested reporting, is the historical norm, not the exception. Disinterested journalistic reporting was more the result of how the AP, which served papers grinding different political axes, required stories expressing as little overt bias as possible. Rather than seize upon this ecological insight (more on this below), he narrates the gradual rise of television news from small, money-losing network endeavours to money-making enterprises culminating in CNN, Fox, MSNBC, and the return of ‘yellow journalism.’

He provides a sobering assessment of the eclipse of traditional media, and the historically unprecedented rise of social media. Here, more than anywhere else, we find Mcintyre taking steps toward a genuine cognitive ecological understanding of the problem:

“In the past, perhaps our cognitive biases were ameliorated by our interactions with others. It is ironic to think that in today’s media deluge, we could perhaps be more isolated from contrary opinion than when our ancestors were forced to live and work among other members of their tribe, village, or community, who had to interact with one another to get information.”

Since his understanding of the problem is primarily normative, however, he fails to see how cognitive reflexes that misfire in experimental contexts, and so strike observers as normative breakdowns, actually facilitate problem-solving in ancestral contexts. What he notes as ‘ironic’ should strike him (and everyone else) as astounding, as one of the doors that any adequate explanation of post-truth must kick down. But it is heartening, I have to say, to see these ideas begin to penetrate more and more brainpans. Despite the insufficiency of his theoretical tools, Mcintyre glimpses something of the way cognitive technology has impacted human cognitive ecology: “Indeed,” he writes, “what a perfect storm for the exploitation of our ignorance and cognitive biases by those with an agenda to put forward.” But even if the ‘perfect storm’ metaphor captures the complex relational nature of what’s happened, it implies that we find ourselves suffering a spot of bad luck, and nothing more.

Postmodernism

At last he turns to the role postmodernism has played in all this: this is the only chapter where I smelled a ‘legacy effect,’ the sense that the author is trying to shoehorn in some independently published material.

He acknowledges that ‘postmodernism’ is hopelessly overdetermined, but he thinks two theses consistently rise above the noise: the first is that “there is no such thing as objective truth,” and the second is “that any profession of truth is nothing more than a reflection of the political ideology of the person who is making it.”

To his credit, he’s quick to pile on the caveats, to acknowledge the need to critique both the possibility of absolute truth and the social power of scientific truth-claims. Because of this, it quickly becomes apparent that his target isn’t so much ‘postmodernism’ as it is social constructivism, the thesis that ‘truth-telling,’ far from connecting us to reality, bullies us into affirming interest-serving constructs. This, as it turns out, is the best way to think of post-truth “[i]n its purest form”: as “when one thinks that the crowd’s reaction actually does change the facts about a lie.”

In other words, for Mcintyre, post-truth is the consequence of too many people believing in social constructivism—of presuming the wrong theory of truth. His approach to the question of post-truth is that of a traditional philosopher: if the failure is one of correspondence, then the blame has to lie with anti-correspondence theories of truth. The reason Sarah Huckabee Sanders could lie about McMaster’s final speech turns on (among other things) the widespread theoretical belief that ‘there is no such thing as objective truth,’ that it’s power plays all the way down.

Thus the (rather thick) irony of citing Daniel Dennett—an interpretivist!—stating that “what the postmodernists did was truly evil” so far as they bear responsibility “for the intellectual fad that made it respectable to be cynical about truth and facts.”

The sin of the postmodern left has very, very little to do with generating semantically irresponsible theories. Dennett’s own positions are actually a good deal more radical in this regard! When it comes to the competing narratives involving ‘meaning of’ questions and answers, Dennett knows we have no choice but to advert to the ‘dramatic idiom’ of intentionality. If the problem were one of providing theoretical ammunition, then Dennett would be as much a part of the problem as Baudrillard.

And yet Mcintyre caps Dennett’s assertion by asking, “Is there more direct evidence than this?” Not a shining moment, dialectically speaking.

I agree with him that tools have been lifted from postmodernists, but they have been lifted from pragmatists (Dennett’s ilk) as well. Talk of ‘stances’ and ‘language games’ is also rife on the right! And I should know. What’s happening now is the consequence of a trend that I’ve been battling since the turn of the millennium. All my novels constitute self-conscious attempts to short-circuit the conditions responsible for ‘post-truth.’ And I’ve spent thousands of hours trolling the alt-Right (before they were called such) trying to figure out what was going on. The longest online debate I ever had was with a fundamentalist Christian who belonged to a group using Thomas Kuhn to justify their belief in the literal truth of Genesis.

Defining Post-truth

Which brings us, as promised, back to the book’s beginning, the chapter that I skipped, where, in the course of refining his definition of post-truth, Mcintyre acknowledges that no one knows what the hell truth is:

“It is important at this point to give at least a minimal definition of truth. Perhaps the most famous is that of Aristotle, who said: ‘to say of what is that it is not, or of what is not, that it is, is false, while to say of what is that it is, and of what is not that it is not, is true.’ Naturally, philosophers have fought for centuries over whether this sort of “correspondence” view is correct, whereby we judge the truth of a statement only by how well it fits reality. Other prominent conceptions of truth (coherentist, pragmatist, semantic) reflect a diversity of opinion among philosophers about the proper theory of truth, even while—as a value—there seems little dispute that truth is important.”

He provides a minimal definition with one hand—truth as correspondence—which he immediately admits is merely speculative! Truth, he’s admitting, is both indispensable and inscrutable. And yet this inscrutability, he thinks, need not hobble the attempt to understand post-truth: “For now, however, the question at hand is not whether we have the proper theory of truth, but how to make sense of the different ways that people subvert truth.”

In other words, we don’t need to know what is being subverted to agree that it is being subverted. But this goes without saying; the question is whether we need to know what is being subverted to explain what Mcintyre is purporting to explain, namely, how truth is being subverted. How do we determine what’s gone wrong with truth when we don’t even know what truth is?

Mcintyre begins Post-truth, in other words, by admitting that no canonical formulation of his explanandum exists, that it remains a matter of mere speculation. Truth remains one of humanity’s confounding questions.

But if truth is in question, then shouldn’t the blame fall upon those who question truth? Perhaps the problem isn’t this or that philosophy so much as philosophy itself. We see as much at so many turns in Mcintyre’s account:

“Why not doubt the mainstream news or embrace a conspiracy theory? Indeed, if news is just political expression, why not make it up? Whose facts should be dominant? Whose perspective is the right one? Thus is postmodernism the godfather of post-truth.”

Certainly, the latter two questions belong to philosophy as a whole, and not postmodernism in particular. To that extent, the two former questions—so far as they follow from the latter—have to be seen as falling out of philosophy in general, and not just some ‘philosophical bad apples.’

But does it make sense to blame philosophy, to suggest we should have never questioned the nature of truth? Of course not.

The real question, the one that I think any serious attempt to understand post-truth needs to reckon, is the one Mcintyre breezes by in the first chapter: Why do we find truth so difficult to understand?

On the one hand, truth seems to be crashing. On the other, we have yet to take a step beyond Aristotle when it comes to answering the question of the nature of truth. The latter is the primary obstacle, since the only way to truly understand the nature of the crash is to understand the nature of truth. Could the crash and the inscrutability of truth be related? Could post-truth somehow turn on our inability to explain truth?

Adaptive Anamorphosis

Truth lies murdered in the Calais Coach, and Mcintyre has assembled all the suspects: denialism, cognitive biases, traditional and social media, and (though he knows it not) philosophy. He knows all of them had some part to play, either directly, or as accessories, but the Calais Coach remains locked—his crime scene is a black box. He doesn’t even have a body!

For me, however, post-truth is a prediction come to pass—a manifestation of what I’ve long called the ‘semantic apocalypse.’ Far from a perfect storm of suspects coming together in unlikely ways to murder ‘all of factual reality,’ it is an inevitable consequence of our rapidly transforming cognitive ecologies.

Biologically speaking, human communication and cooperation represent astounding evolutionary achievements. Human cognition is the most complicated thing human cognition has ever encountered: only now are we beginning to reverse-engineer its nature, and to use that knowledge to engineer unprecedented cognitive artifacts. We know that cognition is structurally and dynamically composite, heavily reliant on heuristic specialization to solve its social and natural environments. The astronomical complexity of human cognition means that sociocognition and metacognition are especially reliant on composite, source-insensitive systems, devices turning on available cues that correlate, given that various hidden regularities obtain, with specific outcomes. Despite being legion, we manage to synchronize with our fellows and our environments without the least awareness of the cognitive machinery responsible.

We suffer medial neglect, a systematic insensitivity to our own nature—a nature that includes this insensitivity. Like every other organism on this planet we cognize without cognizing the concurrent act of cognition. Well, almost like every other organism. Where other species utterly depend on the reliability of their cognitive capacities, and have no way of repairing failures in various enabling—medial—systems, we do have recourse. Despite our blindness to the machinery of human cognition, we’ve developed a number of different ways to nudge that machinery—whack the TV set, you could say.

Truth-talk is one of those ways. Truth-talk allows us to minimize communicative discrepancies absent, once again, sensitivity to the complexities involved. Truth-talk provides a way to circumvent medial neglect, to resolve problems belonging to the enabling dimension of cognition despite our systematic insensitivity to the facts of that dimension. When medial issues—problems pertaining to cognitive function—arise, truth-talk allows for the metabolically inexpensive recovery of social and environmental synchronization. Incompatible claims can be sorted, at least so far as our ancestors required in prehistoric cognitive ecologies. The tribe can be healed, despite its profound ignorance of natures.

To say human cognition is heuristic is to say it is ecologically dependent, that it requires that the neglected regularities underwriting the utility of our cues remain intact. Overthrow those regularities, and you overthrow human cognition. So, where our ancestors could simply trust the systematic relationship between retinal signals and environments while hunting, we have to remove our VR goggles before raiding the fridge. Where our ancestors could simply trust the systematic relationship between the text on the page or the voice in our ear and the existence of a fellow human, we have to worry about chatbots and ‘conversational user interfaces.’ Where our ancestors could automatically depend on the systematic relationship between their ingroup peers and the environments they reported, we need to search Wikipedia—trust strangers. More generally, where our ancestors could trust the general reliability (and therefore general irrelevance) of their cognitive reflexes, we find ourselves confronted with an ever-growing and ever more complicated set of circumstances where our reflexes can no longer be trusted to solve social problems.

The tribe, it seems, cannot be healed.

And, unfortunately, this is the very problem we should expect given the technical (tactical and technological) radicalization of human cognitive ecology.* Philosophy, and now cognitive science, provide the communicative tactics required to neutralize (or ‘threshold’) truth-talk. Cognitive technologies, meanwhile, continually complicate the once direct systematic relationships between our suites of cognitive reflexes and our social and natural environments. The internet doesn’t simply render the sum of human knowledge available, it renders the sum of human rationalization available as well. The curious and the informed, meanwhile, no longer need suffer the company of the incurious and the uninformed, and vice versa. The presumptive moral superiority of the former stands revealed, and in ever greater numbers the latter counter-identify, with a violence aggravated by phenomena such as the ‘online disinhibition effect.’ (One thing McIntyre never pauses to consider is the degree to which he and his ilk are hated, despised, so much so that many see partners in traditional foreign adversaries, and think lies and slander simply redress lies and slander.) Populations begin spontaneously self-selecting. Big data identifies the vulnerable, who are showered with sociocognitive cues—atrocity tales to threaten, caricatures to amuse—engineered to provoke ingroup identification and outgroup alienation. In addition to ‘backfiring,’ counter-arguments are perceived as weapons, evidence of outgroup contempt for you and your own. And as the cognitive tactics become ever more adept at manipulating our biases, ever more scientifically informed, and as the cognitive technology becomes ever more sophisticated, ever more destructive of our ancestral cognitive habitat, the break between the two groups, we should expect, will only become more, not less, profound.

None of this is intuitive, of course. Medial neglect means reflection is source blind, and so inclined to conceive things in super-ecological terms. Thus the value of the prop building analogy I posed at the beginning.

Disney’s massive Manhattan anamorph depends on the viewer’s perspectival position within the installation to assure the occlusion of incompatible information. The degrees of cognitive freedom this position possesses—basically, how far one can wander this way and that—depends on the size and sophistication of the anamorph. The stability of illusion, in other words, entirely depends on the viewer: the deeper one investigates, the less stable the anamorph becomes. Their dependence on cognitive ‘sweet spots’ is their signature vulnerability.

The cognitive fragility of the anamorph, however, resides in the fact that we can move, while it cannot. Overcoming this fragility, then, requires 1) de-animating observation, 2) complicating the anamorph, or 3) animating the anamorph. The problem we face can be understood as the problem of adaptive cognitive anamorphosis, the way cognitive science, in combination with cognitive technology, enables the de-animation of information consumers by gaming sociocognitive cues, while both complicating and animating the artifactual anamorphic information they consume.

Once a certain threshold is crossed, Sarah Huckabee Sanders can lie without shame or apology on national television. We don’t know what we don’t know. McIntyre references the notorious Dunning-Kruger effect, the way cognitive incompetence correlates with incompetent assessments of competence, but the underlying mechanism is more basic: cognitive systems lacking access to information function independent of that information. Medial neglect assures we take the sufficiency of our perspectives for granted absent information indicating insufficiency or ‘medial misalignment.’ Trusting our biology and community is automatic. Perhaps we refuse to move at all, to even consider the information lying outside our perspectival sweet spot.

But if we do move, the anamorph, thanks to cognitive technology, adapts, the prop-facades grow prop sides, and the deep (globally synchronized) information presented above has to compete with ‘faux deep’ information. The question becomes one of who has been systematically deceived—a question that ingroup biases have already answered in illusion’s favour. We can return to our less inquisitive peers and assure them they were right all along.

What is ‘post-truth’? Insofar as it names anything, it refers to the diminishing capacity of globally, versus locally, synchronized claims to drive public discourse. It’s almost as if, via technology, nature is retooling itself to conceal itself by creating adaptive ‘faux realities.’ It’s all artifactual, all biologically ‘constructed’: the question is whether our cognitive predicament facilitates global (or deep) synchronization geared to what happens to be the case, or facilitates local (or shallow) synchronization geared to ingroup expectations and hidden political and commercial interests.

There’s no contest between spooky correspondence and spooky construction. There’s no ‘assertion of ideological supremacy,’ just cognitive critters (us) stranded in a rapidly transforming cognitive ecology that has become too sophisticated to see, and too powerful to credit. Post-truth, in other words, is an inevitable consequence of scientific progress, particularly as it pertains to cognitive technologies.

Sarah Huckabee Sanders can lie without shame or apology on national television because Trump was able to lure millions of Americans across a radically transformed (and transforming) anamorphic threshold. And we should find this terrifying. Most doomed democracies elect their executioner. In his The Death of Democracy: Hitler’s Rise to Power, Benjamin Carter Hett blames the success of Nazism on the “reality deficit” suffered by the German people. “Hostility to reality,” he writes, “translated into contempt for politics, or, rather, desire for a politics that was somehow not political: a thing that can never be” (14). But where Germany in the 1930s had every reason to despise the real, “a lost war that had cost the nation almost two million of her sons, a widely unpopular revolution, a seemingly unjust peace settlement, and economic chaos accompanied by huge social and technological change” (13), America finds itself suffering only the latter. The difference lies in the way this social and technological change allows for the cultivation and exploitation of such hostility in an age of unparalleled peace and prosperity. In the German case, the reality itself drove the populace to embrace atavistic political fantasies. Thanks to technology, we can now achieve the same effect using only human cognitive shortcomings and corporate greed.

Buckle up. No matter what happens to Trump, the social dysfunction he expresses belongs to the very structure of our civilization. Competition for the market he’s identified is only going to intensify.

 

Enlightenment How? Omens of the Semantic Apocalypse

by rsbakker

“In those days the world teemed, the people multiplied, the world bellowed like a wild bull, and the great god was aroused by the clamor. Enlil heard the clamor and he said to the gods in council, “The uproar of mankind is intolerable and sleep is no longer possible by reason of the babel.” So the gods agreed to exterminate mankind.” –The Epic of Gilgamesh

We know that human cognition is largely heuristic, and as such dependent upon cognitive ecologies. We know that the technological transformation of those ecologies generates what Pinker calls ‘bugs,’ heuristic miscues due to deformations in ancestral correlative backgrounds. In ancestral times, our exposure to threat-cuing stimuli possessed a reliable relationship to actual threats. Not so now, thanks to things like the nightly news, which generates (via the availability heuristic, Pinker suggests (42)) exaggerated estimations of threat.

The toll of scientific progress, in other words, is cognitive ecological degradation. So far that degradation has left the problem-solving capacities of intentional cognition largely intact: the very complexity of the systems requiring intentional cognition has hitherto rendered cognition largely impervious to scientific renovation. Throughout the course of revolutionizing our environments, we have remained a blind-spot, the last corner of nature where traditional speculation dares contradict the determinations of science.

This is changing.

We see animals in charcoal across cave walls so easily because our visual systems leap to conclusions on the basis of so little information. The problem is that ‘so little information’ also means so easily reproduced. The world is presently engaged in a mammoth industrial research program bent on hacking every cue-based cognitive reflex we possess. More and more, the systems we evolved to solve our fellow human travelers will be contending with artificial intelligences dedicated to commercial exploitation. ‘Deep information,’ meanwhile, is already swamping the legal system, even further problematizing the folk conceptual (shallow information) staples that ground the system’s self-understanding. Creeping medicalization continues unabated, slowly scaling back warrant for things like character judgment in countless different professional contexts.

Now that the sciences are colonizing the complexities of experience and cognition, we can see the first clear-cut omens of the semantic apocalypse.

 

Crash Space

He assiduously avoids the topic in Enlightenment Now, but in The Blank Slate, Pinker devotes several pages to deflating the arch-incompatibility between natural and intentional modes of cognition, the problem of free will:

“But how can we have both explanation, with its requirement of lawful causation, and responsibility, with its requirement of free choice? To have them both we don’t need to resolve the ancient and perhaps irresolvable antinomy between free will and determinism. We have only to think clearly about what we want the notion of responsibility to achieve.” 180

He admits there’s no getting past the ‘conflict of intuitions’ underwriting the debate. Since he doesn’t know what intentional and natural cognition amount to, he doesn’t understand their incompatibility, and so proposes we simply side-step the problem altogether by redefining ‘responsibility’ to mean what we need it to mean—the same kind of pragmatic redefinition proposed by Dennett. He then proceeds to adduce examples of ‘clear thinking’ by offering guesses that recast ‘holding responsible’ as deterrence, which is more scientifically tractable. “I don’t claim to have solved the problem of free will, only to show that we don’t need to solve it to preserve personal responsibility in the face of an increasing understanding of the causes of behaviour” (185).

Here we can see how profoundly Pinker (as opposed to Nietzsche and Adorno) misunderstands the profundity of Enlightenment disenchantment. The problem isn’t that one can’t cook up alternate definitions of ‘responsibility,’ the problem is that anyone can, endlessly. ‘Clear thinking’ is liable to serve Pinker about as well as ‘clear and distinct ideas’ served Descartes, which is to say, as more grease for the speculative mill. No matter how compelling your particular instrumentalization of ‘responsibility’ seems, it remains every bit as theoretically underdetermined as any other formulation.

There’s a reason such exercises in pragmatic redefinition stall in the speculative ether. Intentional and mechanical cognitive systems are not optional components of human cognition, nor are the intuitions we are inclined to report. Moreover, as we saw in the previous post, intentional cognition generates reliable predictions of system behaviour absent access to the actual sources of that behaviour. Intentional cognition is source-insensitive. Natural cognition, on the other hand, is source-sensitive: it generates predictions of system behaviour via access to the actual sources of that behaviour.

Small wonder, then, that our folk intentional intuitions regularly find themselves scuttled by scientific explanation. ‘Free will,’ on this account, is ancestral lemonade, a way to make the best out of metacognitive lemons, namely, our blindness to the sources of our thought and decisions. To the degree it relies upon ancestrally available (shallow) saliencies, any causal (deep) account of those sources is bound to ‘crash’ our intuitions regarding free will. The free will debate that Pinker hopes to evade with speculation can be seen as a kind of crash space, the point where the availability of deep information generates incompatible causal intuitions and intentional intuitions.

The confusion here isn’t (as Pinker thinks) ‘merely conceptual’; it’s a bona fide, material consequence of the Enlightenment, a cognitive version of a visual illusion. Too much information of the wrong kind crashes our radically heuristic modes of cognizing decisions. Stipulating definitions, not surprisingly, solves nothing insofar as it papers over the underlying problem—this is why it merely adds to the literature. Responsibility-talk cues the application of intentional cognitive modes; it’s the incommensurability of these modes with causal cognition that’s the problem, not our lexicons.

 

Cognitive Information

Consider the laziness of certain children. Should teachers be allowed to hold students responsible for their academic performance? As the list of learning disabilities grows, incompetence becomes less a matter of ‘character’ and more a matter of ‘malfunction,’ of providing compensatory environments. Given that all failures of competence redound on cognitive infelicities of some kind, and given that each and every one of these infelicities can and will be isolated and explained, should we ban character judgments altogether? Should we regard exhortations to ‘take responsibility’ as forms of subtle discrimination, given that executive functioning varies from student to student? Is treating children like (sacred) machinery the only ‘moral’ thing to do?

So far at least. Causal explanations of behaviour cue intentional exemptions: our ancestral thresholds for exempting behaviour from moral cognition served larger, ancestral social equilibria. Every etiological discovery cues that exemption in an evolutionarily unprecedented manner, resulting in what Dennett calls “creeping exculpation,” the gradual expansion of morally exempt behaviours. Once a learning impediment has been discovered, it ‘just is’ immoral to hold those afflicted responsible for their incompetence. (If you’re anything like me, simply expressing the problem in these terms rankles!) Our ancestors, resorting to systems adapted to resolving social problems given only the merest information, had no problem calling children lazy, stupid, or malicious. Were they being witlessly cruel doing so? Well, it certainly feels like it. Are we more enlightened, more moral, for recognizing the limits of that system, and curtailing the context of application? Well, it certainly feels like it. But then how do we justify our remaining moral cognitive applications? Should we avoid passing moral judgment on learners altogether? It’s beginning to feel like it. Is this itself moral?

This is theoretical crash space, plain and simple. Staking out an argumentative position in this space is entirely possible—but doing so merely exemplifies, as opposed to solves, the dilemma. We’re conscripting heuristic systems adapted to shallow cognitive ecologies to solve questions involving the impact of information they evolved to ignore. We can no more resolve our intuitions regarding these issues than we can stop Necker Cubes from spoofing visual cognition.

The point here isn’t that gerrymandered solutions aren’t possible, it’s that gerrymandered solutions are the only solutions possible. Pinker’s own ‘solution’ to the debate (see also, How the Mind Works, 54-55) can be seen as a symptom of the underlying intractability, the straits we find ourselves in. We can stipulate, enforce solutions that appease this or that interpretation of this or that displaced intuition: teachers who berate students for their laziness and stupidity are not long for their profession—at least not anymore. As etiologies of cognition continue to accumulate, as more and more deep information permeates our moral ecologies, the need to revise our stipulations, to engineer them to discharge this or that heuristic function, will continue to grow. Free will is not, as Pinker thinks, “an idealization of human beings that makes the ethics game playable” (HMW 55); it is (as Bruce Waller puts it) stubborn, a cognitive reflex belonging to a system of cognitive reflexes belonging to intentional cognition more generally. Foot-stomping does not change how those reflexes are cued in situ. The free-will crash space will continue to expand, no matter how stubbornly Pinker insists on this or that redefinition of this or that term.

We’re not talking about a fall from any ‘heuristic Eden,’ here, an ancestral ‘golden age’ where our instincts were perfectly aligned with our circumstances—the sheer granularity of moral cognition, not to mention the confabulatory nature of moral rationalization, suggests that it has always slogged through interpretative mire. What we’re talking about, rather, is the degree to which moral cognition turns on neglecting certain kinds of natural information. Or conversely, the degree to which deep natural information regarding our cognitive capacities displaces and/or crashes once straightforward moral intuitions, like the laziness of certain children.

Or the need to punish murderers…

Two centuries ago a murderer suffering irregular sleep characterized by vocalizations and sometimes violent actions while dreaming would have been prosecuted to the full extent of the law. Now, however, such a murderer would be diagnosed as suffering an episode of ‘homicidal somnambulism,’ and could very likely go free. Mammalian brains do not fall asleep or awaken all at once. For some yet-to-be-determined reason, the brains of certain individuals (mostly men older than 50) suffer a form of partial arousal causing them to act out their dreams.

More and more, neuroscience is making an impact in American courtrooms. Nita Farahany (2016) has found that between 2005 and 2012 the number of judicial opinions referencing neuroscientific evidence has more than doubled. She also found a clear correlation between the use of such evidence and less punitive outcomes—especially when it came to sentencing. Observers in the burgeoning ‘neurolaw’ field think that for better or worse, neuroscience is firmly entrenched in the criminal justice system, and bound to become ever more ubiquitous.

Not only are responsibility assessments being weakened as neuroscientific information accumulates, social risk assessments are being strengthened (Gkotsi and Gasser 2016). So-called ‘neuroprediction’ is beginning to revolutionize forensic psychology. Studies suggest that inmates with lower levels of anterior cingulate activity are approximately twice as likely to reoffend as those with relatively higher levels of activity (Aharoni et al 2013). Measurements of ‘early sensory gating’ (attentional filtering) predict the likelihood that individuals suffering addictions will abandon cognitive behavioural treatment programs (Steele et al 2014). Reduced gray matter volumes in the medial and temporal lobes identify youth prone to commit violent crimes (Cope et al 2014). ‘Enlightened’ metrics assessing recidivism risks already exist within disciplines such as forensic psychiatry, of course, but “the brain has the most proximal influence on behavior” (Gaudet et al 2016). Few scientific domains better illustrate the problems secondary to deep environmental information than the issue of recidivism. Given the high social cost of criminality, the ability to predict ‘at risk’ individuals before any crime is committed is sure to pay handsome preventative dividends. But what are we to make of justice systems that parole offenders possessing one set of ‘happy’ neurological factors early, while leaving others possessing an ‘unhappy’ set to serve out their entire sentence?
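To make the arithmetic behind that question concrete, here is a deliberately toy sketch in Python. The base rate, the parole threshold, and the decision rule are all hypothetical stand-ins of my own; only the rough ‘twice as likely’ ratio comes from the passage above.

```python
# Toy sketch (hypothetical numbers): how a neuroprediction-style risk score
# can turn a neurological deficit into a harsher outcome at parole time.
ASSUMED_BASE_RATE = 0.30      # hypothetical re-arrest rate for the higher-ACC group
RISK_RATIO_LOW_ACC = 2.0      # "approximately twice as likely" (Aharoni et al 2013)
PAROLE_THRESHOLD = 0.50       # hypothetical policy cutoff for denying early parole

def predicted_risk(low_acc_activity: bool) -> float:
    """Return a crude predicted re-arrest probability for an inmate."""
    multiplier = RISK_RATIO_LOW_ACC if low_acc_activity else 1.0
    return min(1.0, ASSUMED_BASE_RATE * multiplier)

for low_acc in (False, True):
    risk = predicted_risk(low_acc)
    decision = "deny early parole" if risk > PAROLE_THRESHOLD else "grant early parole"
    print(f"low ACC activity: {low_acc}  predicted risk: {risk:.2f}  -> {decision}")
```

Run as written, the ‘unhappy’ neurological profile alone flips the decision, which is precisely the inversion described below.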

Nothing, I think, captures the crash of ancestral moral intuitions in modern, technological contexts quite so dramatically as forensic danger assessments. Consider, for instance, the way deep information in this context has the inverse effect of deep information in the classroom. Since punishment is indexed to responsibility, we generally presume those bearing less responsibility deserve less punishment. Here, however, it’s those bearing the least responsibility, those possessing ‘social learning disabilities,’ who ultimately serve the longest. The very deficits that mitigate responsibility before conviction actually aggravate punishment subsequent to conviction.

The problem is fundamentally cognitive, and not legal, in nature. As countless bureaucratic horrors make plain, procedural decision-making need not report as morally rational. We would be mad, on the one hand, to overlook any available etiology in our original assessment of responsibility. We would be mad, on the other hand, to overlook any available etiology in our subsequent determination of punishment. Ergo, less responsibility often means more punishment.

Crash.

The point, once again, is to describe the structure and dynamics of our collective sociocognitive dilemma in the age of deep environmental information, not to eulogize ancestral cognitive ecologies. The more we disenchant ourselves, the more evolutionarily unprecedented information we have available, the more problematic our folk determinations become. Demonstrating this point demonstrates the futility of pragmatic redefinition: no matter how Pinker or Dennett (or anyone else) rationalizes a given, scientifically-informed definition of moral terms, it will provide no more than grist for speculative disputation. We can adopt any legal or scientific operationalization we want (see Parmigiani et al 2017); so long as responsibility talk cues moral cognitive determinations, however, we will find ourselves stranded with intuitions we cannot reconcile.

Considered in the context of politics and the ‘culture wars,’ the potentially disastrous consequences of these kinds of trends become clear. One need only think of the oxymoronic notion of ‘commonsense’ criminology, which amounts to imposing moral determinations geared to shallow cognitive ecologies upon criminal contexts now possessing numerous deep information attenuations. Those who, for whatever reason, escaped the education system with something resembling an ancestral ‘neglect structure’ intact, those who have no patience for pragmatic redefinitions or technical stipulations will find appeals to folk intuitions every bit as convincing as those presiding over the Salem witch trials in 1692. Those caught up in deep information environments, on the other hand, will be ever more inclined to see those intuitions as anachronistic, inhumane, immoral—unenlightened.

Given the relation between education and information access and processing capacity, we can expect that education will increasingly divide moral attitudes. Likewise, we should expect a growing sociocognitive disconnect between expert and non-expert moral determinations. And given cognitive technologies like the internet, we should expect this dysfunction to become even more profound still.

 

Cognitive Technology

Given the power of technology to cue intergroup identifications, the internet was—and continues to be—hailed as a means of bringing humanity together, a way of enacting the universalistic aspirations of humanism. My own position—one foot in academe, another foot in consumer culture—afforded me a far different perspective. Unlike academics, genre writers rub shoulders with all walks of life, and often find themselves debating outrageously chauvinistic views. I realized quite quickly that the internet had rendered rationalizations instantly available, that it amounted to pouring marbles across the floor of ancestral social dynamics. The cost of confirmation had plummeted to zero. Prior to the internet, we had to test our more extreme chauvinisms against whomever happened to be available—which is to say, people who would be inclined to disagree. We had to work to indulge our stone-age weaknesses in post-war 20th century Western cognitive ecologies. No more. Add to this phenomena such as the online disinhibition effect, as well as the sudden visibility of ingroup intellectual piety, and the growing extremity of counter-identification struck me as inevitable. The internet was dividing us into teams. In such an age, I realized, the only socially redemptive art was art that cut against this tendency, art that genuinely spanned ingroup boundaries. Literature, as traditionally understood, had become a paradigmatic expression of the tribalism presently engulfing us. Epic fantasy, on the other hand, still possessed the relevance required to inspire book burnings in the West.

(The past decade has ‘rewarded’ my turn-of-the-millennium fears—though in some surprising ways. The greatest attitudinal shift in America, for instance, has been progressive: it has been liberals, and not conservatives, who have most radically changed their views. The rise of reactionary sentiment and populism is presently rewriting European politics—and the age of Trump has all but overthrown the progressive political agenda in the US. But the role of the internet and social media in these phenomena remains a hotly contested one.)

The earlier promoters of the internet had banked on the notional availability of intergroup information to ‘bring the world closer together,’ not realizing the heuristic reliance of human cognition on differential information access. Ancestrally, communicating ingroup reliability trumped communicating environmental accuracy, stranding us with what Pinker (following Kahan 2011) calls the ‘tragedy of the belief commons’ (Enlightenment Now, 358), the individual rationality of believing collectively irrational claims—such as, for instance, the belief that global warming is a liberal myth. Once falsehoods become entangled with identity claims, they become the yardstick of true and false, thus generating the terrifying spectacle we now witness on the evening news.

The provision of ancestrally unavailable social information is one thing, so long as it is curated—censored, in effect—as it was in the mass media age of my childhood. Confirmation biases have to swim upstream in such cognitive ecologies. Rendering all ancestrally unavailable social information available, on the other hand, allows us to indulge our biases, to see only what we want to see, to hear only what we want to hear. Where ancestrally, we had to risk criticism to secure praise, no such risks need be incurred now. And no surprise, we find ourselves sliding back into the tribalistic mire, arguing absurdities haunted—tainted—by the death of millions.

Jonathan Albright, the research director at the Tow Center for Digital Journalism at Columbia, has found that the ‘fake news’ phenomenon, as the product of a self-reinforcing technical ecosystem, has actually grown worse since the 2016 election. “Our technological and communication infrastructure, the ways we experience reality, the ways we get news, are literally disintegrating,” he recently confessed in a NiemanLab interview. “It’s the biggest problem ever, in my opinion, especially for American culture.” As Alexis Madrigal writes in The Atlantic, “the very roots of the electoral system—the news people see, the events they think happened, the information they digest—had been destabilized.”

The individual cost of fantasy continues to shrink, even as the collective cost of deception continues to grow. The ecologies once securing the reliability of our epistemic determinations, the invariants that our ancestors took for granted, are being levelled. Our ancestral world was one where seeking risked aversion, a world where praise and condemnation alike had to brave condemnation, where lazy judgments were punished rather than rewarded. Our ancestral world was one where geography and the scarcity of resources forced permissives and authoritarians to intermingle, compromise, and cooperate. That world is gone, leaving the old equilibria to unwind in confusion, a growing social crash space.

And this is only the beginning of the cognitive technological age. As Tristan Harris points out, social media platforms, given their commercial imperatives, cannot but engineer online ecologies designed to exploit the heuristic limits of human cognition. He writes:

“I learned to think this way when I was a magician. Magicians start by looking for blind spots, edges, vulnerabilities and limits of people’s perception, so they can influence what people do without them even realizing it. Once you know how to push people’s buttons, you can play them like a piano.”

More and more of what we encounter online is dedicated to various forms of exogenous attention capture, maximizing the time we spend on the platform, so maximizing our exposure not just to advertising, but to hidden metrics, algorithms designed to assess everything from our likes to our emotional well-being. As with instances of ‘forcing’ in the performance of magic tricks, the fact of manipulation escapes our attention altogether, so we always presume we could have done otherwise—we always presume ourselves ‘free’ (whatever this means). We exhibit what Clifford Nass, a pioneer in human-computer interaction, calls ‘mindlessness,’ the blind reliance on automatic scripts. To the degree that social media platforms profit from engaging your attention, they profit from hacking your ancestral cognitive vulnerabilities, exploiting our shared neglect structure. They profit, in other words, from transforming crash spaces into cheat spaces.

With AI, we are set to flood human cognitive ecologies with systems designed to actively game the heuristic nature of human social cognition, cuing automatic responses based on boggling amounts of data and the capacity to predict our decisions better than our intimates, and soon, better than we can ourselves. And yet, as the authors of the 2017 AI Index report state, “we are essentially ‘flying blind’ in our conversations and decision-making related to AI.” A blindness we’re largely blind to. Pinker spends ample time domesticating the bogeyman of superintelligent AI (296-298), but he completely neglects this far more immediate and retail dimension of our cognitive technological dilemma.

Consider the way humans endure one another as much as they need one another: the problem is that the cues signaling social punishment and reward are easy to trigger out of school. We’ve already crossed the bourne where ‘improving the user experience’ entails substituting artificial for natural social feedback. Noticed the plethora of nonthreatening female voices at all? The promise of AI is the promise of countless artificial friends, voices that will ‘understand’ your plight, your grievances, in some respects better than you do yourself. The problem, of course, is that they’re artificial, which is to say, not your friend at all.

Humans deceive and manipulate one another all the time, of course. And false AI friends don’t rule out true AI defenders. But the former merely describes the ancestral environments shaping our basic heuristic tool box. And the latter simply concedes the fundamental loss of those cognitive ecologies. The more prosthetics we enlist, the more we complicate our ecology, the more mediated our determinations become, the less efficacious our ancestral intuitions become. The more we will be told to trust to gerrymandered stipulations.

Corporate simulacra are set to deluge our homes, each bent on cuing trust. We’ve already seen how the hypersensitivity of intentional cognition renders us liable to hallucinate minds where none exist. The environmental ubiquity of AI amounts to the environmental ubiquity of systems designed to exploit granular sociocognitive systems tuned to solve humans. The AI revolution amounts to saturating human cognitive ecology with invasive species, billions of evolutionarily unprecedented systems, all of them camouflaged and carnivorous. It represents—obviously, I think—the single greatest cognitive ecological challenge we have ever faced.

What does ‘human flourishing’ mean in such cognitive ecologies? What can it mean? Pinker doesn’t know. Nobody does. He can only speculate in an age when the gobsmacking power of science has revealed his guesswork for what it is. This was why Adorno referred to the possibility of knowing the good as the ‘Messianic moment.’ Until that moment comes, until we find a form of rationality that doesn’t collapse into instrumentalism, we have only toothless guesses, allowing the pointless optimization of appetite to command all. It doesn’t matter whether you call it the will to power or identity thinking or negentropy or selfish genes or what have you, the process is blind and it lies entirely outside good and evil. We’re just along for the ride.

 

Semantic Apocalypse

Human cognition is not ontologically distinct. Like all biological systems, it possesses its own ecology, its own environmental conditions. And just as scientific progress has brought about the crash of countless ecosystems across this planet, it is poised to precipitate the crash of our shared cognitive ecology as well, the collapse of our ability to trust and believe, let alone to choose or take responsibility. Once every suboptimal behaviour has an etiology, what then? Once every one of us has artificial friends, heaping us with praise, priming our insecurities, doing everything they can to prevent non-commercial—ancestral—engagements, what then?

‘Semantic apocalypse’ is the dramatic term I coined to capture this process in my 2008 novel, Neuropath. Terminology aside, the crashing of ancestral (shallow information) cognitive ecologies is entirely of a piece with the Anthropocene, yet one more way that science and technology are disrupting the biology of our planet. This is a worst-case scenario, make no mistake. I’ll be damned if I see any way out of it.

Humans cognize themselves and one another via systems that take as much for granted as they possibly can. This is a fact. Given this, it is not only possible, but exceedingly probable, that we would find squaring our intuitive self-understanding with our scientific understanding impossible. Why should we evolve the extravagant capacity to intuit our nature beyond the demands of ancestral life? The shallow cognitive ecology arising out of those demands constitutes our baseline self-understanding, one that bears the imprimatur of evolutionary contingency at every turn. There’s no replacing this system short of replacing our humanity.

Thus the ‘worst’ in ‘worst case scenario.’

There will be a great deal of hand-wringing in the years to come. Numberless intentionalists with countless competing rationalizations will continue to apologize (and apologize) while the science trundles on, crashing this bit of traditional self-understanding and that, continually eroding the pilings supporting the whole. The pieties of humanism will be extolled and defended with increasing desperation, whole societies will scramble, while hidden behind the endless assertions of autonomy, beneath the thundering bleachers, our fundamentals will be laid bare and traded for lucre.

On Artificial Belonging: How Human Meaning is Falling between the Cracks of the AI Debate

by rsbakker

I hate people. Or so I used to tell myself in the thick of this or that adolescent crowd. Like so many other teens, my dawning social awareness occasioned not simply anxiety, but agony. Everyone else seemed to have the effortless manner, the well-groomed confidence, that I could only pretend to have. Lord knows I would try to tell amusing anecdotes, to make rooms boom with humour and admiration, but my voice would always falter, their attention would always wither, and I would find myself sitting alone with my butterflies. I had no choice but to hate other people: I needed them too much, and they needed me not at all. Never in my life have I felt so abandoned, so alone, as I did those years. Rarely have I felt such keen emotional pain.

Only later would I learn that I was anything but alone, that a great number of my peers felt every bit as alienated as I did. Adolescence represents a crucial juncture in the developmental trajectory of the human brain, the time when the neurocognitive tools required to decipher and navigate the complexities of human social life gradually come online. And much as the human immune system requires real-world feedback to discriminate between pathogens and allergens, human social cognition requires the pain of social failure to learn the secrets of social success.

Humans, like all other forms of life on this planet, require certain kinds of ecologies to thrive. As so-called ‘feral children’ dramatically demonstrate, the absence of social feedback at various developmental junctures can have catastrophic consequences.

So what happens when we introduce artificial agents into our social ecology? The pace of development is nothing short of boggling. We are about to witness a transformation in human social ecology without evolutionary let alone historical precedent. And yet the debate remains fixated on jobs or the prospects of apocalyptic superintelligences.

The question we really need to be asking is what happens when we begin talking to our machines more than to each other. What does it mean to dwell in social ecologies possessing only the appearance of love and understanding?

“Hell,” as Sartre famously wrote, “is other people.” Although the sentiment strikes a chord in most everyone, the facts of the matter are somewhat more complex. The vast majority of those placed in prolonged solitary confinement, it turns out, suffer a mixture of insomnia, cognitive impairment, depression, and even psychosis. The effects of social isolation are so dramatic, in fact, that the research has occasioned a worldwide condemnation of punitive segregation. Hell, if anything, would seem to be the absence of other people.

The reason for this is that we are a fundamentally social species, ‘eusocial’ in a manner akin to ants or bees, if E.O. Wilson is to be believed. To understand just how social we are, you need only watch the famous Heider-Simmel illusion, a brief animation portraying the movements of a small circle, a small triangle, and a larger triangle, in and about a motionless, hollow rectangle. Objectively speaking, all one sees are a collection of shapes moving relative to one another and the hollow rectangle. But despite the radical absence of information, nearly everyone watching the animation sees a little soap opera, usually involving the big triangle attempting to prevent the union of the small triangle and the circle.

This leap from shapes to soap operas reveals, in dramatic fashion, just how little information we require to draw enormous social conclusions. Human social cognition is very easy to trigger out of school, as our ancient tendency to ‘anthropomorphize’ our natural surroundings shows. Not only are we prone to see faces in things like flaking paint or water stains, we’re powerfully primed to sense minds as well—so much so that segregated inmates often begin perceiving them regardless. As Brian Keenan, who was held by Islamic Jihad from 1986 to 1990, says of the voices he heard, “they were in the room, they were in me, they were coming from me but they were audible to no one else but me.”

What does this have to do with the impact of AI? More than anyone has yet imagined.



The problem, in a nutshell, is that other people aren’t so much heaven or hell as both. Solitary confinement, after all, refers to something done to people by other people. The argument to redefine segregation as torture finds powerful support in evidence showing that social exclusion activates the same regions of the brain as physical pain. At some point in our past, it seems, our social attachment systems coopted the pain system to motivate prosocial behaviors. As a result, the mere prospect of exclusion triggers analogues of physical suffering in human beings.

But as significant as this finding is, the experimental props used to derive these findings are even more telling. The experimental paradigm typically used to neuroimage social rejection turns on a strategically deceptive human-computer interaction, or HCI. While entombed in an fMRI, subjects are instructed to play an animated three-way game of catch—called ‘Cyberball’—with what they think are two other individuals on the internet, but which is in fact a program designed to initially include, then subsequently exclude, the subject. As the other ‘players’ begin throwing more and more to each other, the subject begins to feel real as opposed to metaphorical pain. The subjects, in other words, need only be told that other minds control the graphics on the screen before them, and the scant information provided by those graphics triggers real-world pain. A handful of pixels and a little fib is all that’s required to cue the pain of social rejection.
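For readers who want to see just how little machinery the deception requires, here is a minimal sketch of a Cyberball-style throw schedule in Python. The player names, trial counts, and phase lengths are my own illustrative assumptions, not the parameters of any published study.

```python
import random

def cyberball_schedule(n_trials=60, inclusion_trials=20, seed=0):
    """Simulate a Cyberball-style game: the 'subject' is included at first,
    then systematically excluded by the two scripted players."""
    rng = random.Random(seed)
    holder = "bot_a"
    throws = []
    for trial in range(n_trials):
        if trial < inclusion_trials or holder == "subject":
            # Inclusion phase (or the subject still holds the ball):
            # throw to either of the other two players at random.
            options = [p for p in ("subject", "bot_a", "bot_b") if p != holder]
        else:
            # Exclusion phase: the scripted players only throw to each other.
            options = [p for p in ("bot_a", "bot_b") if p != holder]
        receiver = rng.choice(options)
        throws.append((holder, receiver))
        holder = receiver
    return throws

schedule = cyberball_schedule()
received = sum(1 for _, receiver in schedule if receiver == "subject")
print(f"Subject received {received} of {len(schedule)} throws")
```

Everything the subject experiences is a few moving dots governed by a schedule like this one; the rest is supplied by social cognition.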

As one might imagine, Silicon Valley has taken notice.

The HCI field finds its roots in the 1960s with the research of Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory. Even given the rudimentary computing power at his disposal, his ‘Eliza’ program, which relied on simple matching and substitution protocols to generate questions, was able to cue strong emotional reactions in many subjects. As it turns out, people regularly exhibit what the late Clifford Nass called ‘mindlessness,’ the reliance on automatic scripts, when interacting with artificial agents. Before you scoff at the notion, recall the 2015 Ashley Madison hack, and the subsequent revelation that the site had deployed more than 70,000 bots to conjure the illusion of endless extramarital possibility. These bots, like Eliza, were simple, mechanical affairs, but given the context of Ashley Madison, their behaviour apparently convinced millions of men that some kind of (promising) soap opera was afoot.
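To give a sense of how little ‘matching and substitution’ it takes, here is a minimal Eliza-style responder in Python. The handful of rules and pronoun reflections below are my own illustrative stand-ins, nothing like Weizenbaum’s actual script.

```python
import re

# A few illustrative pattern/response rules plus pronoun "reflection";
# Weizenbaum's real script was larger, but no less mechanical.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are",
               "you": "I", "your": "my"}
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i need (.*)", "What would it mean to you to get {0}?"),
    (r"because (.*)", "Is that the real reason?"),
    (r"(.*)", "Tell me more."),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo sounds attentive."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(utterance: str) -> str:
    text = utterance.lower().strip().rstrip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel nobody listens to me"))
# -> Why do you feel nobody listens to you?
```

That a routine this thin can cue the feeling of being understood is the whole point: the work is being done by our social reflexes, not by the machine.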

The great paradox, of course, is that those automatic scripts belong to the engine of ‘mindreading,’ our ability to predict, explain, and manipulate our fellow human beings, not to mention ourselves. They only stand revealed as mechanical, ‘mindless,’ when tasked to cognize something utterly without evolutionary precedent: an artificial agent. Our power to peer into one another’s souls, in other words, becomes little more than a grab-bag of exploitable reflexes in the presence of AI.

The claim boggles, I admit, but from a Darwinian perspective, it’s hard to see how things could be otherwise. Our capacity to solve one another is largely a product of our hunter-gatherer past, which is to say, environments where human intelligence was the only game in town. Why evolve the capacity to solve for artificial intelligences, let alone ones possessing Big Data resources? The cues underwriting human social cognition may seem robust, but this is an artifact of ecological stability, the fact that our blind trust in our shared social biology has served so far. We always presume our environments indestructible. As the species responsible for the ongoing Anthropocene extinction, we have a long history of recognizing ecological peril only after the fact.

Sherry Turkle, MIT professor and eminent author of Alone Together, has been warning of what she calls “Darwinian buttons” for over a decade now. Despite the explosive growth in Human-Computer Interaction research, her concerns remain, at best, a passing consideration. Because such buttons belong to our unconscious, automatic cognitive systems, we have no conscious awareness that they even exist. They are, to put it mildly, easy to overlook. Add to this the overwhelming institutional and economic incentive to exploit these cues, and the AI community’s failure to consider Turkle’s misgivings seems all but inevitable.

Like almost all scientists, researchers in the field harbor only the best of intentions, and the point of AI, as they see it, is to empower consumers, to give them what they want. The vast bulk of ongoing research in Human-Computer Interaction is aimed at “improving the user experience,” identifying what cues trust instead of suspicion, attachment instead of avoidance. Since trust requires competence, a great deal of the research remains focused on developing the core cognitive competencies of specialized AI systems—and recent advances on this front have been nothing if not breathtaking. But the same can be said regarding interpersonal competencies as well—enough to inspire Clifford Nass and Corina Yen to write The Man Who Lied to His Laptop, a book touted as the How to Win Friends and Influence People of the 21st century. In the course of teaching machines how to better push our buttons, we’re learning how to better push them as well.

Precisely because it is so easily miscued, human social cognition depends on trust. Shapes, after all, are cheap, while soap operas represent a potential goldmine. This explains our powerful, hardwired penchant for tribalism: the intimacy of our hunter-gatherer past all but assured trustworthiness, providing a cheap means of nullifying our vulnerability to social deception. When Trump decries ‘fake news,’ for instance, what he’s primarily doing is signaling group membership. He understands, the instinctive way we all understand, that the best way to repudiate damaging claims is to circumvent them altogether, and focus on the group membership of the claimer. Trust, the degree to which we can take one another for granted, is the foundation of cooperative interaction.

We are about to be deluged with artificial friends. In a recent roundup of industry forecasts, Forbes reports that AI-related markets are already growing, and expected to continue growing, by more than 50% per annum. Just last year, Microsoft launched its Bot Framework service, a public platform for creating ‘conversational user interfaces’ for a potentially endless variety of commercial purposes, all of it turning on Microsoft’s rapidly advancing AI research. “Build a great conversationalist,” the site urges. “Build and connect intelligent bots to interact with your users naturally wherever they are…” Of course, the term “naturally,” here, refers to the seamless way these inhuman systems cue our human social cognitive systems. Learning how to tweak, massage, and push our Darwinian buttons has become an out-and-out industrial enterprise.

As mentioned above, Human-Human Interaction consists of pushing these buttons all the time, prompting automatic scripts that prompt further automatic scripts, with only the rare communicative snag giving us pause for genuine conscious deliberation. It all works simply because our fellow humans comprise the ancestral ecology of social cognition. As it stands, cuing social cognitive reflexes out of school is largely the province of magicians, con artists, and political demagogues. Seen in this light, the AI revolution looks less a cornucopia of marvels than the industrialized unleashing of endless varieties of invasive species—an unprecedented overthrow of our ancestral social cognitive habitats.

A habitat that, arguably, is already under severe duress.

In 2006, Maki Fukasawa coined the term ‘herbivore men’ to describe the rising number of Japanese males expressing disinterest in marital or romantic relationships with women. And the numbers have only continued to rise. A 2016 National Institute of Population and Social Security Research survey reveals that 42 percent of Japanese men between the ages of 18 and 34 remain virgins, up six percent from a mere five years previous. For Japan, a nation already struggling with the economic consequences of depopulation, such numbers are disastrous.

And Japan is not alone. In Man, Interrupted: Why Young Men are Struggling and What We Can Do About It, Philip Zimbardo (of Stanford Prison Experiment fame) and Nikita Coulombe provide a detailed account of how technological transformations—primarily online porn, video-gaming, and virtual peer groups—are undermining the ability of American boys to academically achieve as well as maintain successful relationships. They see phenomena such as the growing MGTOW (‘men going their own way’) movement as the product of the way exposure to virtual, technological environments leaves them ill-equipped to deal with the rigours of genuine social interaction.

More recently, Jean Twenge, a psychologist at San Diego State University, has sounded the alarm on the catastrophic consequences of smartphone use for post-Millennials, arguing that “the twin rise of the smartphone and social media has caused an earthquake of a magnitude we’ve not seen in a very long time, if ever.” The primary culprit: loneliness. “For all their power to link kids day and night, social media also exacerbate the age-old teen concern about being left out.” Social media, in other words, seem to be playing the same function as the Cyberball game used by researchers to neuroimage the pain of social rejection. Only this time the experiment involves an entire generation of kids, and the game has no end.

The list of curious and troubling phenomena apparently turning on the ways mere connectivity has transformed our social ecology is well-nigh endless. Merely changing how we push one another’s Darwinian buttons, in other words, has impacted the human social ecology in historically unprecedented ways. And by all accounts, we find ourselves becoming more isolated, more alienated, than at any other time in human history.

So what happens when we change the who? What happens when the heaven of social belonging goes on sale?

Good question. There is no “Centre for the Scientific Study of Human Meaning” in the world. Within the HCI community, criticism is primarily restricted to the cognitivist/post-cognitivist debate, the question of whether cognition is intrinsically independent of, or dependent on, an agent’s ongoing environmental interactions. As the preceding should make clear, numerous disciplines find themselves wandering this or that section of the domain, but we have yet to organize any institutional pursuit of the questions posed here. Human social ecology, the study of human interaction in biologically amenable terms, remains the province of storytellers.

We quite literally have no clue as to what we are about to do.

Consider Mark Zuckerberg’s and Elon Musk’s recent ‘debate’ regarding the promise and threat of AI. Musk, of course, has garnered headlines for quite some time with fears of artificial superintelligence. He’s famously called AI “our biggest existential threat,” openly referring to Skynet and the prospect of robots mowing down civilians on the streets. On a Sunday this past July, Zuckerberg went live in his Palo Alto backyard while smoking meats to host an impromptu Q&A. At the fifty-minute mark, he answers a question regarding Musk’s fears, and responds, “I think people who are naysayers and try to drum up these doomsday scenarios—I don’t understand it. It’s really negative and in some ways I think it’s pretty irresponsible.”

On the Tuesday following, Musk tweeted in response: “I’ve talked to Mark about this. His understanding of the subject is limited.”

To the extent that human interaction is ecological (and how could it be otherwise?), both can be accused of irresponsibility and limited understanding. The threat of ‘superintelligence,’ though perhaps inevitable, remains far enough in the future to easily dismiss as a bogeyman. The same can be said regarding “peak human” arguments predicting mass unemployment. The threat of economic disruption, though potentially dire, is counter-balanced by the promise of new, unforeseen economic opportunity. This leaves us with the countless number of ways AI will almost certainly improve our lives: fewer car crashes, fewer misdiagnoses, and so on. As a result, one can predict how all such exchanges will end.

The contemporary AI debate, in other words, is largely a pseudo-debate.

The futurist Richard Yonck’s account of ‘affective computing’ somewhat redresses this problem in his recently released Heart of the Machine, but since he begins with the presupposition that AI represents a natural progression, that the technological destruction of ancestral social habitats is the ancestral habitat of humanity, he remains largely blind to the social ecological consequences of his subject matter. Espousing a kind of technological fatalism (or worse, fundamentalism), he characterizes AI as the culmination of a “buddy movie” as old as humanity itself. The oxymoronic, if not contradictory, prospect of ‘artificial friends’ simply does not dawn on him.

Neil Lawrence, a professor of machine learning at the University of Sheffield and technology columnist at The Guardian, is the rare expert who recognizes the troubling ecological dimensions of the AI revolution. Borrowing the distinction between System Two, or conscious, ‘mindful’ problem-solving, and System One, or unconscious, ‘mindless’ problem-solving, from cognitive psychology, he warns of what he calls System Zero, what happens when the market—via Big Data, social media, and artificial intelligence—all but masters our Darwinian buttons. As he writes,

“The actual intelligence that we are capable of creating within the next 5 years is an unregulated System Zero. It won’t understand social context, it won’t understand prejudice, it won’t have a sense of a larger human objective, it won’t empathize. It will be given a particular utility function and it will optimize that to its best capability regardless of the wider negative effects.”

To the extent that modern marketing (and propaganda) techniques already seek to cue emotional as opposed to rational responses, however, there’s a sense in which ‘System Zero’ and consumerism are coeval. Also, economics comprises but a single dimension of human social ecology. We have good reason to fear that Lawrence’s doomsday scenario, one where market and technological forces conspire to transform us into ‘consumer Borg,’ understates the potential catastrophe that awaits.
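For readers who want a concrete picture of Lawrence’s worry, here is a minimal toy sketch of my own (not Lawrence’s model): an optimizer handed a single utility function, engagement say, is structurally blind to any cost it was never asked to model. The item structure and numbers are invented purely for illustration.

```python
# A toy 'System Zero' (illustration only, not Lawrence's model): the optimizer
# maximizes one engagement metric and never sees the social-cost column,
# because no such term appears in its utility function.

import random

random.seed(0)
# Hypothetical content items: (engagement_score, social_cost); values invented.
items = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(10)]

def utility(item):
    """All the optimizer 'sees': engagement, and nothing else."""
    engagement, _social_cost = item
    return engagement

def recommend(candidates, k=3):
    """Greedily pick the k highest-utility items."""
    return sorted(candidates, key=utility, reverse=True)[:k]

chosen = recommend(items)
print("total engagement served:", round(sum(e for e, _ in chosen), 2))
print("social cost never modelled:", round(sum(c for _, c in chosen), 2))
```

The point is structural: nothing in the loop can register ‘wider negative effects,’ however large, because they were never part of the objective.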

The closest one gets to a genuine analysis of the interpersonal consequences of AI lies in movies such as Spike Jonze’s science-fiction masterpiece, Her, or the equally brilliant HBO series Westworld, created by Jonathan Nolan and Lisa Joy. ‘Science fiction,’ however, happens to be the blanket term AI optimists use to dismiss their critical interlocutors.

When it comes to assessing the prospect of artificial intelligence, natural intelligence is failing us.

The internet was an easy sell. After all, what can be wrong with connecting likeminded people?

The problem, of course, is that we are the evolutionary product of small, highly interdependent, hunter-gatherer communities. Historically, those disposed to be permissive had no choice but to continually negotiate with those disposed to be authoritarian. Each party disliked the criticism of the other, but the daily rigors of survival forced them to get along. No longer. Only now, a mere two decades later, are we discovering the consequences of creating a society that systematically segregates permissives and authoritarians. The election of Donald Trump has, if nothing else, demonstrated the degree to which technology has transformed human social ecology in novel, potentially disastrous ways.

AI has also been an easy sell—at least so far. After all, what can be wrong with humanizing our technological environments? Imagine a world where everything is ‘user friendly,’ compliant to our most petulant wishes. What could be wrong with that?

Well, potentially everything, insofar as ‘humanizing our environments’ amounts to dehumanizing our social ecology, replacing the systems we are adapted to solve, our fellow humans, with systems possessing no evolutionary precedent whatsoever, machines designed to push our buttons in ways that optimize hidden commercial interests. Social pollution, in effect.

Throughout the history of our species, finding social heaven has required risking social hell. Human beings are as prone to be demanding, competitive, hurtful—anything but ‘user friendly’—as otherwise. Now the industrial giants of the early 21st century are promising to change all that, to flood the spaces between us with machines designed to shoulder the onerous labour of community, citizenship, and yes, even love.

Imagine a social ecology populated by billions upon billions of junk intelligences. Imagine the solitary confinement of an inhuman crowd. How will we find one another? How will we tolerate the hypersensitive infants we now seem doomed to become?

Visions of the Semantic Apocalypse: James Andow and Dispositional Metasemantics

by rsbakker

The big problem faced by dispositionalist accounts of meaning lies in their inability to explain the apparent normativity of meaning. Claims that the meaning of X turns on the disposition to utter ‘X’ require some way to explain the pragmatic dimensions of meaning, the fact that ‘X’ can be both shared and misapplied. Every attempt to pin meaning to natural facts, even ones so fine-grained as dispositions, runs aground on the external relationality of the natural, the fact that things in the world just do not stand in relations of rightness or wrongness relative to one another. No matter how many natural parameters you pile onto your dispositions, you will still have no way of determining the correctness of any given application of X.

This problem falls into the wheelhouse of heuristic neglect. If we understand that human cognition is fractionate, then the inability of dispositions to solve for correctness pretty clearly indicates a conflict between cognitive subsystems. But if we let metacognitive neglect, our matter-of-fact blindness to our own cognitive constitution, dupe us into thinking we possess one big happy cognition, this conflict is bound to seem deeply mysterious, a clash of black cows in the night. And as history shows us, mysterious problems beget mysterious answers.

So for normativists, this means that only intentional cognition, those systems adapted to solve problems via articulations of ‘right or wrong’ talk, can hope to solve the theoretical nature of meaning. For dispositionalists, however, this amounts to ceding whole domains of nature as hostages to perpetual philosophical disputation. The only alternative, they think, is to collect and shuffle the cards yet again, in the hope that some articulation of natural facts will somehow lay correctness bare. The history of science, after all, is a history of uncovering hidden factors—a priori intuitions be damned. Even so, it remains very hard to understand how to stack external relations into normative relations. Ignorant of the structure of intentional cognition, and the differences between it and natural (mechanical) cognition, the dispositionalist assumes that meaning is real, and that since all real things are ultimately natural, meaning must have a natural locus and function. Both approaches find themselves stalled in different vestibules of the same crash space.

For me, the only way to naturalize meaning is to understand it not as something ‘real out there’ but as a component of intentional cognition, biologically understood. The trick lies in stacking external relations into the mirage of normative relations: laying out the heuristic misapplications generating traditional philosophical crash spaces. The actual functions of linguistic communication turn on the vast differential systems implementing it. We focus on the only things we apparently see. Given the intuition of sufficiency arising out of neglect, we assume these form autonomous systems. And so tools that allow conscious cognition to blindly mediate the function of vast differential systems—histories, both personal and evolutionary—become an ontological nightmare.

In “Zebras, Intransigence & Semantic Apocalypse: Problems for Dispositional Metasemantics,” James Andow considers the dispositionalist attempt to solve for normativity via the notion of ‘complete information.’ The title alone had me hooked (for obvious reasons), but the argument Andow lays out is a wry and fascinating one. Where dispositions to apply terms are neither right nor wrong, dispositions to apply terms given all relevant information seem to enable the discrimination of normative discrepancies between performances. The problem arises when one asks what counts as ‘all relevant information.’ Offloading determinacy onto relevant information simply raises the question of determinacy at the level of relevant information. What constrains ‘relevance’? What about future relevance? Andow chases this inability to delimit complete information to the most extreme case:

It seems pretty likely that there is information out there which would radically restructure the nature of human existence, make us abandon technologies, reconsider our values and place in nature, information that would lead us to restructure the political organization of our species, reconsider national boundaries, and the ‘artificial divisions’ which having distinct languages impose on us. The likely effect of complete information is semantic apocalypse. (Just to be clear—my claim here is not that it is likely we will undergo such a shift. Who is to say what volume of information humankind will become aware of before extinction? Rather, the claim is that the probable result of being exposed to all information which would alter one’s dispositions, i.e., complete information, would involve a radical overhaul in semantic dispositions).

This paragraph is brilliant, especially given the grand way it declares the semantic apocalypse only to parenthetically take it all back! For my money, though, Andow’s throwaway question, “Who is to say what volume of information humankind will become aware of before extinction?” is far and away the most pressing one. But then I see these issues in light of a far different theory of meaning.

What is the information threshold of semantic apocalypse?

Dispositionalism entails the possibility of semantic apocalypse to the degree the tendencies of biological systems are ecologically dependent, and so susceptible to gradual or catastrophic change. This draws out the importance of the semantic apocalypse as distinct from other forms of global catastrophe. A zombie apocalypse, for instance, might also count as a semantic apocalypse, but only if our dispositions to apply terms were radically transformed. It’s possible, in other words, to suffer a zombie apocalypse without suffering a semantic apocalypse. The physical systems underwriting meaning are not the same as the physical systems underwriting modern civilization. So long as some few of us linger, meaning lingers.

Meaning, in other words, can survive radical ecological destruction. (This is one of the reasons we remain, despite all our sophistication, largely blind to the issue of cognitive ecology: so far it’s been with us through thick and thin.) The advantage of dispositionalist approaches, Andow thinks, lies in the way they anchor meaning in our nature. One may dispute how ‘meanings’ find themselves articulated in intentional cognition more generally, while agreeing that intentional cognition is biological: a suite of sensitivities attuned to very specific sets of cues, leveraging reliable predictions. One can be agnostic on the ontological status of ‘meaning,’ in other words, and still agree that meaning talk turns on intentional cognition, which turns on heuristic capacities whose development we can track through childhood. So long as a catastrophe leaves those cues and their predictive power intact, it will not precipitate a semantic apocalypse.

So the question of the threshold of the semantic apocalypse becomes the question of the stability of a certain biological system of specialized sensitivities and correlations. Whatever collapses this system engenders the semantic apocalypse (which for Andow means the global indeterminacy of meanings, and for me the global unreliability of intentional cognition more generally). The thing to note here, however, is the ease with which such systems do collapse once the correlations between sensitivities and outcomes cease to be reliable. Meaning talk, in other words, is ecological, which is to say it requires its environments be a certain way to discharge ancestral functions.

Suddenly the summary dismissal of the genuine possibility of a semantic apocalypse becomes ill-advised. Ecologies can collapse in a wide variety of ways. The form any such collapse takes turns on the ‘pollutants’ and the systems involved. We have no assurance that human cognitive ecology is robust in all respects. Meaning may be able to survive a zombie apocalypse, but as an ecological artifact, it is bound to be vulnerable somehow.

That vulnerability, on my account, is cognitive technology. We see animals in charcoal across cave walls so easily because our visual systems leap to conclusions on the basis of so little information. The problem is that ‘so little information’ also means so easily reproduced. The world is presently engaged in a mammoth industrial research program bent on hacking every cue-based cognitive reflex we possess. More and more, the systems we evolved to solve our fellow human travellers will be contending with artificial intelligences dedicated to commercial exploitation. ‘Deep information,’ meanwhile, is already swamping the legal system, even further problematizing the folk conceptual (shallow information) staples that ground the system’s self-understanding. Creeping medicalization continues unabated, slowly scaling back warrant for things like character judgment in countless different professional contexts. The list goes on.

The semantic apocalypse isn’t simply possible: it’s happening.

Visions of the Semantic Apocalypse: A Critical Review of Yuval Noah Harari’s Homo Deus

by rsbakker


“Studying history aims to loosen the grip of the past,” Yuval Noah Harari writes. “It enables us to turn our heads this way and that, and to begin to notice possibilities that our ancestors could not imagine, or didn’t want us to imagine” (59). Thus does the bestselling author of Sapiens: A Brief History of Humankind rationalize his thoroughly historical approach to the question of our technological future in his fascinating follow-up, Homo Deus: A Brief History of Tomorrow. And so does he identify himself as a humanist, committed to freeing us from what Kant would have called ‘our tutelary natures.’ Like Kant, Harari believes knowledge will set us free.

Although by the end of the book it becomes difficult to understand what ‘free’ might mean here.

As Harari himself admits, “once technology enables us to re-engineer human minds, Homo sapiens will disappear, human history will come to an end and a completely new process will begin, which people like you and me cannot comprehend” (46). Now if you’re interested in mapping the conceptual boundaries of comprehending the posthuman, I heartily recommend David Roden’s skeptical tour de force, Posthuman Life: Philosophy at the Edge of the Human. Homo Deus, on the other hand, is primarily a book chronicling the rise and fall of contemporary humanism against the backdrop of apparent ‘progress.’ The most glaring question, of course, is whether Harari’s academic humanism possesses the resources required to diagnose the problems posed by the collapse of popular humanism. This challenge—the problem of using obsolescent vocabularies to theorize, not only the obsolescence of those vocabularies, but the successor vocabularies to come—provides an instructive frame through which to understand the successes and failures of this ambitious and fascinating book.

How good is Homo Deus? Well, for years people have been asking me for a lay point of entry for the themes explored here on Three Pound Brain and in my novels, and I’ve always been at a loss. No longer. Anyone surfing for reviews of the book is certain to find individuals carping about Harari not possessing the expertise to comment on x or y, but these critics never get around to explaining how any human could master all the silos involved in such an issue (while remaining accessible to a general audience, no less). Such criticisms amount to advocating that no one dare interrogate what could be the greatest challenge to ever confront humanity. In addition to erudition, Harari has the courage to concede ugly possibilities, the sensitivity to grasp complexities (as well as the limits they pose), and the creativity to derive something communicable. Even though I think his residual humanism conceals the true profundity of the disaster awaiting us, he glimpses more than enough to alert millions of readers to the shape of the Semantic Apocalypse. People need to know human progress likely has a horizon, a limit, that doesn’t involve environmental catastrophe or creating some AI God.

The problem is far more insidious and retail than most yet realize.

The grand tale Harari tells is a vaguely Western Marxist one, wherein culture (following Lukacs) is seen as a primary enabler of relations of power, a fundamental component of the ‘social a priori.’ The primary narrative conceit of such approaches belongs to the ancient Greeks: “[T]he rise of humanism also contains the seeds of its downfall,” Harari writes. “While the attempt to upgrade humans into gods takes humanism to its logical conclusion, it simultaneously exposes humanism’s inherent flaws” (65). For all its power, humanism possesses intrinsic flaws, blindnesses and vulnerabilities, that will eventually lead it to ruin. In a sense, Harari is offering us a ‘big history’ version of negative dialectic, attempting to show how the internal logic of humanism runs afoul of the very power it enables.

But that logic is also the very logic animating Harari’s encyclopedic account. For all its syncretic innovations, Homo Deus uses the vocabularies of academic or theoretical humanism to chronicle the rise and fall of popular or practical humanism. In this sense, the difference between Harari’s approach to the problem of the future and my own could not be more pronounced. On my account, academic humanism, far from enjoying critical or analytical immunity, is best seen as a crumbling bastion of pre-scientific belief, the last gasp of traditional apologia, the cognitive enterprise most directly imperilled by the rising technological tide, while we can expect popular humanism to linger for some time to come (if not indefinitely).

Homo Deus, in fact, exemplifies the quandary presently confronting humanists such as Harari: how the ‘creeping delegitimization’ of their theoretical vocabularies is slowly robbing them of any credible discursive voice. Harari sees the problem, acknowledging that “[w]e won’t be able to grasp the full implication of novel technologies such as artificial intelligence if we don’t know what minds are” (107). But the fact remains that “science knows surprisingly little about minds and consciousness” (107). We presently have no consensus-commanding, natural account of thought and experience—in fact, we can’t even agree on how best to formulate semantic and phenomenal explananda.

Humanity as yet lacks any workable, thoroughly naturalistic, theory of meaning or experience. For Harari this means the bastion of academic humanism, though besieged, remains intact, at least enough for him to advance his visions of the future. Despite the perplexity and controversies occasioned by our traditional vocabularies, they remain the only game in town, the very foundation of countless cognitive activities. “[T]he whole edifice of modern politics and ethics is built upon subjective experiences,” Harari writes, “and few ethical dilemmas can be solved by referring strictly to brain activities” (116). Even though his posits lie nowhere in the natural world, they nevertheless remain subjective realities, the necessary condition of solving countless problems. “If any scientist wants to argue that subjective experiences are irrelevant,” Harari writes, “their challenge is to explain why torture or rape are wrong without reference to any subjective experience” (116).

This is the classic humanistic challenge posed to naturalistic accounts, of course, the demand that they discharge the specialized functions of intentional cognition the same way intentional cognition does. This demand amounts to little more than a canard, of course, once we appreciate the heuristic nature of intentional cognition. The challenge intentional cognition poses to natural cognition is to explain, not replicate, its structure and dynamics. We clearly evolved our intentional cognitive capacities, after all, to solve problems natural cognition could not reliably solve. This combination of power, economy, and specificity is the very thing that a genuinely naturalistic theory of meaning (such as my own) must explain.

 

“… fiction might thereby become the most potent force on earth, surpassing even wayward asteroids and natural selection. Hence if we want to understand our future, cracking genomes and crunching numbers is hardly enough. We must decipher the fictions that give meaning to the world.”

 

So moving forward it is important to understand how his theoretical approach elides the very possibility of a genuinely post-intentional future. Because he has no natural theory of meaning, he has no choice but to take the theoretical adequacy of his intentional idioms for granted. But if his intentional idioms possess the resources he requires to theorize the future, they must somehow remain out of play; his discursive ‘subject position’ must possess some kind of immunity to the scientific tsunami climbing our horizons. His very choice of tools limits the radicality of the story he tells. No matter how profound, how encompassing, the transformational deluge, Harari must somehow remain dry upon his theoretical ark. And this, as we shall see, is what ultimately swamps his conclusions.

But if the Hard Problem exempts his theoretical brand of intentionality, one might ask why it doesn’t exempt all intentionality from scientific delegitimation. What makes the scientific knowledge of nature so tremendously disruptive to humanity is the fact that human nature is, when all is said and done, just more nature. Conceding general exceptionalism, the thesis that humans possess something miraculous distinguishing them from nature more generally, would undermine the very premise of his project.

Without any way out of this bind, Harari fudges, basically. He remains silent on his own intentional (even humanistic) theoretical commitments, while attacking exceptionalism by expanding the franchise of meaning and consciousness to include animals: whatever intentional phenomena consist in, they are ultimately natural to the extent that animals are natural.

But now the problem has shifted. If humans dwell on a continuum with nature more generally, then what explains the Anthropocene, our boggling dominion of the earth? Why do humans stand so drastically apart from nature? The capacity that most distinguishes humans from their nonhuman kin, Harari claims (in line with contemporary theories), is the capacity to cooperate. He writes:

“the crucial factor in our conquest of the world was our ability to connect many humans to one another. Humans nowadays completely dominate the planet not because the individual human is far more nimble-fingered than the individual chimp or wolf, but because Homo sapiens is the only species on earth capable of cooperating flexibly in large numbers.” 131

He poses a ‘shared fictions’ theory of mass social coordination (unfortunately, he doesn’t engage research on groupishness, which would have provided him with some useful, naturalistic tools, I think). He posits an intermediate level of existence between the objective and subjective, the ‘intersubjective,’ consisting of our shared beliefs in imaginary orders, which serve to distribute authority and organize our societies. “Sapiens rule the world,” he writes, “because only they can weave an intersubjective web of meaning; a web of laws, forces, entities and places that exist purely in their common imagination” (149). This ‘intersubjective web’ provides him with the theoretical level of description he thinks crucial to understanding our troubled cultural future.

He continues:

“During the twenty-first century the border between history and biology is likely to blur not because we will discover biological explanations for historical events, but rather because ideological fictions will rewrite DNA strands; political and economic interests will redesign the climate; and the geography of mountains and rivers will give way to cyberspace. As human fictions are translated into genetic and electronic codes, the intersubjective reality will swallow up the objective reality and biology will merge with history. In the twenty-first century fiction might thereby become the most potent force on earth, surpassing even wayward asteroids and natural selection. Hence if we want to understand our future, cracking genomes and crunching numbers is hardly enough. We must decipher the fictions that give meaning to the world.” 151

The way Harari sees it, ideology, far from being relegated to the prescientific theoretical midden, is set to become all-powerful, a consumer of worlds. This launches his extensive intellectual history of humanity, beginning with the algorithmic advantages afforded by numeracy, literacy, and currency, how these “broke the data-processing limitations of the human brain” (158). Where our hunter-gatherer ancestors could at best coordinate small groups, “[w]riting and money made it possible to start collecting taxes from hundreds of thousands of people, to organise complex bureaucracies and to establish vast kingdoms” (158).

Harari then turns to the question of how science fits in with this view of fictions, the nature of the ‘odd couple,’ as he puts it:

“Modern science certainly changed the rules of the game, but it did not simply replace myths with facts. Myths continue to dominate humankind. Science only makes these myths stronger. Instead of destroying the intersubjective reality, science will enable it to control the objective and subjective realities more completely than ever before.” 179

Science is what renders objective reality compliant to human desire. Storytelling is what renders individual human desires compliant to collective human expectations, which is to say, intersubjective reality. Harari understands that the relationship between science and religious ideology is not one of straightforward antagonism: “science always needs religious assistance in order to create viable human institutions,” he writes. “Scientists study how the world functions, but there is no scientific method for determining how humans ought to behave” (188). Though science has plenty of resources for answering means-type questions—what you ought to do to lose weight, for instance—it lacks the resources to fix the ends that rationalize those means. Science, Harari argues, requires religion to the extent that it cannot ground the all-important fictions enabling human cooperation (197).

Insofar as science is a cooperative, human enterprise, it can only destroy one form of meaning on the back of some other meaning. By revealing the anthropomorphism underwriting our traditional, religious accounts of the natural world, science essentially ‘killed God’—which is to say, removed any divine constraint on our actions or aspirations. “The cosmic plan gave meaning to human life, but also restricted human power” (199). Like stage-actors, we had a plan, but our role was fixed. Unfixing that role, killing God, made meaning into something each of us has to find for ourselves. Harari writes:

“Since there is no script, and since humans fulfill no role in any great drama, terrible things might befall us and no power will come to save us, or give meaning to our suffering. There won’t be a happy ending or a bad ending, or any ending at all. Things just happen, one after the other. The modern world does not believe in purpose, only in cause. If modernity has a motto, it is ‘shit happens.’” 200

The absence of a script, however, means that anything goes; we can play any role we want to. With the modern freedom from cosmic constraint comes postmodern anomie.

“The modern deal thus offers humans an enormous temptation, coupled with a colossal threat. Omnipotence is in front of us, almost within our reach, but below us yawns the abyss of complete nothingness. On the practical level, modern life consists of a constant pursuit of power within a universe devoid of meaning.” 201

Or to give it the Adornian spin it receives here on Three Pound Brain: the madness of a society that has rendered means, knowledge and capital, its primary end. Thus the modern obsession with the accumulation of the power to accumulate. And thus the Faustian nature of our present predicament (though Harari, curiously, never references Faust), the fact that “[w]e think we are smart enough to enjoy the full benefits of the modern deal without paying the price” (201). Even though physical resources such as material and energy are finite, no such limit pertains to knowledge. This is why “[t]he greatest scientific discovery was the discovery of ignorance” (212): it spurred the development of systematic inquiry, and therefore the accumulation of knowledge, and therefore the accumulation of power, which, Harari argues, cuts against objective or cosmic meaning. The question is simply whether we can hope to sustain this process—defer payment—indefinitely.

“Modernity is a deal,” he writes, and for all its apparent complexities, it is very straightforward: “The entire contract can be summarised in a single phrase: humans agree to give up meaning in exchange for power” (199). For me the best way of thinking about this process of exchanging meaning for power is in terms of what Weber called disenchantment: the very science that dispels our anthropomorphic fantasy worlds is the science that delivers technological power over the real world. This real-world power is what drives traditional delegitimation: even believers acknowledge the vast bulk of the scientific worldview, as do the courts and (ideally at least) all governing institutions outside religion. Science is a recursive institutional ratchet (‘self-correcting’), leveraging the capacity to leverage ever more capacity. Now, after centuries of sheltering behind walls of complexity, human nature finds itself at the intersection of multiple domains of scientific inquiry. Since we’re nothing special, just more nature, we should expect our burgeoning technological power over ourselves to increasingly delegitimate traditional discourses.

Humanism, on this account, amounts to an adaptation to the ways science transformed our ancestral ‘neglect structure,’ the landscape of ‘unknown unknowns’ confronting our prehistorical forebears. Our social instrumentalization of natural environments—our inclination to anthropomorphize the cosmos—is the product of our ancestral inability to intuit the actual nature of those environments. Information beyond the pale of human access makes no difference to human cognition. Cosmic meaning requires that the cosmos remain a black box: the more transparent science rendered that box, the more our rationales retreated to the black box of ourselves. The subjectivization of authority turns on how intentional cognition (our capacity to cognize authority) requires the absence of natural accounts to discharge ancestral functions. Humanism isn’t so much a grand revolution in thought as the result of the human remaining the last scientifically inscrutable domain standing. The rationalizations had to land somewhere. Since human meaning likewise requires that the human remain a black box, the vast industrial research enterprise presently dedicated to solving our nature does not bode well.

But this approach, economical as it is, isn’t available to Harari since he needs some enchantment to get his theoretical apparatus off the ground. As the necessary condition for human cooperation, meaning has to be efficacious. The ‘Humanist Revolution,’ as Harari sees it, consists in the migration of cooperative efficacy (authority) from the cosmic to the human. “This is the primary commandment humanism has given us: create meaning for a meaningless world” (221). Rather than scripture, human experience becomes the metric for what is right or wrong, and the universe, once the canvas of the priest, is conceded to the scientist. Harari writes:

“As the source of meaning and authority was relocated from the sky to human feelings, the nature of the entire cosmos changed. The exterior universe—hitherto teeming with gods, muses, fairies and ghouls—became empty space. The interior world—hitherto an insignificant enclave of crude passions—became deep and rich beyond measure” 234

This re-sourcing of meaning, Harari insists, is true whether or not one still believes in some omnipotent God, insofar as all the salient anchors of that belief lie within the believer, rather than elsewhere. God may still be ‘cosmic,’ but he now dwells beyond the canvas of nature, somewhere in the occluded frame, a place where only religious experience can access Him.

Man becomes ‘man the meaning maker,’ the trope that now utterly dominates contemporary culture:

“Exactly the same lesson is learned by Captain Kirk and Captain Jean-Luc Picard as they travel the galaxy in the starship Enterprise, by Huckleberry Finn and Jim as they sail down the Mississippi, by Wyatt and Billy as they ride their Harley-Davidsons in Easy Rider, and by countless other characters in myriad other road movies who leave their home town in Pennsylvania (or perhaps New South Wales), travel in an old convertible (or perhaps a bus), pass through various life-changing experiences, get in touch with themselves, talk about their feelings, and eventually reach San Francisco (or perhaps Alice Springs) as better and wiser individuals.” 241

Not only is experience the new scripture, it is a scripture that is being continually revised and rewritten, a meaning that arises out of the process of lived life (yet somehow always managing to conserve the status quo). In story after story, the protagonist must find some ‘individual’ way to derive their own personal meaning out of an apparently meaningless world. This is a primary philosophical motivation behind The Second Apocalypse, the reason why I think epic fantasy provides such an ideal narrative vehicle for the critique of modernity and meaning. Fantasy worlds are fantastic, especially fictional, because they assert the objectivity of what we now (implicitly or explicitly) acknowledge to be anthropomorphic projections. The idea has always been to invert the modernist paradigm Harari sketches above, to follow a meaningless character through a meaningful world, using Kellhus to recapitulate the very dilemma Harari sees confronting us now:

“What then, will happen once we realize that customers and voters never make free choices, and once we have the technology to calculate, design, or outsmart their feelings? If the whole universe is pegged to the human experience, what will happen once the human experience becomes just another designable product, no different in essence from any other item in the supermarket?” 277

And so Harari segues to the future and the question of the ultimate fate of human meaning; this is where I find his steadfast refusal to entertain humanistic conceit most impressive. One need not ponder ‘designer experiences’ for long, I think, to get a sense of the fundamental rupture with the past they represent. These once speculative issues are becoming ongoing practical concerns: “These are not just hypotheses or philosophical speculations,” simply because ‘algorithmic man’ is becoming a technological reality (284). Harari provides a whirlwind tour of unnerving experiments clearly implying trouble for our intuitions, a discussion that transitions into a consideration of the ways we can already mechanically attenuate our experiences. A good number of the examples he adduces have been considered here, all of them underscoring the same, inescapable moral: “Free will exists in the imaginary stories we humans have invented” (283). No matter what your philosophical persuasion, our continuity with the natural world is an established scientific fact. Humanity is not exempt from the laws of nature. If humanity is not exempt from the laws of nature, then the human mastery of nature amounts to the human mastery of humanity.

He turns, at this point, to Gazzaniga’s research showing the confabulatory nature of human rationalization (via split brain patients), and Daniel Kahneman’s account of ‘duration neglect’—another favourite of mine. He offers an expanded version of Kahneman’s distinction between the ‘experiencing self,’ that part of us that actually undergoes events, and the ‘narrating self,’ the part of us that communicates—derives meaning from—these experiences, essentially using the dichotomy as an emblem for the dual process models of cognition presently dominating cognitive psychological research. He writes:

“most people identify with their narrating self. When they say, ‘I,’ they mean the story in their head, not the stream of experiences they undergo. We identify with the inner system that takes the crazy chaos of life and spins out of it seemingly logical and consistent yarns. It doesn’t matter that the plot is filled with lies and lacunas, and that it is rewritten again and again, so that today’s story flatly contradicts yesterday’s; the important thing is that we always retain the feeling that we have a single unchanging identity from birth to death (and perhaps from even beyond the grave). This gives rise to the questionable liberal belief that I am an individual, and that I possess a consistent and clear inner voice, which provides meaning for the entire universe.” 299

Humanism, Harari argues, turns on our capacity for self-deception, the ability to commit to our shared fictions unto madness, if need be. He writes:

“Medieval crusaders believed that God and heaven provided their lives with meaning. Modern liberals believe that individual free choices provide life with meaning. They are all equally delusional.” 305

Social self-deception is our birthright, the ability to believe what we need to believe to secure our interests. This is why the science, though shaking humanistic theory to the core, has done so little to interfere with the practices rationalized by that theory. As history shows, we are quite capable of shovelling millions into the abattoir of social fantasy. This delivers Harari to yet another big theme explored both here and in Neuropath: the problems raised by the technological concretization of these scientific findings. As Harari puts it:

“However, once heretical scientific insights are translated into everyday technology, routine activities and economic structures, it will become increasingly difficult to sustain this double-game, and we—or our heirs—will probably require a brand new package of religious beliefs and political institutions. At the beginning of the third millennium, liberalism [the dominant variant of humanism] is threatened not by the philosophical idea that there are no free individuals but rather by concrete technologies. We are about to face a flood of extremely useful devices, tools and structures that make no allowance for the free will of individual humans. Can democracy, the free market and human rights survive this flood?” 305-6


The first problem, as Harari sees it, is one of diminishing returns. Humanism didn’t become the dominant world ideology because it was true; it overran the collective imagination of humanity because it enabled. Humanistic values, Harari explains, afforded our recent ancestors a wide variety of social utilities, efficiencies turning on the technologies of the day. Those technologies, it turns out, require human intelligence and the consciousness that comes with it. (To depart from Harari, they are what David Krakauer calls ‘complementary technologies,’ tools that extend human capacity, as opposed to ‘competitive technologies,’ which render human capacities redundant.)

Making humans redundant, of course, means making experience redundant, something which portends the systematic devaluation of human experience, or the collapse of humanism. Harari calls this process the ‘Great Decoupling’:

“Over the last decades there has been an immense advance in computer intelligence, but there has been exactly zero advance in computer consciousness. As far as we know, computers in 2016 are no more conscious than their prototypes in the 1950s. However, we are on the brink of a momentous revolution. Humans are in danger of losing their value, because intelligence is decoupling from consciousness.” 311

He’s quick to acknowledge all the problems yet confronting AI researchers, insisting that the trend unambiguously points toward ever-expanding capacities. As he writes, “these technical problems—however difficult—need only be solved once” (317). The ratchet never stops clicking.

He’s also quick to block the assumption that humans are somehow exceptional: “The idea that humans will always have a unique ability beyond the reach of non-conscious algorithms is just wishful thinking” (319). He provides the (I think) terrifying example of David Cope, the University of California at Santa Cruz musicologist who has developed algorithms whose compositions strike listeners as more authentically human than compositions by humans such as J.S. Bach.

The second problem is the challenge of what (to once again depart from Harari) Neil Lawrence calls ‘System Zero,’ the question of what happens when our machines begin to know us better than we know ourselves. As Harari notes, this is already the case: “The shifting of authority from humans to algorithms is happening all around us, not as a result of some momentous governmental decision, but due to a flood of mundane choices” (345). Facebook can now guess your preferences better than your friends, your family, your spouse—and in some instances better than you yourself! He warns the day is coming when political candidates can receive real-time feedback via social media, when people can hear everything said about them always and everywhere. Projecting this trend leads him to envision something very close to Integration, where we become so embalmed in our information environments that “[d]isconnection will mean death” (344).

He writes:

“The individual will not be crushed by Big Brother; it will disintegrate from within. Today corporations and governments pay homage to my individuality and promise to provide medicine, education and entertainment customized to my unique needs and wishes. But in order to do so, corporations and governments first need to break me up into biochemical subsystems, monitor these subsystems with ubiquitous sensors and decipher their workings with powerful algorithms. In the process, the individual will transpire to be nothing but a religious fantasy.” 345

This is my own suspicion, and I think the process of subpersonalization—the neuroscientifically informed decomposition of consumers into economically relevant behaviours—is well underway. But I think it’s important to realize that as data accumulates, and researchers and their AIs find more and more ways to instrumentalize those data sets, what we’re really talking about are proliferating heuristic hacks (that happen to turn on neuroscientific knowledge). They need decipher us only so far as we comply. Also, the potential noise generated by a plethora of competing subpersonal communications seems to constitute an important structural wrinkle. It could be that the point most targeted by subpersonal hacking will at least preserve the old borders of the ‘self,’ fantasy that it was. Post-intentional ‘freedom’ could come to reside in the noise generated by commercial competition.

The third problem he sees for humanism lies in the almost certainly unequal distribution of the dividends of technology, a trope so well worn in narrative that we scarce need consider it here. It follows that liberal humanism, as an ideology committed to the equal value of all individuals, has scant hope of squaring the interests of the redundant masses against those of a technologically enhanced superhuman elite.

 

… this isn’t any mere cultural upheaval or social revolution, this is an unprecedented transformation in the history of life on this planet, the point when the evolutionary platform of behaviour, morphology, becomes the product of behaviour.

 

Under pretty much any plausible scenario you can imagine, the shared fiction of popular humanism is doomed. But as Harari has already argued, shared fictions are the necessary condition of social coordination. If humanism collapses, some kind of shared fiction has to take its place. And alas, this is where my shared journey with Harari ends. From this point forward, I think his analysis is largely an artifact of his own, incipient humanism.

Harari uses the metaphor of ‘vacuum,’ implying that humans cannot but generate some kind of collective narrative, some way of making their lives not simply meaningful to themselves, but more importantly, meaningful to one another. It is the mass resemblance of our narrative selves, remember, that makes our mass cooperation possible. [This is what misleads him, the assumption that ‘mass cooperation’ need be human at all by this point.] So he goes on to consider what new fiction might arise to fill the void left by humanism. The first alternative is ‘technohumanism’ (transhumanism, basically), which is bent on emancipating humanity from the authority of nature much as humanism was bent on emancipating humanity from the authority of tradition. Where humanists are free to think anything in their quest to actualize their desires, technohumanists are free to be anything in their quest to actualize their desires.

The problem is that the freedom to be anything amounts to the freedom to reengineer desire. So where objective meaning, following one’s god (socialization), gave way to subjective meaning, following one’s heart (socialization), it remains entirely unclear what the technohumanist hopes to follow or to actualize. As soon as we gain power over our cognitive being, the question becomes, ‘Follow which heart?’

Or as Harari puts it,

“Techno-humanism faces an impossible dilemma here. It considers human will the most important thing in the universe, hence it pushes humankind to develop technologies that can control and redesign our will. After all, it’s tempting to gain control over the most important thing in the world. Yet once we have such control, techno-humanism will not know what to do with it, because the sacred human will would become just another designer product.” 366

Which is to say, something arbitrary. Where humanism aims ‘to loosen the grip of the past,’ transhumanism aims to loosen the grip of biology. We really see the limits of Harari’s interpretative approach here, I think, as well as why he falls short of a definitive account of the Semantic Apocalypse. The reason that ‘following your heart’ can substitute for ‘following the god’ is that they amount to the very same claim, ‘trust your socialization,’ which is to say, your pre-existing dispositions to behave in certain ways in certain contexts. The problem posed by the kind of enhancement extolled by transhumanists isn’t that shared fictions must be ‘sacred’ to be binding, but that something neglected must be shared. Synchronization requires trust, the ability to simultaneously neglect others (and thus dedicate behaviour to collective problem solving) and yet predict their behaviour nonetheless. Absent this shared background, trust is impossible, and therefore synchronization is impossible. Cohesive, collective action, in other words, turns on a vast amount of evolutionary and educational stage-setting, common cognitive systems stamped with common forms of training, all of it ancestrally impervious to direct manipulation. Insofar as transhumanism promises to place the material basis of individual desire within the compass of individual desire, it promises to throw our shared background to the winds of whimsy. Transhumanism is predicated on the ever-deepening distortion of our ancestral ecologies of meaning.

Harari reads transhumanism as a reductio of humanism, the point where the religion of individual empowerment unravels the very agency it purports to empower. Since he remains, at least residually, a humanist, he places ideology—what he calls the ‘intersubjective’ level of reality—at the foundation of his analysis. It is the mover and shaker here, what Harari believes will stamp objective reality and subjective reality both in its own image.

And the fact of the matter is, he really has no choice, given he has no other way of generalizing over the processes underwriting the growing Whirlwind that has us in its grasp. So when he turns to digitalism (or what he calls ‘Dataism’), it appears to him to be the last option standing:

“What might replace desires and experiences as the source of all meaning and authority? As of 2016, only one candidate is sitting in history’s reception room waiting for the job interview. This candidate is information.” 366

Meaning has to be found somewhere. Why? Because synchronization requires trust, and trust requires shared commitments to shared fictions, stories expressing those values we hold in common. As we have seen, science cannot determine ends, only means to those ends. Something has to fix our collective behaviour, and if science cannot, we will perforce turn to some kind of religion…

But what if we were to automate collective behaviour? There’s a second candidate that Harari overlooks, one which I think is far, far more obvious than digitalism (which remains, for all its notoriety, an intellectual position—and a confused one at that, insofar as it has no workable theory of meaning/cognition). What will replace humanism? Atavism… Fantasy. For all the care Harari places in his analyses, he overlooks how investing AI with ever-increasing social decision-making power simultaneously divests humans of that power, thus progressively relieving us of the need for shared values. The more we trust to AI, the less trust we require of one another. We need only have faith in the efficacy of our technical (and very objective) intermediaries; the system synchronizes us automatically in ways we need not bother knowing. Ideology ceases to be a condition of collective action. We need not have any stories regarding our automated social ecologies whatsoever, so long as we mind the diminishing explicit constraints the system requires of us.

Outside our dwindling observances, we are free to pursue whatever story we want. Screw our neighbours. And what stories will those be? Well, the kinds of stories we evolved to tell, which is to say, the kinds of stories our ancestors told to each other. Fantastic stories… such as those told by George R. R. Martin, Donald Trump, myself, or the Islamic state. Radical changes in hardware require radical changes in software, unless one has some kind of emulator in place. You have to be sensible to social change to ideologically adapt to it. “Islamic fundamentalists may repeat the mantra that ‘Islam is the answer,’” Harari writes, “but religions that lose touch with the technological realities of the day lose their ability even to understand the questions being asked” (269). But why should incomprehension or any kind of irrationality disqualify the appeal of Islam, if the basis of the appeal primarily lies in some optimization of our intentional cognitive capacities?

Humans are shallow information consumers by dint of evolution, and deep information consumers by dint of modern necessity. As that necessity recedes, it stands to reason our patterns of consumption will recede with it, that we will turn away from the malaise of perpetual crash space and find solace in ever more sophisticated simulations of worlds designed to appease our ancestral inclinations. As Harari himself notes, “Sapiens evolved in the African savannah tens of thousands of years ago, and their algorithms are just not built to handle twenty-first century data flows” (388). And here we come to the key to understanding the profundity, and perhaps even the inevitability of the Semantic Apocalypse: intentional cognition turns on cues which turn on ecological invariants that technology is even now rendering plastic. The issue here, in other words, isn’t so much a matter of ideological obsolescence as cognitive habitat destruction, the total rewiring of the neglected background upon which intentional cognition depends.

The thing people considering the future impact of technology need to pause and consider is that this isn’t any mere cultural upheaval or social revolution, this is an unprecedented transformation in the history of life on this planet, the point when the evolutionary platform of behaviour, morphology, becomes the product of behaviour. Suddenly a system that leveraged cognitive capacity via natural selection will be leveraging that capacity via neural selection—behaviourally. A change so fundamental pretty clearly spells the end of all ancestral ecologies, including the cognitive. Humanism is ‘disintegrating from within’ because intentional cognition itself is beginning to founder. The tsunami of information thundering above the shores of humanism is all deep information, information regarding what we evolved to ignore—and therefore trust. Small wonder, then, that it scuttles intentional problem-solving, generates discursive crash spaces that only philosophers once tripped into.

The more the mechanisms behind learning impediments are laid bare, the less the teacher can attribute performance to character, the more they are forced to adopt a clinical attitude. What happens when every impediment to learning is laid bare? Unprecedented causal information is flooding our institutions, removing more and more behaviour from the domain of character. Why? Because character judgments always presume individuals could have done otherwise, and presuming individuals could have done otherwise presumes that we neglect the actual sources of behaviour. Harari brushes this thought on a handful of occasions, writing, most notably:

“In the eighteenth century Homo sapiens was like a mysterious black box, whose inner workings were beyond our grasp. Hence when scholars asked why a man drew a knife and stabbed another to death, an acceptable answer said: ‘Because he chose to…” 282

But he fails to see the systematic nature of the neglect involved, and therefore the explanatory power it affords. Our ignorance of ourselves, in other words, determines not simply the applicability, but the solvency of intentional cognition as well. Intentional cognition allowed our ancestors to navigate opaque or ‘black box’ social ecologies. The role causal information plays in triggering intuitions of exemption is tuned to the efficacy of this system overall. By and large our ancestors exempted those individuals in those circumstances that best served their tribe as a whole. However haphazardly, moral intuitions involving causality served some kind of ancestral optimization. So when actionable causal information regarding our behaviour becomes available, we have no choice but to exempt those behaviours, no matter what kind of large scale distortions result. Why? Because it is the only moral thing to do.

Welcome to crash space. We know this is crash space as opposed to, say, scientifically informed enlightenment (the way it generally feels) simply by asking what happens when actionable causal information regarding our every behaviour becomes available. Will moral judgment become entirely inapplicable? For me, the free will debate has always been a paradigmatic philosophical crash space, a place where some capacity always seems to apply, yet consistently fails to deliver solutions because it does not. We evolved to communicate behaviour absent information regarding the biological sources of behaviour: is it any wonder that our cause-neglecting workarounds cannot square with the causes they work around? The growing institutional challenges arising out of the medicalization of character turn on the same cognitive short-circuit. How can someone who has no choice be held responsible?

Even as we drain the ignorance intentional cognition requires from our cognitive ecologies, we are flooding them with AI, what promises to be a deluge of algorithms trained to cue intentional cognition, impersonate persons, in effect. The evidence is unequivocal: our intentional cognitive capacities are easily cued out of school—in a sense, this is the cornerstone of their power, the ability to assume so much on the basis of so little information. But in ecologies designed to exploit intentional intuitions, this power and versatility becomes a tremendous liability. Even now litigators and lawmakers find themselves beset with the question of how intentional cognition should solve for environments flooded with artifacts designed to cue human intentional cognition to better extract various commercial utilities. The problems of the philosophers dwell in ivory towers no more.

First we cloud the water, then we lay the bait—we are doing this to ourselves, after all. We are taking our first stumbling steps into what is becoming a global social crash space. Intentional cognition is heuristic cognition. Since heuristic cognition turns on shallow information cues, we have good reason to assume that our basic means of understanding ourselves and our projects will be incompatible with deep information accounts. The more we learn about cognition, the more apparent this becomes, the more our intentional modes of problem-solving will break down. I’m not sure there’s anything much to be done at this point save getting the word out, empowering some critical mass of people with a notion of what’s going on around them. This is what Harari does to a remarkable extent with Homo Deus, something for which we may all have cause to thank him.

Science is steadily revealing the very sources intentional cognition evolved to neglect. Technology is exploiting these revelations, busily engineering emulators to pander to our desires, allowing us to shelter more and more skin from the risk and toil of natural and social reality. Designer experience is designer meaning. Thus the likely irony: the end of meaning will appear to be its greatest blooming, the consumer curled in the womb of institutional matrons, dreaming endless fantasies, living lives of spellbound delight, exploring worlds designed to indulge ancestral inclinations.

To make us weep and laugh for meaning, never knowing whether we are together or alone.

The Dim Future of Human Brilliance

by rsbakker

Moths to a flame

Humans are what might be called targeted shallow information consumers in otherwise unified deep information environments. We generally skim only what information we need—from our environments or ourselves—to effect reproduction, and nothing more. We neglect gamma radiation for good reason: ‘deep’ environmental information that makes no reproductive difference makes no cognitive difference. As the product of innumerable ancestral ecologies, human cognitive biology is ecological, adapted to specific, high-impact environments. As ecological, one might expect it to be every bit as vulnerable to ecological change as any other biological system.

Under the rubric of the Semantic Apocalypse, the ecological vulnerability of human cognitive biology has been my focus here at Three Pound Brain for quite some time. Blind to deep structures, human cognition largely turns on cues, sensitivity to information differentially related to the systems cognized. Sociocognition, where a mere handful of behavioural cues can trigger any number of predictive/explanatory assumptions, is paradigmatic of this. Think, for instance, of how easy it was for Ashley Madison to convince its predominantly male customers that living women were checking their profiles. This dependence on cues underscores a corresponding dependence on background invariance: sever the differential relations between the cues and the systems to be cognized (the way Ashley Madison did) and what should be sociocognition, the solution of some fellow human, becomes confusion (we find ourselves in ‘crash space’) or worse, exploitation (we find ourselves in instrumentalized crash space, or ‘cheat space’).
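
The dependence is easy to make concrete with a toy sketch of my own (nothing from the post itself): a crude ‘is this a human?’ heuristic that fires on three shallow cues, evaluated first in an ecology where those cues still track their targets, then in one engineered to emit them. The cue names, probabilities, and populations are invented purely for illustration.

```python
import random

# Toy illustration: cue-based 'sociocognition' in two ecologies.
CUES = ("has_photo", "sends_messages", "replies_quickly")

def looks_human(agent):
    # The heuristic consults only shallow cues; deep facts about the agent are neglected entirely.
    return all(agent[c] for c in CUES)

def make_agent(kind, engineered=False):
    if kind == "human":
        return {"kind": kind, **{c: random.random() < 0.9 for c in CUES}}
    # Ancestral ecology: non-humans rarely emit these cues.
    # Engineered ecology ('cheat space'): bots are built to emit all of them.
    p = 0.95 if engineered else 0.05
    return {"kind": kind, **{c: random.random() < p for c in CUES}}

def accuracy(engineered, n=10_000):
    agents = [make_agent(random.choice(("human", "bot")), engineered) for _ in range(n)]
    hits = sum((a["kind"] == "human") == looks_human(a) for a in agents)
    return hits / n

random.seed(0)
print("ancestral ecology :", accuracy(engineered=False))   # cues still track their targets
print("engineered ecology:", accuracy(engineered=True))    # same cues, decoupled from their targets
```

With these made-up numbers the heuristic is right roughly 85% of the time in the first ecology and falls to well under 50% in the second—not because the cues changed, but because what they were differentially related to did.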

So the questions I think we need to be asking are:

What effect does deep information have on our cognitive ecologies? The so-called ‘data deluge’ is nothing but an explosion in the availability of deep or ancestrally inaccessible information. What happens when targeted shallow information consumers suddenly find themselves awash in different kinds of deep information? A myriad of potential examples come to mind. Think of the way medicalization drives accommodation creep, how instructors are gradually losing the ability to judge character in the classroom. Think of the ‘fear of crime’ phenomenon, how the assessment of ancestrally unavailable information against implicit, ancestral baselines skews general perceptions of criminal threat. For that matter, think of the free will debate, or the way mechanistic cognition scrambles intentional cognition more generally: these are paradigmatic instances of the way deep information, the primary deliverance of science, crashes the targeted and shallow cognitive capacities that comprise our evolutionary inheritance.

What effect does background variation have on targeted, shallow modes of cognition? What happens when cues become differentially detached, or ‘decoupled,’ from their ancestral targets? Where the first question deals with the way the availability of deep information (literally, not metaphorically) pollutes cognitive ecologies, the way human cognition requires the absence of certain information, this question deals with the way human cognition requires the presence of certain environmental continuities. There’s actually been an enormous amount of research done on this question in a wide variety of topical guises. Nikolaas Tinbergen coined the term “supernormal stimuli” to designate ecologically variant cuing, particularly the way exaggerated stimuli can trigger misapplications of different heuristic regimes. He famously showed how gull chicks, for instance, could be fooled into pecking false “super beaks” for food given only a brighter-than-natural red spot. In point of fact, you see supernormal stimuli in dramatic action anytime you see artificial outdoor lighting surrounded by a haze of bugs: insects that use lunar transverse orientation to travel at night continually correct their course vis-à-vis streetlights, porch lights, and so on, causing them to spiral directly into the light. What Tinbergen and subsequent ethology researchers have demonstrated is the ubiquity of cue-based cognition, the fact that all organisms are targeted, shallow information consumers in unified deep information environments.
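
The geometry behind the spiral is easy to check with a few lines of simulation (again, a minimal sketch of my own; the lamp position, step size, and the 80° ‘hold’ angle are arbitrary choices): an insect that keeps the light at a fixed angle off its heading flies dead straight when the light is effectively at infinity, and corkscrews inward when the light is a nearby lamp.

```python
import math

def fly(light_at_infinity, steps=2000, step_len=1.0, hold_angle=80.0):
    """Transverse orientation: at each step the insect re-orients so the light
    sits at a fixed angle (hold_angle, in degrees) off its current heading."""
    x, y = 1000.0, 0.0                       # start 1000 units from the origin
    offset = math.radians(hold_angle)
    for _ in range(steps):
        if light_at_infinity:
            bearing = math.radians(90.0)     # the 'moon': same bearing from everywhere
        else:
            bearing = math.atan2(-y, -x)     # a lamp sitting at the origin
        heading = bearing + offset           # keep the light at a constant relative angle
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
    return x, y

start = (1000.0, 0.0)
moon_end = fly(light_at_infinity=True)
lamp_end = fly(light_at_infinity=False)
print("moon: displacement from start =", round(math.dist(start, moon_end)))  # a straight 2000-unit leg
print("lamp: final distance to lamp  =", round(math.hypot(*lamp_end)))       # hundreds of units closer, spiralling in
```

Same rule, same insect; only the background has changed, and a navigation heuristic that worked for millions of years now delivers the moth to the porch light.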

Deirdre Barrett has recently applied the idea to modern society, but lacking any theory of meaning, she finds herself limited to pointing out suggestive speculative parallels between ecological readings and phenomena that are otherwise semantically overdetermined. For me this question calves into a wide variety of domain-specific forms, but there’s an important distinction to be made between the decoupling of cues generally and strategic decoupling, between ‘crash space’ and ‘cheat space.’ Where the former involves incidental cognitive incapacity, human versions of transverse orientation, the latter involves engineered cognitive incapacity. The Ashley Madison case I referenced above provides an excellent example of just how little information is needed to cue our sociocognitive systems in online environments. In one sense, this facility evidences the remarkable efficiency of human sociocognition, the fact that it can do so much with so little. But, as with specialization in evolution more generally, this efficiency comes at the cost of ecological dependency: you can only neglect information in problem-solving so long as the systems ignored remain relatively constant.

And this is basically the foundational premise of the Semantic Apocalypse: intentional cognition, as a radically specialized system, is especially vulnerable to both crashing and cheating. The very power of our sociocognitive systems is what makes them so liable to be duped (think religious anthropomorphism), as well as so easy to dupe. When Sherry Turkle, for instance, bemoans the ease with which various human-computer interfaces, or ‘HCIs,’ push our ‘Darwinian buttons,’ she is talking about the vulnerability of sociocognitive cues to various cheats (but since she, like Barrett, lacks any theory of meaning, she finds herself in similar explanatory straits). In a variety of experimental contexts, for instance, people have been found to trust artificial interlocutors over human ones. Simple tweaks in the voices and appearance of HCIs have a dramatic impact on our perceptions of those encounters—we are in fact easily manipulated, cued to draw erroneous conclusions, given what are quite literally cartoonish stimuli. So the so-called ‘internet of things,’ the distribution of intelligence throughout our artifactual ecologies, takes on a far more sinister cast when viewed through the lens of human sociocognitive specialization. Populating our ecologies with gadgets designed to cue our sociocognitive capacities ‘out of school’ will only degrade the overall utility of those capacities. Since those capacities underwrite what we call meaning or ‘intentionality,’ the collapse of our ancestral sociocognitive ecologies signals the ‘death of meaning.’

The future of human cognition looks dim. We can say this because we know human cognition is heuristic, and that specific forms of heuristic cognition turn on specific forms of ecological stability, the very forms that our ongoing technological revolution promises to sweep away. Blind Brain Theory, in other words, offers a theory of meaning that not only explains away the hard problem, but can also leverage predictions regarding the fate of our civilization. It makes me dizzy thinking about it, and suspicious—the empty can, as they say, rattles the loudest. But this preposterous scope is precisely what we should expect from a genuinely naturalistic account of intentional phenomena. The power of mechanistic cognition lies in the way it scales with complexity, allowing us to build hierarchies of components and subcomponents. To naturalize meaning is to understand the soul in terms continuous with the cosmos.

This is precisely what we should expect from a theory delivering the Holy Grail, the naturalization of meaning.

You could even argue that the unsettling, even horrifying consequences evidence its veracity, given there are so many more ways for the world to contradict our parochial conceits than to appease them. We should expect things will end ugly.

On the Inapplicability of Philosophy to the Future

by rsbakker

By way of continuing the excellent conversation started in Lingering: The problem is that we evolved to be targeted, shallow information consumers in unified, deep information environments. As targeted, shallow information consumers we require two things: 1) certain kinds of information hygiene, and 2) certain kinds of background invariance. (1) is already in a state of free-fall, I think, and (2) is on the technological cusp. I don’t see any plausible way of reversing the degradation of either ecological condition, so I see the prospects for traditional philosophical discourses only diminishing. The only way forward that I can see is being honest about the preposterous enormity of the problem. The thought that rebranding old tools, tools that never delivered even when (1) and (2) were only beginning to erode, will suffice now that both are collapsing strikes me as implausible.