Three Pound Brain

No bells, just whistling in the dark…

Visions of the Semantic Apocalypse: A Critical Review of Yuval Noah Harari’s Homo Deus

by rsbakker


“Studying history aims to loosen the grip of the past,” Yuval Noah Harari writes. “It enables us to turn our heads this way and that, and to begin to notice possibilities that our ancestors could not imagine, or didn’t want us to imagine” (59). Thus does the bestselling author of Sapiens: A Brief History of Humankind rationalize his thoroughly historical approach to the question of our technological future in his fascinating follow-up, Homo Deus: A Brief History of Tomorrow. And so does he identify himself as a humanist, committed to freeing us from what Kant would have called ‘our tutelary natures.’ Like Kant, Harari believes knowledge will set us free.

Although by the end of the book it becomes difficult to understand what ‘free’ might mean here.

As Harari himself admits, “once technology enables us to re-engineer human minds, Homo sapiens will disappear, human history will come to an end and a completely new process will begin, which people like you and me cannot comprehend” (46). Now if you’re interested in mapping the conceptual boundaries of comprehending the posthuman, I heartily recommend David Roden’s skeptical tour de force, Posthuman Life: Philosophy at the Edge of the Human. Homo Deus, on the other hand, is primarily a book chronicling the rise and fall of contemporary humanism against the backdrop of apparent ‘progress.’ The most glaring question, of course, is whether Harari’s academic humanism possesses the resources required to diagnose the problems posed by the collapse of popular humanism. This challenge—the problem of using obsolescent vocabularies to theorize, not only the obsolescence of those vocabularies, but the successor vocabularies to come—provides an instructive frame through which to understand the successes and failures of this ambitious and fascinating book.

How good is Homo Deus? Well, for years people have been asking me for a lay point of entry for the themes explored here on Three Pound Brain and in my novels, and I’ve always been at a loss. No longer. Anyone surfing for reviews of the book is certain to find individuals carping about Harari not possessing the expertise to comment on x or y, but these critics never get around to explaining how any human could master all the silos involved in such an issue (while remaining accessible to a general audience, no less). Such criticisms amount to advocating that no one dare interrogate what could be the greatest challenge ever to confront humanity. In addition to erudition, Harari has the courage to concede ugly possibilities, the sensitivity to grasp complexities (as well as the limits they pose), and the creativity to derive something communicable. Even though I think his residual humanism conceals the true profundity of the disaster awaiting us, he glimpses more than enough to alert millions of readers to the shape of the Semantic Apocalypse. People need to know human progress likely has a horizon, a limit, that doesn’t involve environmental catastrophe or creating some AI God.

The problem is far more insidious and retail than most yet realize.

The grand tale Harari tells is a vaguely Western Marxist one, wherein culture (following Lukács) is seen as a primary enabler of relations of power, a fundamental component of the ‘social apriori.’ The primary narrative conceit of such approaches belongs to the ancient Greeks: “[T]he rise of humanism also contains the seeds of its downfall,” Harari writes. “While the attempt to upgrade humans into gods takes humanism to its logical conclusion, it simultaneously exposes humanism’s inherent flaws” (65). For all its power, humanism possesses intrinsic flaws, blindnesses and vulnerabilities, that will eventually lead it to ruin. In a sense, Harari is offering us a ‘big history’ version of negative dialectic, attempting to show how the internal logic of humanism runs afoul of the very power it enables.

But that logic is also the very logic animating Harari’s encyclopedic account. For all its syncretic innovations, Homo Deus uses the vocabularies of academic or theoretical humanism to chronicle the rise and fall of popular or practical humanism. In this sense, the difference between Harari’s approach to the problem of the future and my own could not be more pronounced. On my account, academic humanism, far from enjoying critical or analytical immunity, is best seen as a crumbling bastion of pre-scientific belief, the last gasp of traditional apologia, the cognitive enterprise most directly imperilled by the rising technological tide, while we can expect popular humanism to linger for some time to come (if not indefinitely).

Homo Deus, in fact, exemplifies the quandary presently confronting humanists such as Harari: how the ‘creeping delegitimization’ of their theoretical vocabularies is slowly robbing them of any credible discursive voice. Harari sees the problem, acknowledging that “[w]e won’t be able to grasp the full implication of novel technologies such as artificial intelligence if we don’t know what minds are” (107). But the fact remains that “science knows surprisingly little about minds and consciousness” (107). We presently have no consensus-commanding, natural account of thought and experience—in fact, we can’t even agree on how best to formulate semantic and phenomenal explananda.

Humanity as yet lacks any workable, thoroughly naturalistic theory of meaning or experience. For Harari this means the bastion of academic humanism, though besieged, remains intact, at least enough for him to advance his visions of the future. Despite the perplexity and controversies occasioned by our traditional vocabularies, they remain the only game in town, the very foundation of countless cognitive activities. “[T]he whole edifice of modern politics and ethics is built upon subjective experiences,” Harari writes, “and few ethical dilemmas can be solved by referring strictly to brain activities” (116). Even though his posits lie nowhere in the natural world, they nevertheless remain subjective realities, the necessary condition of solving countless problems. “If any scientist wants to argue that subjective experiences are irrelevant,” Harari writes, “their challenge is to explain why torture or rape are wrong without reference to any subjective experience” (116).

This is the classic humanistic challenge posed to naturalistic accounts, of course, the demand that they discharge the specialized functions of intentional cognition the same way intentional cognition does. This demand amounts to little more than a canard once we appreciate the heuristic nature of intentional cognition. The challenge intentional cognition poses to natural cognition is to explain, not replicate, its structure and dynamics. We clearly evolved our intentional cognitive capacities, after all, to solve problems natural cognition could not reliably solve. This combination of power, economy, and specificity is the very thing that a genuinely naturalistic theory of meaning (such as my own) must explain.


“… fiction might thereby become the most potent force on earth, surpassing even wayward asteroids and natural selection. Hence if we want to understand our future, cracking genomes and crunching numbers is hardly enough. We must decipher the fictions that give meaning to the world.”


So, moving forward, it is important to understand how his theoretical approach elides the very possibility of a genuinely post-intentional future. Because he has no natural theory of meaning, he has no choice but to take the theoretical adequacy of his intentional idioms for granted. But if his intentional idioms possess the resources he requires to theorize the future, they must somehow remain out of play; his discursive ‘subject position’ must possess some kind of immunity to the scientific tsunami climbing our horizons. His very choice of tools limits the radicality of the story he tells. No matter how profound, how encompassing, the transformational deluge, Harari must somehow remain dry upon his theoretical ark. And this, as we shall see, is what ultimately swamps his conclusions.

But if the Hard Problem exempts his theoretical brand of intentionality, one might ask why it doesn’t exempt all intentionality from scientific delegitimation. What makes the scientific knowledge of nature so tremendously disruptive to humanity is the fact that human nature is, when all is said and done, just more nature. Conceding general exceptionalism, the thesis that humans possess something miraculous distinguishing them from nature more generally, would undermine the very premise of his project.

Without any way out of this bind, Harari fudges, basically. He remains silent on his own intentional (even humanistic) theoretical commitments, while attacking exceptionalism by expanding the franchise of meaning and consciousness to include animals: whatever intentional phenomena consist in, they are ultimately natural to the extent that animals are natural.

But now the problem has shifted. If humans dwell on a continuum with nature more generally, then what explains the Anthropocene, our boggling dominion of the earth? Why do humans stand so drastically apart from nature? The capacity that most distinguishes humans from their nonhuman kin, Harari claims (in line with contemporary theories), is the capacity to cooperate. He writes:

“the crucial factor in our conquest of the world was our ability to connect many humans to one another. Humans nowadays completely dominate the planet not because the individual human is far more nimble-fingered than the individual chimp or wolf, but because Homo sapiens is the only species on earth capable of cooperating flexibly in large numbers.” 131

He poses a ‘shared fictions’ theory of mass social coordination (unfortunately, he doesn’t engage research on groupishness, which would have provided him with some useful, naturalistic tools, I think). He posits an intermediate level of existence between the objective and subjective, the ‘intersubjective,’ consisting of our shared beliefs in imaginary orders, which serve to distribute authority and organize our societies. “Sapiens rule the world,” he writes, “because only they can weave an intersubjective web of meaning; a web of laws, forces, entities and places that exist purely in their common imagination” (149). This ‘intersubjective web’ provides him with the theoretical level of description he thinks crucial to understanding our troubled cultural future.

He continues:

“During the twenty-first century the border between history and biology is likely to blur not because we will discover biological explanations for historical events, but rather because ideological fictions will rewrite DNA strands; political and economic interests will redesign the climate; and the geography of mountains and rivers will give way to cyberspace. As human fictions are translated into genetic and electronic codes, the intersubjective reality will swallow up the objective reality and biology will merge with history. In the twenty-first century fiction might thereby become the most potent force on earth, surpassing even wayward asteroids and natural selection. Hence if we want to understand our future, cracking genomes and crunching numbers is hardly enough. We must decipher the fictions that give meaning to the world.” 151

The way Harari sees it, ideology, far from being relegated to the prescientific theoretical midden, is set to become all powerful, a consumer of worlds. This launches his extensive intellectual history of humanity, beginning with the algorithmic advantages afforded by numeracy, literacy, and currency, and how these “broke the data-processing limitations of the human brain” (158). Where our hunter-gathering ancestors could at best coordinate small groups, “[w]riting and money made it possible to start collecting taxes from hundreds of thousands of people, to organise complex bureaucracies and to establish vast kingdoms” (158).

Harari then turns to the question of how science fits in with this view of fictions, the nature of the ‘odd couple,’ as he puts it:

“Modern science certainly changed the rules of the game, but it did not simply replace myths with facts. Myths continue to dominate humankind. Science only makes these myths stronger. Instead of destroying the intersubjective reality, science will enable it to control the objective and subjective realities more completely than ever before.” 179

Science is what renders objective reality compliant to human desire. Storytelling is what renders individual human desires compliant to collective human expectations, which is to say, intersubjective reality. Harari understands that the relationship between science and religious ideology is not one of straightforward antagonism: “science always needs religious assistance in order to create viable human institutions,” he writes. “Scientists study how the world functions, but there is no scientific method for determining how humans ought to behave” (188). Though science has plenty of resources for answering means-type questions—what you ought to do to lose weight, for instance—it lacks the resources to fix the ends that rationalize those means. Science, Harari argues, requires religion to the extent that it cannot ground the all-important fictions enabling human cooperation (197).

Insofar as science is a cooperative, human enterprise, it can only destroy one form of meaning on the back of some other meaning. By revealing the anthropomorphism underwriting our traditional, religious accounts of the natural world, science essentially ‘killed God’—which is to say, removed any divine constraint on our actions or aspirations. “The cosmic plan gave meaning to human life, but also restricted human power” (199). Like stage-actors, we had a plan, but our role was fixed. Unfixing that role, killing God, made meaning into something each of us has to find for ourselves. Harari writes:

“Since there is no script, and since humans fulfill no role in any great drama, terrible things might befall us and no power will come to save us, or give meaning to our suffering. There won’t be a happy ending or a bad ending, or any ending at all. Things just happen, one after the other. The modern world does not believe in purpose, only in cause. If modernity has a motto, it is ‘shit happens.’” 200

The absence of a script, however, means that anything goes; we can play any role we want to. With the modern freedom from cosmic constraint comes postmodern anomie.

“The modern deal thus offers humans an enormous temptation, coupled with a colossal threat. Omnipotence is in front of us, almost within our reach, but below us yawns the abyss of complete nothingness. On the practical level, modern life consists of a constant pursuit of power within a universe devoid of meaning.” 201

Or to give it the Adornian spin it receives here on Three Pound Brain: the madness of a society that has rendered means, knowledge and capital, its primary end. Thus the modern obsession with the accumulation of the power to accumulate. And thus the Faustian nature of our present predicament (though Harari, curiously, never references Faust), the fact that “[w]e think we are smart enough to enjoy the full benefits of the modern deal without paying the price” (201). Even though physical resources such as material and energy are finite, no such limit pertains to knowledge. This is why “[t]he greatest scientific discovery was the discovery of ignorance” (212): it spurred the development of systematic inquiry, and therefore the accumulation of knowledge, and therefore the accumulation of power, which, Harari argues, cuts against objective or cosmic meaning. The question is simply whether we can hope to sustain this process—defer payment—indefinitely.

“Modernity is a deal,” he writes, and for all its apparent complexities, it is very straightforward: “The entire contract can be summarised in a single phrase: humans agree to give up meaning in exchange for power” (199). For me the best way of thinking through this process of exchanging meaning for power is in terms of what Weber called disenchantment: the very science that dispels our anthropomorphic fantasy worlds is the science that delivers technological power over the real world. This real world power is what drives traditional delegitimation: even believers acknowledge the vast bulk of the scientific worldview, as do the courts and (ideally at least) all governing institutions outside religion. Science is a recursive institutional ratchet (‘self-correcting’), leveraging the capacity to leverage ever more capacity. Now, after centuries of sheltering behind walls of complexity, human nature finds itself at the intersection of multiple domains of scientific inquiry. Since we’re nothing special, just more nature, we should expect our burgeoning technological power over ourselves to increasingly delegitimate traditional discourses.

Humanism, on this account, amounts to an adaptation to the ways science transformed our ancestral ‘neglect structure,’ the landscape of ‘unknown unknowns’ confronting our prehistorical forebears. Our social instrumentalization of natural environments—our inclination to anthropomorphize the cosmos—is the product of our ancestral inability to intuit the actual nature of those environments. Information beyond the pale of human access makes no difference to human cognition. Cosmic meaning requires that the cosmos remain a black box: the more transparent science rendered that box, the more our rationales retreated to the black box of ourselves. The subjectivization of authority turns on how intentional cognition (our capacity to cognize authority) requires the absence of natural accounts to discharge ancestral functions. Humanism isn’t so much a grand revolution in thought as the result of the human remaining the last scientifically inscrutable domain standing. The rationalizations had to land somewhere. Since human meaning likewise requires that the human remain a black box, the vast industrial research enterprise presently dedicated to solving our nature does not bode well.

But this approach, economical as it is, isn’t available to Harari since he needs some enchantment to get his theoretical apparatus off the ground. As the necessary condition for human cooperation, meaning has to be efficacious. The ‘Humanist Revolution,’ as Harari sees it, consists in the migration of cooperative efficacy (authority) from the cosmic to the human. “This is the primary commandment humanism has given us: create meaning for a meaningless world” (221). Rather than scripture, human experience becomes the metric for what is right or wrong, and the universe, once the canvas of the priest, is conceded to the scientist. Harari writes:

“As the source of meaning and authority was relocated from the sky to human feelings, the nature of the entire cosmos changed. The exterior universe—hitherto teeming with gods, muses, fairies and ghouls—became empty space. The interior world—hitherto an insignificant enclave of crude passions—became deep and rich beyond measure.” 234

This re-sourcing of meaning, Harari insists, is true whether or not one still believes in some omnipotent God, insofar as all the salient anchors of that belief lie within the believer, rather than elsewhere. God may still be ‘cosmic,’ but he now dwells beyond the canvas of nature, somewhere in the occluded frame, a place where only religious experience can access Him.

Man becomes ‘man the meaning maker,’ the trope that now utterly dominates contemporary culture:

“Exactly the same lesson is learned by Captain Kirk and Captain Jean-Luc Picard as they travel the galaxy in the starship Enterprise, by Huckleberry Finn and Jim as they sail down the Mississippi, by Wyatt and Billy as they ride their Harley-Davidsons in Easy Rider, and by countless other characters in myriad other road movies who leave their home town in Pennsylvania (or perhaps New South Wales), travel in an old convertible (or perhaps a bus), pass through various life-changing experiences, get in touch with themselves, talk about their feelings, and eventually reach San Francisco (or perhaps Alice Springs) as better and wiser individuals.” 241

Not only is experience the new scripture, it is a scripture that is being continually revised and rewritten, a meaning that arises out of the process of lived life (yet somehow always managing to conserve the status quo). In story after story, the protagonist must find some ‘individual’ way to derive their own personal meaning out of an apparently meaningless world. This is a primary philosophical motivation behind The Second Apocalypse, the reason why I think epic fantasy provides such an ideal narrative vehicle for the critique of modernity and meaning. Fantasy worlds are fantastic, especially fictional, because they assert the objectivity of what we now (implicitly or explicitly) acknowledge to be anthropomorphic projections. The idea has always been to invert the modernist paradigm Harari sketches above, to follow a meaningless character through a meaningful world, using Kellhus to recapitulate the very dilemma Harari sees confronting us now:

“What then, will happen once we realize that customers and voters never make free choices, and once we have the technology to calculate, design, or outsmart their feelings? If the whole universe is pegged to the human experience, what will happen once the human experience becomes just another designable product, no different in essence from any other item in the supermarket?” 277

And so Harari segues to the future and the question of the ultimate fate of human meaning; this is where I find his steadfast refusal to entertain humanistic conceit most impressive. One need not ponder ‘designer experiences’ for long, I think, to get a sense of the fundamental rupture with the past they represent. These once speculative issues are becoming ongoing practical concerns: “These are not just hypotheses or philosophical speculations,” simply because ‘algorithmic man’ is becoming a technological reality (284). Harari provides a whirlwind tour of unnerving experiments clearly implying trouble for our intuitions, a discussion that transitions into a consideration of the ways we can already mechanically attenuate our experiences. A good number of the examples he adduces have been considered here, all of them underscoring the same, inescapable moral: “Free will exists in the imaginary stories we humans have invented” (283). No matter what your philosophical persuasion, our continuity with the natural world is an established scientific fact. Humanity is not exempt from the laws of nature. If humanity is not exempt from the laws of nature, then the human mastery of nature amounts to the human mastery of humanity.

He turns, at this point, to Gazzaniga’s research showing the confabulatory nature of human rationalization (via split-brain patients), and Daniel Kahneman’s account of ‘duration neglect’—another favourite of mine. He offers an expanded version of Kahneman’s distinction between the ‘experiencing self,’ that part of us that actually undergoes events, and the ‘narrating self,’ the part of us that communicates—derives meaning from—these experiences, essentially using the dichotomy as an emblem for the dual-process models of cognition presently dominating cognitive psychological research. He writes:

“most people identify with their narrating self. When they say, ‘I,’ they mean the story in their head, not the stream of experiences they undergo. We identify with the inner system that takes the crazy chaos of life and spins out of it seemingly logical and consistent yarns. It doesn’t matter that the plot is filled with lies and lacunas, and that it is rewritten again and again, so that today’s story flatly contradicts yesterday’s; the important thing is that we always retain the feeling that we have a single unchanging identity from birth to death (and perhaps from even beyond the grave). This gives rise to the questionable liberal belief that I am an individual, and that I possess a consistent and clear inner voice, which provides meaning for the entire universe.” 299

Humanism, Harari argues, turns on our capacity for self-deception, the ability to commit to our shared fictions unto madness, if need be. He writes:

“Medieval crusaders believed that God and heaven provided their lives with meaning. Modern liberals believe that individual free choices provide life with meaning. They are all equally delusional.” 305

Social self-deception is our birthright, the ability to believe what we need to believe to secure our interests. This is why the science, though shaking humanistic theory to the core, has done so little to interfere with the practices rationalized by that theory. As history shows, we are quite capable of shovelling millions into the abattoir of social fantasy. This delivers Harari to yet another big theme explored both here and in Neuropath: the problems raised by the technological concretization of these scientific findings. As Harari puts it:

“However, once heretical scientific insights are translated into everyday technology, routine activities and economic structures, it will become increasingly difficult to sustain this double-game, and we—or our heirs—will probably require a brand new package of religious beliefs and political institutions. At the beginning of the third millennium, liberalism [the dominant variant of humanism] is threatened not by the philosophical idea that there are no free individuals but rather by concrete technologies. We are about to face a flood of extremely useful devices, tools and structures that make no allowance for the free will of individual humans. Can democracy, the free market and human rights survive this flood?” 305-6


The first problem, as Harari sees it, is one of diminishing returns. Humanism didn’t become the dominant world ideology because it was true; it overran the collective imagination of humanity because it enabled. Humanistic values, Harari explains, afforded our recent ancestors a wide variety of social utilities, efficiencies turning on the technologies of the day. Those technologies, it turns out, require human intelligence and the consciousness that comes with it. To depart from Harari, they are what David Krakauer calls ‘complementary technologies,’ tools that extend human capacity, as opposed to ‘competitive technologies,’ which render human capacities redundant.

Making humans redundant, of course, means making experience redundant, something which portends the systematic devaluation of human experience, or the collapse of humanism. Harari calls this process the ‘Great Decoupling’:

“Over the last decades there has been an immense advance in computer intelligence, but there has been exactly zero advance in computer consciousness. As far as we know, computers in 2016 are no more conscious than their prototypes in the 1950s. However, we are on the brink of a momentous revolution. Humans are in danger of losing their value, because intelligence is decoupling from consciousness.” 311

He’s quick to acknowledge all the problems yet confronting AI researchers, insisting that the trend unambiguously points toward ever expanding capacities. As he writes, “these technical problems—however difficult—need only be solved once” (317). The ratchet never stops clicking.

He’s also quick to block the assumption that humans are somehow exceptional: “The idea that humans will always have a unique ability beyond the reach of non-conscious algorithms is just wishful thinking” (319). He provides the (I think) terrifying example of David Cope, the University of California at Santa Cruz musicologist who has developed algorithms whose compositions strike listeners as more authentically human than compositions by humans such as J.S. Bach.

The second problem is the challenge of what (to once again depart from Harari) Neil Lawrence calls ‘System Zero,’ the question of what happens when our machines begin to know us better than we know ourselves. As Harari notes, this is already the case: “The shifting of authority from humans to algorithms is happening all around us, not as a result of some momentous governmental decision, but due to a flood of mundane choices” (345). Facebook can now guess your preferences better than your friends, your family, your spouse—and in some instances better than you yourself! He warns the day is coming when political candidates can receive real-time feedback via social media, when people can hear everything said about them always and everywhere. Projecting this trend leads him to envision something very close to Integration, where we become so embalmed in our information environments that “[d]isconnection will mean death” (344).

He writes:

“The individual will not be crushed by Big Brother; it will disintegrate from within. Today corporations and governments pay homage to my individuality and promise to provide medicine, education and entertainment customized to my unique needs and wishes. But in order to do so, corporations and governments first need to break me up into biochemical subsystems, monitor these subsystems with ubiquitous sensors and decipher their workings with powerful algorithms. In the process, the individual will transpire to be nothing but a religious fantasy.” 345

This is my own suspicion, and I think the process of subpersonalization—the neuroscientifically informed decomposition of consumers into economically relevant behaviours—is well underway. But I think it’s important to realize that as data accumulates, and researchers and their AIs find more and more ways to instrumentalize those data sets, what we’re really talking about are proliferating heuristic hacks (that happen to turn on neuroscientific knowledge). They need decipher us only so far as we comply. Also, the potential noise generated by a plethora of competing subpersonal communications seems to constitute an important structural wrinkle. It could be that the point most targeted by subpersonal hacking will at least preserve the old borders of the ‘self,’ fantasy that it was. Post-intentional ‘freedom’ could come to reside in the noise generated by commercial competition.

The third problem he sees for humanism lies in the almost certainly unequal distribution of the dividends of technology, a trope so well worn in narrative that we scarce need consider it here. It follows that liberal humanism, as an ideology committed to the equal value of all individuals, has scant hope of squaring the interests of the redundant masses against those of a technologically enhanced superhuman elite.


… this isn’t any mere cultural upheaval or social revolution, this is an unprecedented transformation in the history of life on this planet, the point when the evolutionary platform of behaviour, morphology, becomes the product of behaviour.


Under pretty much any plausible scenario you can imagine, the shared fiction of popular humanism is doomed. But as Harari has already argued, shared fictions are the necessary condition of social coordination. If humanism collapses, some kind of shared fiction has to take its place. And alas, this is where my shared journey with Harari ends. From this point forward, I think his analysis is largely an artifact of his own, incipient humanism.

Harari uses the metaphor of ‘vacuum,’ implying that humans cannot but generate some kind of collective narrative, some way of making their lives not simply meaningful to themselves, but more importantly, meaningful to one another. It is the mass resemblance of our narrative selves, remember, that makes our mass cooperation possible. [This is what misleads him, the assumption that ‘mass cooperation’ need be human at all by this point.] So he goes on to consider what new fiction might arise to fill the void left by humanism. The first alternative is ‘technohumanism’ (transhumanism, basically), which is bent on emancipating humanity from the authority of nature much as humanism was bent on emancipating humanity from the authority of tradition. Where humanists are free to think anything in their quest to actualize their desires, technohumanists are free to be anything in their quest to actualize their desires.

The problem is that the freedom to be anything amounts to the freedom to reengineer desire. So where objective meaning, following one’s god (socialization), gave way to subjective meaning, following one’s heart (socialization), it remains entirely unclear what the technohumanist hopes to follow or to actualize. As soon as we gain power over our cognitive being the question becomes, ‘Follow which heart?’

Or as Harari puts it,

“Techno-humanism faces an impossible dilemma here. It considers human will the most important thing in the universe, hence it pushes humankind to develop technologies that can control and redesign our will. After all, it’s tempting to gain control over the most important thing in the world. Yet once we have such control, techno-humanism will not know what to do with it, because the sacred human will would become just another designer product.” 366

Which is to say, something arbitrary. Where humanism aims ‘to loosen the grip of the past,’ transhumanism aims to loosen the grip of biology. We really see the limits of Harari’s interpretative approach here, I think, as well as why he falls short of a definitive account of the Semantic Apocalypse. The reason that ‘following your heart’ can substitute for ‘following the god’ is that they amount to the very same claim, ‘trust your socialization,’ which is to say, your pre-existing dispositions to behave in certain ways in certain contexts. The problem posed by the kind of enhancement extolled by transhumanists isn’t that shared fictions must be ‘sacred’ to be binding, but that something neglected must be shared. Synchronization requires trust, the ability to simultaneously neglect others (and thus dedicate behaviour to collective problem solving) and yet predict their behaviour nonetheless. Absent this shared background, trust is impossible, and therefore synchronization is impossible. Cohesive, collective action, in other words, turns on a vast amount of evolutionary and educational stage-setting, common cognitive systems stamped with common forms of training, all of it ancestrally impervious to direct manipulation. Insofar as transhumanism promises to place the material basis of individual desire within the compass of individual desire, it promises to throw our shared background to the winds of whimsy. Transhumanism is predicated on the ever-deepening distortion of our ancestral ecologies of meaning.

Harari reads transhumanism as a reductio of humanism, the point where the religion of individual empowerment unravels the very agency it purports to empower. Since he remains, at least residually, a humanist, he places ideology—what he calls the ‘intersubjective’ level of reality—at the foundation of his analysis. It is the mover and shaker here, what Harari believes will stamp objective reality and subjective reality both in its own image.

And the fact of the matter is, he really has no choice, given he has no other way of generalizing over the processes underwriting the growing Whirlwind that has us in its grasp. So when he turns to digitalism (or what he calls ‘Dataism’), it appears to him to be the last option standing:

“What might replace desires and experiences as the source of all meaning and authority? As of 2016, only one candidate is sitting in history’s reception room waiting for the job interview. This candidate is information.” 366

Meaning has to be found somewhere. Why? Because synchronization requires trust, and trust requires shared commitments to shared fictions, stories expressing those values we hold in common. As we have seen, science cannot determine ends, only means to those ends. Something has to fix our collective behaviour, and if science cannot, we will perforce turn to some kind of religion…

But what if we were to automate collective behaviour? There’s a second candidate that Harari overlooks, one which I think is far, far more obvious than digitalism (which remains, for all its notoriety, an intellectual position—and a confused one at that, insofar as it has no workable theory of meaning/cognition). What will replace humanism? Atavism… Fantasy. For all the care Harari places in his analyses, he overlooks how investing AI with ever increasing social decision-making power simultaneously divests humans of that power, thus progressively relieving us of the need for shared values. The more we trust to AI, the less trust we require of one another. We need only have faith in the efficacy of our technical (and very objective) intermediaries; the system synchronizes us automatically in ways we need not bother knowing. Ideology ceases to be a condition of collective action. We need not have any stories regarding our automated social ecologies whatsoever, so long as we mind the diminishing explicit constraints the system requires of us.

Outside our dwindling observances, we are free to pursue whatever story we want. Screw our neighbours. And what stories will those be? Well, the kinds of stories we evolved to tell, which is to say, the kinds of stories our ancestors told to each other. Fantastic stories… such as those told by George R. R. Martin, Donald Trump, myself, or the Islamic state. Radical changes in hardware require radical changes in software, unless one has some kind of emulator in place. You have to be sensible to social change to ideologically adapt to it. “Islamic fundamentalists may repeat the mantra that ‘Islam is the answer,’” Harari writes, “but religions that lose touch with the technological realities of the day lose their ability even to understand the questions being asked” (269). But why should incomprehension or any kind of irrationality disqualify the appeal of Islam, if the basis of the appeal primarily lies in some optimization of our intentional cognitive capacities?

Humans are shallow information consumers by dint of evolution, and deep information consumers by dint of modern necessity. As that necessity recedes, it stands to reason our patterns of consumption will recede with it, that we will turn away from the malaise of perpetual crash space and find solace in ever more sophisticated simulations of worlds designed to appease our ancestral inclinations. As Harari himself notes, “Sapiens evolved in the African savannah tens of thousands of years ago, and their algorithms are just not built to handle twenty-first century data flows” (388). And here we come to the key to understanding the profundity, and perhaps even the inevitability of the Semantic Apocalypse: intentional cognition turns on cues which turn on ecological invariants that technology is even now rendering plastic. The issue here, in other words, isn’t so much a matter of ideological obsolescence as cognitive habitat destruction, the total rewiring of the neglected background upon which intentional cognition depends.

The thing people considering the future impact of technology need to pause and consider is that this isn’t any mere cultural upheaval or social revolution, this is an unprecedented transformation in the history of life on this planet, the point when the evolutionary platform of behaviour, morphology, becomes the product of behaviour. Suddenly a system that leveraged cognitive capacity via natural selection will be leveraging that capacity via neural selection—behaviourally. A change so fundamental pretty clearly spells the end of all ancestral ecologies, including the cognitive. Humanism is ‘disintegrating from within’ because intentional cognition itself is beginning to founder. The tsunami of information thundering above the shores of humanism is all deep information, information regarding what we evolved to ignore—and therefore trust. Small wonder, then, that it scuttles intentional problem-solving, generates discursive crash spaces that only philosophers once tripped into.

The more the mechanisms behind learning impediments are laid bare, the less the teacher can attribute performance to character, the more they are forced to adopt a clinical attitude. What happens when every impediment to learning is laid bare? Unprecedented causal information is flooding our institutions, removing more and more behaviour from the domain of character. Why? Because character judgments always presume individuals could have done otherwise, and presuming individuals could have done otherwise presumes that we neglect the actual sources of behaviour. Harari brushes this thought on a handful of occasions, writing, most notably:

“In the eighteenth century Homo sapiens was like a mysterious black box, whose inner workings were beyond our grasp. Hence when scholars asked why a man drew a knife and stabbed another to death, an acceptable answer said: ‘Because he chose to…’” 282

But he fails to see the systematic nature of the neglect involved, and therefore the explanatory power it affords. Our ignorance of ourselves, in other words, determines not simply the applicability, but the solvency of intentional cognition as well. Intentional cognition allowed our ancestors to navigate opaque or ‘black box’ social ecologies. The role causal information plays in triggering intuitions of exemption is tuned to the efficacy of this system overall. By and large our ancestors exempted those individuals in those circumstances that best served their tribe as a whole. However haphazardly, moral intuitions involving causality served some kind of ancestral optimization. So when actionable causal information regarding our behaviour becomes available, we have no choice but to exempt those behaviours, no matter what kind of large scale distortions result. Why? Because it is the only moral thing to do.

Welcome to crash space. We know this is crash space as opposed to, say, scientifically informed enlightenment (the way it generally feels) simply by asking what happens when actionable causal information regarding our every behaviour becomes available. Will moral judgment become entirely inapplicable? For me, the free will debate has always been a paradigmatic philosophical crash space, a place where some capacity always seems to apply, yet consistently fails to deliver solutions because it does not. We evolved to communicate behaviour absent information regarding the biological sources of behaviour: is it any wonder that our cause-neglecting workarounds cannot square with the causes they work around? The growing institutional challenges arising out of the medicalization of character turns on the same cognitive short-circuit. How can someone who has no choice be held responsible?

Even as we drain the ignorance intentional cognition requires from our cognitive ecologies, we are flooding them with AI, what promises to be a deluge of algorithms trained to cue intentional cognition, impersonate persons, in effect. The evidence is unequivocal: our intentional cognitive capacities are easily cued out of school—in a sense, this is the cornerstone of their power, the ability to assume so much on the basis of so little information. But in ecologies designed to exploit intentional intuitions, this power and versatility becomes a tremendous liability. Even now litigators and lawmakers find themselves beset with the question of how intentional cognition should solve for environments flooded with artifacts designed to cue human intentional cognition to better extract various commercial utilities. The problems of the philosophers dwell in ivory towers no more.

First we cloud the water, then we lay the bait—we are doing this to ourselves, after all. We are taking our first stumbling steps into what is becoming a global social crash space. Intentional cognition is heuristic cognition. Since heuristic cognition turns on shallow information cues, we have good reason to assume that our basic means of understanding ourselves and our projects will be incompatible with deep information accounts. The more we learn about cognition, the more apparent this becomes, and the more our intentional modes of problem-solving will break down. I’m not sure there’s anything much to be done at this point save getting the word out, empowering some critical mass of people with a notion of what’s going on around them. This is what Harari does to a remarkable extent with Homo Deus, something for which we may all have cause to thank him.

Science is steadily revealing the very sources intentional cognition evolved to neglect. Technology is exploiting these revelations, busily engineering emulators to pander to our desires, allowing us to shelter more and more skin from the risk and toil of natural and social reality. Designer experience is designer meaning. Thus the likely irony: the end of meaning will appear to be its greatest blooming, the consumer curled in the womb of institutional matrons, dreaming endless fantasies, living lives of spellbound delight, exploring worlds designed to indulge ancestral inclinations.

To make us weep and laugh for meaning, never knowing whether we are together or alone.

Autumnal Update

by rsbakker

I had some luck at the craps tables in Vegas, finishing $10 ahead, which is nothing short of a miracle. I made several others a small fortune shooting. Vegas, as far as I’m concerned, is the holiest city in the world, just edging out Jerusalem and Mecca, and leaving Rome in the dust. Why? Because of the living monumentality, all of it dedicated to the simulation of exceptionalism.

Just a few fiction related tidbits from the web (thanks to those of you who sent me links):

The Chicago Center for Literature and Photography has published an enthusiastic review of The Great Ordeal. My favourite blurb is,

“If you are waiting for George R.R. Martin’s next book to come out, make sure to read some R. Scott Bakker. It’s epic fantasy of an entirely different flavor, alien and grimdark, convoluted and terrifyingly beautiful.”

Dan Smyth over at Elitist Book Reviews has posted a decidedly less enthusiastic review (but still, a sight better than the personal offence he felt reading The White-Luck Warrior). When it comes to books, some people are prone to shout “poison!” when you serve them the wrong brand of tea.

The biggest SF&F website in Russia has posted a Russian translation of “The False Sun,” the preeminent Atrocity Tale, I think it’s safe to say.

Omni has listed Neuropath among the “Best Philosophically Driven Sci-Fi Books” in their Buyer’s Guide. My first experience with Omni came in grade seven, when Mr. Allen (who once gave me some Pascal to take home!) read “Sandkings” aloud to our class, convincing me there was something with staples cooler than comic books. So I guess you could say this closes an ancient circuit, wonder to wonder.

Look! Up in the sky!

by rsbakker



Derrida as Neurophenomenologist

by rsbakker


For the longest time I thought that unravelling the paradoxical nature of the now, understanding how it could be at once the same now and yet a different now entirely, was the key to resolving the problem of meaning and experience. The reason for this turned on my early philosophical love affair with Jacques Derrida, the famed French post-structuralist philosopher, who was very fond of writing passages such as this tidbit from “Differance”:

An interval must separate the present from what it is not in order for the present to be itself, but this interval that constitutes it as present must, by the same token, divide the present in and of itself, thereby also dividing, along with the present, everything that is thought on the basis of the present, that is, in our metaphysical language, every being, and singularly substance or the subject. In constituting itself, in dividing itself dynamically, this interval is what might be called spacing, the becoming-space of time or the becoming-time of space (temporization). And it is this constitution of the present, as an ‘originary’ and irreducibly nonsimple (and therefore, stricto sensu nonoriginary) synthesis of marks, or traces of retentions and protentions (to reproduce analogically and provisionally a phenomenological and transcendental language that soon will reveal itself to be inadequate), that I propose to call archi-writing, archi-traces, or differance. Which (is) (simultaneously) spacing (and) temporization. Margins of Philosophy, 13

One of the big problems faced by phenomenology has to do with time. The problem in a nutshell is that any phenomenon attended to is a present phenomenon, and as such dependent upon absent enormities—namely the past and the future. The phenomenologist suffers from what is sometimes referred to as a ‘keyhole problem,’ the question of whether the information available—‘experience’—warrants the kinds of claims phenomenologists are prone to make about the truth of experience. Does the so-called ‘phenomenological attitude’ possess the access phenomenology needs to ground its analyses? How could it, given so slight a keyhole as the present? Phenomenologists typically respond to the problem by invoking horizons, the idea that nonpresent contextual enormities nevertheless remain experientially accessible—present—as implicit features of the phenomenon at issue. You could argue that horizons scaffold the whole of reportable experience, insofar as so little, if anything, is available to us in its entirety at any given moment. We see and experience coffee cups, not perspectival slices of coffee cups. So in Husserl’s analysis of ‘time-consciousness,’ for instance, the past and future become intrinsic components of our experience of temporality as ‘retention’ and ‘protention.’ Even though absent, they nevertheless decisively structure phenomena. As such, they constitute important domains of phenomenological investigation in their own right.

From the standpoint of the keyhole problem, however, this answer simply doubles down on the initial question. Our experience of coffee cups is one thing, after all, and our experience of ourselves is quite another. How do we know we possess the information required to credibly theorize—make explicit—our implicit experience of the past as retention, say? After all, as Derrida says, retention is always present retention. There are, as he famously argues, two pasts, the one experienced, and the one outrunning the very possibility of experience (as its condition of possibility). Our experience of the present does not arise ‘from nowhere,’ nor does it arise in our present experience of the past, since that experience is also present. Thus what he calls the ‘trace,’ which might be understood as a ‘meta-horizon,’ or a ‘super-implicit,’ the absent enormity responsible for horizons that seem to shape content. The apparently sufficient, unitary structure of present experience contains a structurally occluded origin, a difference making difference, that can in no way appear within experience.

One way to put Derrida’s point is that there is always some occluded context, always some integral part of the background, driving phenomenology. From an Anglo-American, pragmatic viewpoint, his point is obvious, yet abstrusely and extravagantly made: Nothing is given, least of all meaning and experience. What Derrida is doing, however, is making this point within the phenomenological idiom, ‘reproducing’ it, as he says in the quote. The phenomenology itself reveals its discursive impossibility. His argument is ontological, not epistemic, and so requires speculative commitments regarding what is, rather than critical commitments regarding what can be known. Derrida is providing what might be called a ‘hyper-phenomenology,’ or even better, what David Roden terms dark phenomenology, showing how the apparently originary, self-sustaining, character of experience is a product of its derivative nature. The keyhole of the phenomenological attitude only appears theoretically decisive, discursively sufficient, because experience possesses horizons without a far side, meta-horizons—limits that cannot appear as such, and so appear otherwise, as something unlimited. Apodictic.

But since Derrida, like the phenomenologist, has only the self-same keyhole, he does what humans always do in conditions of radical low-dimensionality: he confuses the extent of his ignorance for a new and special kind of principle. Even worse, his theory of meaning is a semantic one: as an intentionalist philosopher, he works with all the unexplained explainers, all the classic theoretical posits, handed down by the philosophical tradition. And like most intentionalists, he doesn’t think there’s any way to escape those posits save by going through them. The deflecting, deferring, displacing outside, for Derrida, cannot appear inside as something ‘outer.’ Representation continually seals us in, relegating evidence of ‘differance’ to indirect observations of the kinds of semantic deformations that only it seems to explain, to the actual work of theoretical interpretation.

Now I’m sure this sounds like hokum to most souls reading this post, something artifactual. It should. Despite all my years as a Derridean, I now think of it as a discursive blight, something far more often used to avoid asking hard questions of the tradition than to pose them. But there is a kernel of neurophenomenological truth in his position. As I’ve argued in greater detail elsewhere, Derrida and deconstruction can be seen as an attempt to theorize the significance of source neglect in philosophical reflection generally, and phenomenology more specifically.

So far as ‘horizons’ belong to experience, they presuppose the availability of information required to behave in a manner sensitive to the recent past. So far as experience is ecological, we can suppose the information rendered will be geared to the solution of ancestral problem ecologies. We can suppose, in other words, that horizons are ecological, that the information rendered will be adequate to the problem-solving needs of our evolutionary ancestors. Now consider the mass-industrial character of the cognitive sciences, the sheer amount of resources, toil, and ingenuity dedicated to solving our own nature. This should convey a sense of the technical challenges any CNS faces attempting to cognize its own nature, and the reason why our keyhole has to be radically heuristic, a fractionate bundle of glimpses, each peering off in different directions to different purposes. The myriad problems this fact poses can be distilled into a single question: How much of the information rendered should we presume warrants theoretical generalizations regarding the nature of meaning and experience? This is the question upon which the whole of traditional philosophy presently teeters.

What renders the situation so dire is the inevitability of keyhole neglect, systematic insensitivity to the radically heuristic nature of the systems employed by philosophical reflection. Think of darkness, which, like pastness, lays out the limits of experience in experience as a ‘horizon.’ To say we suffer keyhole neglect is to say our experience of cognition lacks horizons, that we are doomed to confuse what little we see for everything there is. In the absence of darkness (or any other experiential marker of loss or impediment), unrestricted visibility is the automatic assumption. Short sensitivity to information cuing insufficiency, sufficiency is the default. What Heidegger and the continental tradition call the ‘Metaphysics of Presence’ can be seen as an attempt to tackle the problems posed by sufficiency in intentional terms. And likewise, Derrida’s purported oblique curative to the apparent inevitability of running afoul of the Metaphysics of Presence can be seen as a way of understanding the ‘sufficiency effects’ plaguing philosophical reflection in intentional terms.

The human brain suffers medial neglect, the congenital inability to track its own high-dimensional (material) processes. This means the human brain is insensitive to its own irreflexive materiality as such, and so presumes no such irreflexive materiality underwrites its own operations—even though, as anyone who has spent a great deal of time in stroke recovery wards can tell you, everything turns upon them. What we call ‘philosophical reflection’ is simply an artifact of this ecological limitation, a brain attempting to solve its nature with tools adapted to solve problems absent any information regarding that nature. Differance, trace, spacing: these are the ways Derrida theorizes the inevitability of irreflexive contingency from the far side of default sufficiency. I read Derrida as tracking the material shadow of thought via semantic terms. By occluding all antecedents, source neglect dooms reflection to the illusion of sufficiency when no such sufficiency exists. In this sense, positions like Derrida’s theory of meaning can be seen as impressionistic interpretations of what is a real biomechanical feature of consciousness. Attend to the metacognitive impression and meaning abides, and representation seems inescapable. The neuromechanical is occluded, so sourceless differentiation is all we seem to have, the magic of a now that is forever changing, yet miraculously abides.

The UK and World Release of THE GREAT ORDEAL

by rsbakker


So it’s been a busy summer. Fawk.

I answered next to no emails. The computer I’m writing on at this very moment is my only portal to the web, which tends to play laser light show to my kitten, so I avoided it like the plague, and managed to piss off a good number of people, I’m sure.

I’ve finished both “The Carathayan,” an Uster Scraul story for the Evil is a Matter of Perspective anthology, and a philowank Foreword entitled “On the Goodness of Evil.”

I submitted an outline for “Reading After the Death of Meaning,” an essay on literary criticism and eliminativism solicited by Palgrave for a critical anthology on Literature and Philosophy.

I finished a serious rewrite of The Unholy Consult, which I printed up and sent out to a few fellow humans for some critical feedback. My favourite line so far is “Perfect and insane”!

This brought back some memories, probably because I’m still basking in the post-coital glow of finishing The Unholy Consult. It really is hard to believe that I’m here, on the far side of the beast that has been gnawing at my creative bones for more than thirty years now. My agent has completed the deal with Overlook, so I can look forward to the odd night eating out, maybe even buying a drink or two!

And tomorrow, of course, is the day The Great Ordeal is set to be released in the UK and around the world. If you have a tub handy, thump away. Link the trailer if you think it might work. Or if you’re engaging an SF crowd, maybe link “Crash Space.” It would be nice to sell a bazillion books, but really, I would be happy selling enough to convince my publishers to continue investing in the Second Apocalypse.

To the Coffers, my friends. The Slog of Slogs is nearing its end.


On the Interpretation of Artificial Souls

by rsbakker


In “Is Artificial Intelligence Permanently Inscrutable?” Aaron M. Bornstein surveys the field of artificial neural networks, claiming that “[a]s exciting as their performance gains have been… there’s a troubling fact about modern neural networks: Nobody knows quite how they work.” The article is fascinating in its own right, and Peter over at Conscious Entities provides an excellent overview, but I would like to use it to flex a little theoretical muscle, and show the way the neural network ‘Inscrutability Problem’ turns on the same basic dynamics underwriting the apparent ‘hard problem’ of intentionality. Once you have a workable, thoroughly naturalistic account of cognition, you can begin to see why computer science finds itself bedevilled with strange parallels of the problems one finds in the philosophy of mind.

This parallel is evident in what Bornstein identifies as the primary issue, interpretability. The problem with artificial neural networks is that they are both contingent and incredibly complex. Recurrent neural networks operate by producing outputs conditioned by a selective history of previous conditionings, one captured in the weights connecting (typically) millions of artificial neurons arranged in multiple processing layers. Since discrepancies in output serve as the primary constraint, and since the process of deriving new outputs is driven by the contingencies of the system (to the point where even electromagnetic field effects can become significant), the complexity means that searching for the explanation—or canonical interpretation—of the system is akin to searching for a needle in a haystack.
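
For the code-inclined, the point is easy to make concrete. What follows is a minimal sketch of my own devising, not anything from Bornstein’s article (and feedforward rather than recurrent, for brevity): a tiny network trained by gradient descent on a cost function, where discrepancies in output do all the constraining.

```python
# A minimal sketch (my toy, not any system from Bornstein's article):
# a tiny feedforward network learning XOR by gradient descent on a
# squared-error cost. Even at this scale, what the trained net 'knows'
# lives in a blob of contingent weights.
import numpy as np

rng = np.random.default_rng(0)

# The four input/output pairs of XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights, randomly initialized: the net's entire
# 'history of conditionings' gets written into these numbers.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    h = sigmoid(X @ W1)              # hidden layer
    out = sigmoid(h @ W2)            # output layer
    err = out - y                    # discrepancy: the primary constraint
    d_out = err * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= 0.5 * (h.T @ d_out)        # nudge every weight downhill
    W1 -= 0.5 * (X.T @ d_h)

print(np.round(out, 2).ravel())      # ≈ [0, 1, 1, 0]: it works...
print(W1, W2, sep="\n")              # ...but the 'why' is two dozen numbers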

And as Bornstein points out, this has forced researchers to borrow “techniques from biological research that peer inside networks after the fashion of neuroscientists peering into brains: probing individual components, cataloguing how their internals respond to small changes in inputs, and even removing pieces to see how others compensate.” Unfortunately, importing neuroscientific techniques has resulted in importing neuroscience-like interpretative controversies as well. In “Could a neuroscientist understand a microprocessor?” Eric Jonas and Konrad Kording show how taking the opposite approach, using neuroscientific data analysis methods to understand the computational functions behind games like Donkey Kong and Space Invaders, fails no matter how much data they have available. The authors even go so far as to reference artificial neural network inscrutability as the problem, stating that “our difficulty at understanding deep learning may suggest that the brain is hard to understand if it uses anything like gradient descent on a cost function” (11).
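
Continuing the toy above (this snippet assumes the previous sketch’s X, y, W1, W2, and sigmoid are still in scope), here is the lesioning style of probe reduced to its cartoon essentials:

```python
# Silence one hidden unit at a time and measure what breaks, after the
# fashion of neuroscientists removing pieces to see how others compensate.
for unit in range(W1.shape[1]):
    h = sigmoid(X @ W1)
    h[:, unit] = 0.0                       # the 'lesion'
    out_lesioned = sigmoid(h @ W2)
    damage = np.abs(out_lesioned - y).mean()
    print(f"unit {unit}: mean output error after lesion = {damage:.2f}")
# The printout is data, not explanation: single units rarely 'mean'
# anything on their own, and compensation muddies the causal story.
```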

Neural networks, artificial or natural, could very well be essential black boxes, systems that will always resist synoptic verbal explanation. Functional inscrutability in neuroscience is a pressing problem for obvious reasons. The capacity to explain how a given artificial neural network solves a given problem, meanwhile, remains crucial simply because “if you don’t know how it works, you don’t know how it will fail.” One of the widely acknowledged shortcomings of artificial neural networks is “that the machines are so tightly tuned to the data they are fed,” data that always falls woefully short of the variability and complexity of the real world. As Bornstein points out, “trained machines are exquisitely well suited to their environment—and ill-adapted to any other.” As AI creeps into more and more real world ecological niches, this ‘brittleness,’ as Bornstein terms it, becomes more of a real world concern. Interpretability means lives, in AI potentially no less than in neuroscience.

All this provokes Bornstein to pose the philosophical question: What is interpretability?

He references Marvin Minsky’s “suitcase words,” the legendary computer scientist’s analogy for many of the terms—such as “consciousness” or “emotion”—we use when we talk about our sentience and sapience. These words, he proposes, reflect the workings of many different underlying processes, which are locked inside the “suitcase.” As long as we keep investigating these words as stand-ins for more fundamental concepts, our insight will be limited by our language. In the study of intelligence, could interpretability itself be such a suitcase word?

Bornstein finds himself delivered to one of the fundamental issues in the philosophy of mind: the question of how to understand intentional idioms—Minsky’s ‘suitcase words.’ The only way to move forward on the issue of interpretability, it seems, is to solve nothing less than the cognitive (as opposed to the phenomenal) half of the hard problem. This is my bailiwick. The problem, here, is a theoretical one: the absence of any clear understanding of ‘interpretability.’ What is interpretation? Why do breakdowns in our ability to explain the operation of our AI tools happen, and why do they take the forms that they do? I think I can paint a spare yet comprehensive picture that answers these questions and places them in the context of a much more ancient form of interpreting neural networks. In fact, I think it can pop open a good number of Minsky’s suitcases and air out their empty insides.

Three Pound Brain regulars, I’m sure, have noticed a number of striking parallels between Bornstein’s characterization of the Inscrutability Problem and the picture of ‘post-intentional cognition’ I’ve been developing over the years. The apparently inscrutable algorithms derived via neural networks are nothing if not heuristic, cognitive systems that solve via cues correlated to target systems. Since they rely on cues (rather than all the information potentially available), their reliability entirely depends on their ecology, which is to say, how those cues correlate. If those cues do not correlate, then disaster strikes (as when the white truck trailer that killed Joshua Brown in his Tesla Model S cued nothing more than bright sky).

The primary problem posed by inscrutability, in other words, is the problem of misapplication. The worry that arises again and again isn’t simply that these systems are inscrutable, but that they are ecological, requiring contexts often possessing quirky features given quirks in the ‘environments’—data sets—used to train them. Inscrutability is a problem because it entails blindness to potential misapplications, plain and simple. Artificial neural network algorithms, you could say, possess adaptive problem-ecologies the same as all heuristic cognition. They solve, not by exhaustively taking into account the high dimensional totality of the information available, but rather by isolating cues—structures in the data set—which the trainer can only hope will generalize to the world.

Artificial neural networks are shallow information consumers, systems that systematically neglect the high dimensional mechanical intricacies of their environments, focusing instead on cues statistically correlated to those high-dimensional mechanical intricacies to solve them. They are ‘brittle,’ therefore, so far as those correlations fail to obtain.
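
A toy of my own construction (every number and label hypothetical) makes the brittleness vivid: train a linear classifier in an ecology where a shallow cue happens to track the target, then deploy it in one where that correlation has lapsed.

```python
# A sketch, not a real perception system: a perceptron leaning on a
# shallow cue (feature 0, call it 'brightness') that correlates with
# the label only in the training ecology.
import numpy as np

rng = np.random.default_rng(1)

def make_world(n, cue_tracks_label):
    labels = rng.integers(0, 2, n)  # 1 = 'obstacle ahead'
    cue = labels.astype(float) if cue_tracks_label \
        else rng.integers(0, 2, n).astype(float)
    X = np.column_stack([cue + 0.1 * rng.normal(size=n),
                         rng.normal(size=n)])  # second feature: pure noise
    return X, labels

X_train, y_train = make_world(1000, cue_tracks_label=True)   # stable ecology
X_test, y_test = make_world(1000, cue_tracks_label=False)    # correlation broken

# Classic perceptron training: a few passes is plenty for so easy a cue.
w, b = np.zeros(2), 0.0
for _ in range(5):
    for x, t in zip(X_train, y_train):
        pred = 1 if x @ w + b > 0 else 0
        w += (t - pred) * x
        b += (t - pred)

def accuracy(X, y):
    return np.mean(((X @ w + b) > 0).astype(int) == y)

print(f"training ecology: {accuracy(X_train, y_train):.0%}")  # near-perfect
print(f"shifted ecology:  {accuracy(X_test, y_test):.0%}")    # ≈ chance
```

Same machine, different world: nothing in the weights announces that the cue has come unmoored.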

But humans are also shallow information consumers, albeit far more sophisticated ones. Short the prostheses of science, we are also systems prone to neglect the high dimensional mechanical intricacies of our environments, focusing instead on cues statistically correlated to those high-dimensional mechanical intricacies. And we are also brittle to the extent those correlations fail to obtain. The shallow information nets we throw across our environments appear to be seamless, but this is just an illusion, as magicians so effortlessly remind us with their tricks.

This is as much the case for our linguistic attempts to make sense of ourselves and our devices as it is for other cognitive modes. Minsky’s ‘suitcase words’ are such because they themselves are the product of the same cue-correlative dependency. These are the granular posits we use to communicate cue-based cognition of mechanical black box systems such as ourselves, let alone others. They are also the granular posits we use to communicate cue-based cognition of pretty much any complicated system. To be a shallow information consumer is to live in a black box world.

The rub, of course, is that this is itself a black box fact, something tucked away in the oblivion of systematic neglect, duping us into assuming most everything is clear as glass. There’s nothing about correlative cognition, no distinct metacognitive feature, that identifies it as such. We have no way of knowing whether we’re misapplying our own onboard heuristics in advance (thus the value of the heuristics and biases research program), let alone our prosthetic ones! In fact, we’re only now coming to grips with the fractionate and heuristic nature of human cognition as it is.


Inscrutability is a problem, recall, because artificial neural networks are ‘brittle,’ bound upon fixed correlations between their cues and the systems they were tasked with solving, correlations that may or may not, given the complexity of the world, be the case. The amazing fact here is that artificial neural networks are inscrutable, the province of interpretation at best, because we ourselves are brittle, and for precisely the same basic reason: we are bound upon fixed correlations between our cues and the systems we’re tasked with solving. The contingent complexities of artificial neural networks place them, for the present, outside our capacity to solve—at least in a manner we can readily communicate.

The Inscrutability Problem, I contend, represents a prosthetic externalization of the very same problem of ‘brittleness’ we pose to ourselves, the almost unbelievable fact that we can explain the beginning of the Universe but not cognition—be it artificial or natural. Where the scientists and engineers are baffled by their creations, the philosophers and psychologists are baffled by themselves, forever misapplying correlative modes of cognition to the problem of correlative cognition, forever confusing mere cues for extraordinary, inexplicable orders of reality, forever lost in jungles of perpetually underdetermined interpretation. The Inscrutability Problem is the so-called ‘hard problem’ of intentionality, only in a context that is ‘glassy’ enough to moot the suggestion of ‘ontological irreducibility.’ The boundary faced by neuroscientists and AI engineers alike is mere complexity, not some eerie edge-of-nature-as-we-know-it. And thanks to science, this boundary is always moving. If it seems inexplicable or miraculous, it’s because you lack information: this seems a pretty safe bet as far as razors go.

‘Irreducibility’ is about to come crashing down. I think the more we study problem-ecologies and heuristic solution strategies the more we will be able to categorize the mechanics distinguishing different species of each, and our bestiary of different correlative cognitions will gradually, if laboriously, grow. I also think that artificial neural networks will play a crucial role in that process, eventually providing ways to model things like intentional cognition. If nature has taught us anything over the past five centuries it is that the systematicities, the patterns, are there—we need only find the theoretical and technical eyes required to behold them. And perhaps, when all is said and done, we can ask our models to explain themselves.


by rsbakker


One of my big goals with Three Pound Brain has always been to establish a ‘crossroads between incompatible empires,’ to occupy the uncomfortable in-between of pulp, science, and philosophy–a kind of ‘unholy consult,’ you might even say. This is where the gears grind. I’ve entertained some grave doubts over the years, and I still do, but posts like these are nothing if not heartening. The hope is that I can slowly gain the commercial and academic clout needed to awaken mainstream culture to this grinding, and to the trouble it portends.

I keep planning to write a review of Steven Shaviro’s wonderful Discognition, wherein he devotes an entire chapter to Neuropath and absolutely nails what I was trying to accomplish. It’s downright spooky, but really it just goes to show what those of us who periodically draw water from his Pinocchio Theory blog already knew. For anyone wishing to situate SF relative to consciousness research, I can’t think of a more clear-eyed, impeccably written place to begin. Not only does Shaviro know his stuff, he knows how to communicate it.

Robert Lamb considers “The Great Ordeal’s Outside Context Problem” over at Stuff to Blow Your Mind, where he asks some hard questions of the Tekne, and Kellhus’s understanding of it. SPOILER ALERT, though. Big time.

Dan Mellamphy and Nandita Biswas-Mellamphy have just released Digital Dionysus: Nietzsche and the Network-Centric Condition, a collection of various papers exploring the relevance of Nietzsche’s work to our technological age, including “Outing the It that Thinks: The Coming Collapse of an Intellectual Ecosystem,” by yours truly. The great thing about this collection is that it reads Nietzsche as a prophet of the now rather than as some post-structuralist shill. I wrote the paper some time ago, at a point when I was still climbing back into philosophy after a ten year hiatus, but I still stand by it and its autobiographical deconstruction of the Western intellectual tradition.

Dismiss Dis

by rsbakker

I came across this quote in “The Hard Problem of Content: Solved (Long Ago),” a critique of Hutto and Myin’s ‘radical enactivism’ by Marcin Milkowski:

Naïve semantic nihilism is not a philosophical position that deserves a serious debate because it would imply that expressing any position, including semantic nihilism, is pointless. Although there might still be defenders of such a position, it undermines the very idea of a philosophical debate, as long as the debate is supposed to be based on rational argumentation. In rational argumentation, one is forced to accept a sound argument, and soundness implies the truth of the premises and the validity of the argument. Just because these are universal standards for any rational debate, undermining the notion of truth can be detrimental; there would be no way of deciding between opposing positions besides rhetoric. Hence, it is a minimal requirement for rational argumentation in philosophy; one has to assume that one’s statements can be truth-bearers. If they cannot have any truth-value, then it’s no longer philosophy. (74)

These are the kind of horrible arguments that I take as the principal foe of anyone who thinks cognitive science needs to move beyond traditional philosophy to discover its natural scientific bases. I can remember having a great number of arguments long before I ever ‘assumed my statements were truth-bearers.’ In fact, I would wager that the vast majority of arguments are made by people possessing no assumption that their statements are ‘truth-bearers’ (whatever this means). What Milkowski would say, of course, is that we all have these assumptions nonetheless, only implicitly. This is because Milkowski has a theory of argumentation and truth, a story of what is really going on behind the scenes of ‘truth talk.’

The semantic nihilist, such as myself, famously disagrees with this theory. We think truth-talk actually amounts to something quite different, and that once enough cognitive scientists can be persuaded to close the ancient cover of Milkowski’s book (holding their breath for all the dust and mold), a great number of spurious conundrums could be swept from the worktable, freeing up space for more useful questions. What Milkowski seems to be arguing here is that… hmm… Good question! Either he’s claiming the semantic nihilist cannot argue otherwise without contradicting Milkowski’s own theory, though contesting that theory is the whole point of arguing otherwise. Or he’s claiming the semantic nihilist cannot argue against his theory of truth because, well, his theory of truth is true. Either he’s saying something trivial, or he’s begging the question! Obviously the latter, given the issue between him and the semantic nihilist is precisely the nature of truth talk.

For those interested in a more full-blooded account of this problem, you can check out “Back to Square One: Towards a Post-intentional Future” over at Scientia Salon. Ramsey also tucks this strategy into bed in his excellent article on Eliminative Materialism over at the Stanford Encyclopedia of Philosophy. And Stephen Turner, of course, has written entire books (such as Explaining the Normative) on this peculiar bug in our intellectual OS. But I think it’s high time to put an end to what has to be one of the more egregious forms of intellectual laziness one finds in philosophy of mind circles–one designed, no less, to shut down the very possibility of an important debate. I think I’m right. Milkowski thinks he’s right. I’m willing to debate the relative merits of our theories. He has no time for mine, because his theory is so super-true that merely disagreeing renders me incoherent.


Milkowski does go on to provide what I think is a credible counter-argument to eliminativism, what I generally refer to as the ‘abductive argument’ here. This is the argument that separates my own critical eliminativism (I’m thinking of terming my view ‘criticalism’–any thoughts?) from the traditional eliminativisms espoused by Feyerabend, the Churchlands, Stich, Ramsey and others. I actually think my account possesses the parsimony everyone concedes to eliminativism without falling mute on the question of what things like ‘truth talk’ amount to. In short, I think I have a stronger abductive case.

But it’s the tu quoque (‘performative contradiction’) style arguments that possess that peculiar combination of incoherence and intuitive appeal that renders philosophical blind alleys so pernicious. This is why I would like to solicit recently published examples of these kinds of dismissals in various domains for a running ‘Dismiss Dis’ series. Send me a dismissal like this, and I will dis…

PS: For those interested in my own take on Hutto and Myin’s radical enactivism, check out “Just Plain Crazy Enactive Cognition,” where I actually agree with Milkowski that they are forced to embrace semantic nihilism–or more specifically, a version of my criticalism–by instabilities in their position.


AI and the Coming Cognitive Ecological Collapse: A Reply to David Krakauer

by rsbakker


Thanks to Dirk and his tireless linking generosity, I caught “Will AI Harm Us?” in Nautilus by David Krakauer, the President of the Santa Fe Institute, on the potential dangers posed by AI on this side of the Singularity. According to Krakauer, the problem lies in the fact that AI’s are competitive as opposed to complementary cognitive artifacts of the kind we have enjoyed until now. Complementary cognitive artifacts, devices such as everything from mnemonics to astrolabes to mathematical notations, allow us to pull up the cognitive ladder behind us in some way—to somehow do without the tool. “In almost every use of an ancient cognitive artifact,” he writes, “after repeated practice and training, the artifact itself could be set aside and its mental simulacrum deployed in its place.”

Competitive cognitive artifacts, however, things like calculators, GPS’s, and pretty much anything AI-ish, don’t let us kick away the ladder. We lose the artifact, and we lose the ability. As Krakauer writes:

In the case of competitive artifacts, when we are deprived of their use, we are no better than when we started. They are not coaches and teachers—they are serfs. We have created an artificial serf economy where incremental and competitive artificial intelligence both amplifies our productivity and threatens to diminish organic and complementary artificial intelligence…

So where complementary cognitive artifacts teach us how to fish, competitive cognitive artifacts simply deliver the fish, rendering us dependent. Krakauer’s complaint against AI, in other words, is the same as Plato’s complaint against writing, and, I think, it fares just as well argumentatively. As Socrates famously claims in The Phaedrus,

For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.

The problem with writing is that it is competitive precisely in Krakauer’s sense: it’s a ladder we cannot kick away. What Plato could not foresee, of course, was the way writing would fundamentally transform human cognitive ecology. He was a relic of the preliterate age, just as Krakauer (like us) is a relic of the pre-AI age. The problem for Krakauer, then, is that the distinction between complementary and competitive cognitive artifacts—the difference between things like mnemonics and things like writing—possesses no reliable evaluative force. All tools involve trade-offs. Since Krakauer has no way of knowing how AI will transform our cognitive ecology, he has no way of evaluating the kinds of trade-offs it will force upon us.

This is the problem with all ‘excess dependency arguments’ against technology, I think: they have no convincing way of assessing the kind of cognitive ecology that will result, aside from the fact that it involves dependencies. No one likes dependencies, ergo…

But I like to think I’ve figured the naturalistic riddle of cognition out,* and as a result I think I can make a pretty compelling case why we should nevertheless accept that AI poses a very grave threat this side of the Singularity. The problem, in a nutshell, is that we are shallow information consumers, evolved to generate as much gene-promoting behaviour out of as little environmental information as possible. Human cognition relies on simple cues to draw very complex conclusions simply because it could always rely on adaptive correlations between those cues and the systems requiring solution: it could always depend on what might be called cognitive ecological stability.

Since our growing cognitive dependency on our technology always involves trade-offs, it should remain an important concern (as it clearly seems to be, given the endless stream of works devoted to the downside of this or that technology in this or that context). The dependency we really need to worry about, however, is our cognitive biological dependency on ancestral environmental correlations, simply because we have good reason to believe those cognitive ecologies will very soon cease to exist. Human cognition is thoroughly heuristic, which is to say, thoroughly dependent on cues reliably correlated to whatever environmental system requires solution. AI constitutes a particular threat because no form of human cognition is more heuristic, more cue dependent, than social cognition. Humans are very easily duped into anthropomorphizing given the barest cues, let alone processes possessing AI. It pays to remember the simplicity of the bots Ashley Madison used to gull male subscribers into thinking they were getting female nibbles.
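
To see how little machinery suffices, consider this deliberately crude sketch (wholly hypothetical; by most accounts the actual bots were scarcely more sophisticated): a handful of keyword triggers, canned come-ons, and no comprehension anywhere in the loop.

```python
# A cartoon 'engager' bot: no model of the user, no model of anything,
# just cues calibrated to trip social cognition. Entirely hypothetical.
import random

OPENERS = [
    "hey you :) what are you up to?",
    "saw your profile... I like your style",
    "you still there? I was hoping we could chat",
]

def bot_reply(user_message: str) -> str:
    text = user_message.lower()
    if "?" in text:
        return "haha maybe... what do you think? ;)"
    if any(greeting in text for greeting in ("hi", "hey", "hello")):
        return random.choice(OPENERS)
    return "lol tell me more"

print(bot_reply("Hey! Are you real?"))   # the question answers itself
```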

And herein lies the rub: the environmental proliferation of AI means the fundamental transformation of our ancestral sociocognitive ecologies, from one where the cues we encounter are reliably correlated to systems we can in fact solve—namely, each other—into one where the cues we encounter are correlated to systems that cannot be fathomed, and the only soul solved is the consumer’s.


* Bakker, R. Scott. “On Alien Philosophy,” Journal of Consciousness Studies, forthcoming.

Myth as Meth

by rsbakker

What is the lesson that Tolkien teaches us with Middle-earth? The grand moral, I think, is that the illusion of a world can be so easily cued. Tolkien reveals that meaning is cheap, easy to conjure, easy to believe, so long as we sit in our assigned seats. This is the way, at least, I thematically approach my own world-building. Like a form of cave-painting.

The idea here is to look at culture as a meaning machine, where ‘meaning’ is understood not as content, but in a post-intentional sense: various static and dynamic systems cuing various ‘folk’ forms of human cognition. Think of the wonder of the ‘artists’ in Chauvet, the amazement of discovering how to cue the cognition of worlds upon walls using only charcoal. Imagine that first hand, that first brain, tracking that reflex within itself, simply drawing a blackened finger down the wall.

[Image: the Chauvet horse panel]

Traditional accounts, of course, would emphasize the symbolic or representational significance of events such as Chauvet, thereby dragging the question of the genesis of human culture into the realm of endless philosophical disputation. On a post-intentional view, however, what Chauvet vividly demonstrates is how human cognition can be easily triggered out of school. Human cognition is so heuristic, in fact, that it has little difficulty simulating those cues once they have been discovered. Since human cognition also turns out to be wildly opportunistic, the endless socio-practical gerrymandering characterizing culture was all but inevitable. Where traditional views of the ‘human revolution’ focus on utterly mysterious modes of symbolic transmission and elaboration, the present account focuses on the processes of cue isolation and cognitive adaptation. What are isolated are material/behavioural means of simulating cues belonging to ancestral forms of cognition. What is adapted is the cognitive system so cued: the cave paintings at Chauvet amount to a socio-cognitive adaptation of visual cognition, a way to use visual cognitive cues ‘out of school’ to attenuate behaviour. Though meaning, understood intentionally, remains an important explanandum in this approach, ‘meaning’ understood post-intentionally simply refers to the isolation and adaptation of cue-based cognitive systems to achieve some systematic behavioural effect. The basic processes involved are no more mysterious than those underwriting camouflage in nature.*

A post-intentional theory of meaning focuses on the continuity of semantic practices and nature, and views any theoretical perspective entailing the discontinuity of those practices and nature as a spurious artifact of the application of heuristic modes of cognition to theoretical issues. A post-intentional theory of meaning, in other words, views culture as a natural phenomenon, and not some arcane artifact of something empirically inexplicable. Signification is wholly material on this account, with all the messiness that comes with it.

Cognitive systems optimize effectiveness by reaching out only as far into nature as they need to. If they can solve distal systems via proximal signals possessing reliable systematic relationships to those systems, they will do so. Humans, like all other species possessing nervous systems, are shallow information consumers in what might be called deep information environments.

Consider anthropomorphism, the reflexive application of radically heuristic socio-cognitive capacities dedicated to solving our fellow humans to nonhuman species and nature more generally. When we run afoul anthropomorphism we ‘misattribute’ folk posits adapted to human problem-solving to nonhuman processes. As misapplications, anthropomorphisms tell us nothing about the systems they take as their putative targets. One does not solve a drought by making offerings to gods of rain. This is what makes anthropomorphic worldviews ‘fantastic’: the fact that they tell us very little, if anything, about the very nature they purport to describe and explain.

Now this, on the face of things, should prove maladaptive, since it amounts to squandering tremendous resources on behaviour effecting solutions to problems that do not exist. But of course, as is the case with so much human behaviour, it likely possesses ulterior functions serving the interests of individuals in ways utterly inaccessible to those individuals, at least in ancestral contexts.

The cognitive sophistication required to solve those deep information environments effectively rendered them inscrutable, impenetrable black boxes, short the development of science. What we painted across the sides of those boxes, then, could only be fixed by our basic cognitive capacities and by whatever ulterior function they happened to discharge. Given the limits of human cognition, our ancestors could report whatever they wanted about the greater world (their deep information environments), so long as those reports came cheap and/or discharged some kind of implicit function. They enjoyed what might be called, deep discursive impunity. All they would need is a capacity to identify cues belonging to social cognition in the natural world—to see, for instance, retribution in the random walk of weather—and the ulterior exploitation of anthropomorphism could get underway.

Given the ancestral inaccessibility of deep information, and given the evolutionary advantages of social coordination and cohesion, particularly in the context of violent intergroup competition, it becomes easy to see how the quasi-cognition of an otherwise impenetrable nature could become a resource. When veridicality has no impact one way or another, social and individual facilitation alone determines the selection of the mechanisms responsible. When anything can be believed, to revert to folk idioms, then only those beliefs that deliver matter. This, then, explains why different folk accounts of the greater world possess deep structural similarities despite their wild diversity. Their reliance on socio-cognitive systems assures deep commonalities in form, as do the common ulterior functions provided. The insolubility of the systems targeted, on the other hand, assures any answer meeting the above constraints will be as effective as any other.
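
The selection logic here is simple enough to simulate. Below is a toy of my own devising (all numbers hypothetical): ‘reports’ about an inscrutable world earn a social payoff but no veridicality payoff, so facilitation is selected and accuracy merely hitchhikes.

```python
# A toy replicator model: reports reproduce in proportion to the social
# cohesion they facilitate; their (hidden) accuracy never enters in.
import random

random.seed(3)

# Each report pairs an accuracy (invisible to ancestral selection:
# nothing checks claims about droughts or the deep past) with a
# social-facilitation value.
population = [{"accuracy": random.random(), "cohesion": random.random()}
              for _ in range(200)]

for generation in range(100):
    weights = [r["cohesion"] for r in population]
    population = random.choices(population, weights=weights, k=200)

mean = lambda key: sum(r[key] for r in population) / len(population)
print(f"mean cohesion: {mean('cohesion'):.2f}")  # driven toward the maximum
print(f"mean accuracy: {mean('accuracy'):.2f}")  # wherever the winners sat
```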

Given the evolutionary provenance of this situation, we are now in a position to see how accurate deep information can function as a form of cognitive pollution, something alien that disrupts and degrades ancestrally stable, shallow information ecologies. Strangely enough, what allowed our ancestors to report the nature of nature was the out-and-out inscrutability of nature, the absence of any (deep) information to the contrary—and the discursive impunity this provides. Anthropomorphic quasi-cognition requires deep information neglect. The greater our scientifically mediated sensitivity to deep information becomes, the less tenable anthropomorphic quasi-cognition becomes, the more fantastic folk worlds become. The worlds arising out of our evolutionary heritage find themselves relegated to fairy tales.

Fantasy worlds, then, can be seen as an ontological analogue to the cave paintings at Chauvet. They cue ancestral modes of cognition, simulating the kinds of worlds our ancestors reflexively reported, folk worlds rife with those posits they used to successfully solve one another in a wide variety of practical contexts, meaningful worlds possessing the kinds of anthropomorphic ontologies we find in myths and religions.

With the collapse of the cognitive ecology that made these worlds possible, comes the ineffectiveness of the tools our ancestors used to navigate them. We now find ourselves in deep information worlds, environments not only rife with information our ancestors had neglected, but also crammed with environments engineered to manipulate shallow information cues. We now find ourselves in a world overrun with crash spaces, regions where our ancestral tools consistently fail, and cheat spaces, regions where they are exploited for commercial gain.

This is a rather remarkable fact, even if it becomes entirely obvious upon reflection. Humans possess ideal cognitive ecologies, solve spaces, environments rewarding their capacities, just as humans possess crash spaces, environments punishing their capacities. This is the sense in which fantasy worlds can be seen as a compensatory mechanism, a kind of cognitive eco-preserve, a way to inhabit more effortless shallow information worlds, pseudo-solution spaces, hypothetical environments serving up largely unambiguous cues to generally reliable cognitive capacities. And like biological eco-preserves, perhaps they serve an important function. As we saw with anthropomorphism above, pseudo-solution spaces can be solvers (as opposed to crashers) in their own right—culture is nothing if not a testimony to this.

But fantasy worlds are also the playground of blind brains. The more we learn about ourselves, the more we learn how to cue different cognitive capacities out of school—how to cheat ourselves for good or ill. Our shallow information nature is presently the focus of a vast, industrial research program, one gradually providing the information, techniques, and technology required to utterly pre-empt our ancestral ecologies, which is to say, to perfectly simulate ‘reality.’ The reprieve from the cognitive pollution of actual environments itself potentially amounts to more cognitive pollution. We are, in some respect at least, a migratory species, one prone to gravitate toward greener pastures. Is the migration between realities any less inevitable than the migration across lands?

Via the direct and indirect deformation of existing socio-cognitive ecologies, deep information both drives the demand for and enables the high-dimensional cuing of fantastic cognition. In our day and age, a hunger for meaning is at once a predisposition to seek the fantastic. We should expect that hunger to explode with the pace of technological change. For all the Big Data ballyhoo, it pays to remember that we are bound up in an auto-adaptive macro-social system that is premised upon solving us, upon mastering our cognitive reflexes in ways either invisible to us or pleasing to us. We are presently living through the age where it succeeds.

Fantasy is zombie scripture, the place where our ancient assumptions lurch in the semblance of life. The fantasy writer is the voodoo magician, imbuing dead meaning with fictional presence. This resurrection can either facilitate our relation to the actual world, or it can pre-empt it. Science and technology are the problem here. The mastery of deep information environments enables ever greater degrees of shallow information capture. As our zombie natures are better understood, the more effectively our reward systems are tuned, the deeper our descent into this or that variety of fantasy becomes. This is the dystopic image of Akratic society, a civilization ever more divided between deep and shallow information consumers, between those managing the mechanisms, and those captured in some kind of semantic cheat space.