Visions of the Semantic Apocalypse: A Critical Review of Yuval Noah Harari’s Homo Deus
by rsbakker
“Studying history aims to loosen the grip of the past,” Yuval Noah Harari writes. “It enables us to turn our heads this way and that, and to begin to notice possibilities that our ancestors could not imagine, or didn’t want us to imagine” (59). Thus does the bestselling author of Sapiens: A Brief History of Humankind rationalize his thoroughly historical approach to the question of our technological future in his fascinating follow-up, Homo Deus: A Brief History of Tomorrow. And so does he identify himself as a humanist, committed to freeing us from what Kant would have called ‘our tutelary natures.’ Like Kant, Harari believes knowledge will set us free.
Although by the end of the book it becomes difficult to understand what ‘free’ might mean here.
As Harari himself admits, “once technology enables us to re-engineer human minds, Homo sapiens will disappear, human history will come to an end and a completely new process will begin, which people like you and me cannot comprehend” (46). Now if you’re interested in mapping the conceptual boundaries of comprehending the posthuman, I heartily recommend David Roden’s skeptical tour de force, Posthuman Life: Philosophy at the Edge of the Human. Homo Deus, on the other hand, is primarily a book chronicling the rise and fall of contemporary humanism against the backdrop of apparent ‘progress.’ The most glaring question, of course, is whether Harari’s academic humanism possesses the resources required to diagnose the problems posed by the collapse of popular humanism. This challenge—the problem of using obsolescent vocabularies to theorize, not only the obsolescence of those vocabularies, but the successor vocabularies to come—provides an instructive frame through which to understand the successes and failures of this ambitious and fascinating book.
How good is Homo Deus? Well, for years people have been asking me for a lay point of entry for the themes explored here on Three Pound Brain and in my novels, and I’ve always been at a loss. No longer. Anyone surfing for reviews of the book is certain to find individuals carping about Harari not possessing the expertise to comment on x or y, but these critics never get around to explaining how any human could master all the silos involved in such an issue (while remaining accessible to a general audience, no less). Such criticisms amount to insisting that no one dare interrogate what could be the greatest challenge ever to confront humanity. In addition to erudition, Harari has the courage to concede ugly possibilities, the sensitivity to grasp complexities (as well as the limits they pose), and the creativity to derive something communicable. Even though I think his residual humanism conceals the true profundity of the disaster awaiting us, he glimpses more than enough to alert millions of readers to the shape of the Semantic Apocalypse. People need to know human progress likely has a horizon, a limit, that doesn’t involve environmental catastrophe or creating some AI God.
The problem is far more insidious and retail than most yet realize.
The grand tale Harari tells is a vaguely Western Marxist one, wherein culture (following Lukács) is seen as a primary enabler of relations of power, a fundamental component of the ‘social apriori.’ The primary narrative conceit of such approaches belongs to the ancient Greeks: “[T]he rise of humanism also contains the seeds of its downfall,” Harari writes. “While the attempt to upgrade humans into gods takes humanism to its logical conclusion, it simultaneously exposes humanism’s inherent flaws” (65). For all its power, humanism possesses intrinsic flaws, blindnesses and vulnerabilities, that will eventually lead it to ruin. In a sense, Harari is offering us a ‘big history’ version of negative dialectic, attempting to show how the internal logic of humanism runs afoul of the very power it enables.
But that logic is also the very logic animating Harari’s encyclopedic account. For all its syncretic innovations, Homo Deus uses the vocabularies of academic or theoretical humanism to chronicle the rise and fall of popular or practical humanism. In this sense, the difference between Harari’s approach to the problem of the future and my own could not be more pronounced. On my account, academic humanism, far from enjoying critical or analytical immunity, is best seen as a crumbling bastion of pre-scientific belief, the last gasp of traditional apologia, the cognitive enterprise most directly imperilled by the rising technological tide, while we can expect popular humanism to linger for some time to come (if not indefinitely).
Homo Deus, in fact, exemplifies the quandary presently confronting humanists such as Harari, how the ‘creeping delegitimization’ of their theoretical vocabularies is slowly robbing them of any credible discursive voice. Harari sees the problem, acknowledging that “[w]e won’t be able to grasp the full implication of novel technologies such as artificial intelligence if we don’t know what minds are” (107). But the fact remains that “science knows surprisingly little about minds and consciousness” (107). We presently have no consensus-commanding, natural account of thought and experience—in fact, we can’t even agree on how best to formulate semantic and phenomenal explananda.
Humanity as yet lacks any workable, thoroughly naturalistic, theory of meaning or experience. For Harari this means the bastion of academic humanism, though besieged, remains intact, at least enough for him to advance his visions of the future. Despite the perplexity and controversies occasioned by our traditional vocabularies, they remain the only game in town, the very foundation of countless cognitive activities. “[T]he whole edifice of modern politics and ethics is built upon subjective experiences,” Harari writes, “and few ethical dilemmas can be solved by referring strictly to brain activities” (116). Even though his posits lie nowhere in the natural world, they nevertheless remain subjective realities, the necessary condition of solving countless problems. “If any scientist wants to argue that subjective experiences are irrelevant,” Harari writes, “their challenge is to explain why torture or rape are wrong without reference to any subjective experience” (116).
This is the classic humanistic challenge posed to naturalistic accounts, of course, the demand that they discharge the specialized functions of intentional cognition the same way intentional cognition does. This demand amounts to little more than a canard, of course, once we appreciate the heuristic nature of intentional cognition. The challenge intentional cognition poses to natural cognition is to explain, not replicate, its structure and dynamics. We clearly evolved our intentional cognitive capacities, after all, to solve problems natural cognition could not reliably solve. This combination of power, economy, and specificity is the very thing that a genuinely naturalistic theory of meaning (such as my own) must explain.
“… fiction might thereby become the most potent force on earth, surpassing even wayward asteroids and natural selection. Hence if we want to understand our future, cracking genomes and crunching numbers is hardly enough. We must decipher the fictions that give meaning to the world.”
So moving forward it is important to understand how his theoretical approach elides the very possibility of a genuinely post-intentional future. Because he has no natural theory of meaning, he has no choice but to take the theoretical adequacy of his intentional idioms for granted. But if his intentional idioms possess the resources he requires to theorize the future, they must somehow remain out of play; his discursive ‘subject position’ must possess some kind of immunity to the scientific tsunami climbing our horizons. His very choice of tools limits the radicality of the story he tells. No matter how profound, how encompassing, the transformational deluge, Harari must somehow remain dry upon his theoretical ark. And this, as we shall see, is what ultimately swamps his conclusions.
But if the Hard Problem exempts his theoretical brand of intentionality, one might ask why it doesn’t exempt all intentionality from scientific delegitimation. What makes the scientific knowledge of nature so tremendously disruptive to humanity is the fact that human nature is, when all is said and done, just more nature. Conceding general exceptionalism, the thesis that humans possess something miraculous distinguishing them from nature more generally, would undermine the very premise of his project.
Without any way out of this bind, Harari fudges, basically. He remains silent on his own intentional (even humanistic) theoretical commitments, while attacking exceptionalism by expanding the franchise of meaning and consciousness to include animals: whatever intentional phenomena consist in, they are ultimately natural to the extent that animals are natural.
But now the problem has shifted. If humans dwell on a continuum with nature more generally, then what explains the Anthropocene, our boggling dominion of the earth? Why do humans stand so drastically apart from nature? The capacity that most distinguishes humans from their nonhuman kin, Harari claims (in line with contemporary theories), is the capacity to cooperate. He writes:
“the crucial factor in our conquest of the world was our ability to connect many humans to one another. Humans nowadays completely dominate the planet not because the individual human is far more nimble-fingered than the individual chimp or wolf, but because Homo sapiens is the only species on earth capable of cooperating flexibly in large numbers.” 131
He poses a ‘shared fictions’ theory of mass social coordination (unfortunately, he doesn’t engage research on groupishness, which would have provided him with some useful, naturalistic tools, I think). He posits an intermediate level of existence between the objective and subjective, the ‘intersubjective,’ consisting of our shared beliefs in imaginary orders, which serve to distribute authority and organize our societies. “Sapiens rule the world,” he writes, “because only they can weave an intersubjective web of meaning; a web of laws, forces, entities and places that exist purely in their common imagination” (149). This ‘intersubjective web’ provides him with a theoretical level of description he thinks crucial to understanding our troubled cultural future.
He continues:
“During the twenty-first century the border between history and biology is likely to blur not because we will discover biological explanations for historical events, but rather because ideological fictions will rewrite DNA strands; political and economic interests will redesign the climate; and the geography of mountains and rivers will give way to cyberspace. As human fictions are translated into genetic and electronic codes, the intersubjective reality will swallow up the objective reality and biology will merge with history. In the twenty-first century fiction might thereby become the most potent force on earth, surpassing even wayward asteroids and natural selection. Hence if we want to understand our future, cracking genomes and crunching numbers is hardly enough. We must decipher the fictions that give meaning to the world.” 151
The way Harari sees it, ideology, far from being relegated to the prescientific theoretical midden, is set to become all powerful, a consumer of worlds. This launches his extensive intellectual history of humanity, beginning with the algorithmic advantages afforded by numeracy, literacy, and currency, how these “broke the data-processing limitations of the human brain” (158). Where our hunter-gathering ancestors could at best coordinate small groups, “[w]riting and money made it possible to start collecting taxes from hundreds of thousands of people, to organise complex bureaucracies and to establish vast kingdoms” (158).
Harari then turns to the question of how science fits in with this view of fictions, the nature of the ‘odd couple,’ as he puts it:
“Modern science certainly changed the rules of the game, but it did not simply replace myths with facts. Myths continue to dominate humankind. Science only makes these myths stronger. Instead of destroying the intersubjective reality, science will enable it to control the objective and subjective realities more completely than ever before.” 179
Science is what renders objective reality compliant to human desire. Storytelling is what renders individual human desires compliant to collective human expectations, which is to say, intersubjective reality. Harari understands that the relationship between science and religious ideology is not one of straightforward antagonism: “science always needs religious assistance in order to create viable human institutions,” he writes. “Scientists study how the world functions, but there is no scientific method for determining how humans ought to behave” (188). Though science has plenty of resources for answering means-type questions—what you ought to do to lose weight, for instance—it lacks resources to fix the ends that rationalize those means. Science, Harari argues, requires religion to the extent that it cannot ground the all-important fictions enabling human cooperation (197).
Insofar as science is a cooperative, human enterprise, it can only destroy one form of meaning on the back of some other meaning. By revealing the anthropomorphism underwriting our traditional, religious accounts of the natural world, science essentially ‘killed God’—which is to say, removed any divine constraint on our actions or aspirations. “The cosmic plan gave meaning to human life, but also restricted human power” (199). Like stage-actors, we had a plan, but our role was fixed. Unfixing that role, killing God, made meaning into something each of us has to find for ourselves. Harari writes:
“Since there is no script, and since humans fulfill no role in any great drama, terrible things might befall us and no power will come to save us, or give meaning to our suffering. There won’t be a happy ending or a bad ending, or any ending at all. Things just happen, one after the other. The modern world does not believe in purpose, only in cause. If modernity has a motto, it is ‘shit happens.’” 200
The absence of a script, however, means that anything goes; we can play any role we want to. With the modern freedom from cosmic constraint comes postmodern anomie.
“The modern deal thus offers humans an enormous temptation, coupled with a colossal threat. Omnipotence is in front of us, almost within our reach, but below us yawns the abyss of complete nothingness. On the practical level, modern life consists of a constant pursuit of power within a universe devoid of meaning.” 201
Or to give it the Adornian spin it receives here on Three Pound Brain: the madness of a society that has rendered means, knowledge and capital, its primary end. Thus the modern obsession with the accumulation of the power to accumulate. And thus the Faustian nature of our present predicament (though Harari, curiously, never references Faust), the fact that “[w]e think we are smart enough to enjoy the full benefits of the modern deal without paying the price” (201). Even though physical resources such as material and energy are finite, no such limit pertains to knowledge. This is why “[t]he greatest scientific discovery was the discovery of ignorance” (212): it spurred the development of systematic inquiry, and therefore the accumulation of knowledge, and therefore the accumulation of power, which, Harari argues, cuts against objective or cosmic meaning. The question is simply whether we can hope to sustain this process—defer payment—indefinitely.
“Modernity is a deal,” he writes, and for all its apparent complexities, it is very straightforward: “The entire contract can be summarised in a single phrase: humans agree to give up meaning in exchange for power” (199). For me the best way of thinking through this process of exchanging meaning for power is in terms of what Weber called disenchantment: the very science that dispels our anthropomorphic fantasy worlds is the science that delivers technological power over the real world. This real world power is what drives traditional delegitimation: even believers acknowledge the vast bulk of the scientific worldview, as do the courts and (ideally at least) all governing institutions outside religion. Science is a recursive institutional ratchet (‘self-correcting’), leveraging the capacity to leverage ever more capacity. Now, after centuries of sheltering behind walls of complexity, human nature finds itself at the intersection of multiple domains of scientific inquiry. Since we’re nothing special, just more nature, we should expect our burgeoning technological power over ourselves to increasingly delegitimate traditional discourses.
Humanism, on this account, amounts to an adaptation to the ways science transformed our ancestral ‘neglect structure,’ the landscape of ‘unknown unknowns’ confronting our prehistorical forebears. Our social instrumentalization of natural environments—our inclination to anthropomorphize the cosmos—is the product of our ancestral inability to intuit the actual nature of those environments. Information beyond the pale of human access makes no difference to human cognition. Cosmic meaning requires that the cosmos remain a black box: the more transparent science rendered that box, the more our rationales retreated to the black box of ourselves. The subjectivization of authority turns on how intentional cognition (our capacity to cognize authority) requires the absence of natural accounts to discharge ancestral functions. Humanism isn’t so much a grand revolution in thought as the result of the human remaining the last scientifically inscrutable domain standing. The rationalizations had to land somewhere. Since human meaning likewise requires that the human remain a black box, the vast industrial research enterprise presently dedicated to solving our nature does not bode well.
But this approach, economical as it is, isn’t available to Harari since he needs some enchantment to get his theoretical apparatus off the ground. As the necessary condition for human cooperation, meaning has to be efficacious. The ‘Humanist Revolution,’ as Harari sees it, consists in the migration of cooperative efficacy (authority) from the cosmic to the human. “This is the primary commandment humanism has given us: create meaning for a meaningless world” (221). Rather than scripture, human experience becomes the metric for what is right or wrong, and the universe, once the canvas of the priest, is conceded to the scientist. Harari writes:
“As the source of meaning and authority was relocated from the sky to human feelings, the nature of the entire cosmos changed. The exterior universe—hitherto teeming with gods, muses, fairies and ghouls—became empty space. The interior world—hitherto an insignificant enclave of crude passions—became deep and rich beyond measure.” 234
This re-sourcing of meaning, Harari insists, is true whether or not one still believes in some omnipotent God, insofar as all the salient anchors of that belief lie within the believer, rather than elsewhere. God may still be ‘cosmic,’ but he now dwells beyond the canvas of nature, somewhere in the occluded frame, a place where only religious experience can access Him.
Man becomes ‘man the meaning maker,’ the trope that now utterly dominates contemporary culture:
“Exactly the same lesson is learned by Captain Kirk and Captain Jean-Luc Picard as they travel the galaxy in the starship Enterprise, by Huckleberry Finn and Jim as they sail down the Mississippi, by Wyatt and Billy as they ride their Harley-Davidsons in Easy Rider, and by countless other characters in myriad other road movies who leave their home town in Pennsylvania (or perhaps New South Wales), travel in an old convertible (or perhaps a bus), pass through various life-changing experiences, get in touch with themselves, talk about their feelings, and eventually reach San Francisco (or perhaps Alice Springs) as better and wiser individuals.” 241
Not only is experience the new scripture, it is a scripture that is being continually revised and rewritten, a meaning that arises out of the process of lived life (yet somehow always managing to conserve the status quo). In story after story, the protagonist must find some ‘individual’ way to derive their own personal meaning out of an apparently meaningless world. This is a primary philosophical motivation behind The Second Apocalypse, the reason why I think epic fantasy provides such an ideal narrative vehicle for the critique of modernity and meaning. Fantasy worlds are fantastic, essentially fictional, because they assert the objectivity of what we now (implicitly or explicitly) acknowledge to be anthropomorphic projections. The idea has always been to invert the modernist paradigm Harari sketches above, to follow a meaningless character through a meaningful world, using Kellhus to recapitulate the very dilemma Harari sees confronting us now:
“What then, will happen once we realize that customers and voters never make free choices, and once we have the technology to calculate, design, or outsmart their feelings? If the whole universe is pegged to the human experience, what will happen once the human experience becomes just another designable product, no different in essence from any other item in the supermarket?” 277
And so Harari segues to the future and the question of the ultimate fate of human meaning; this is where I find his steadfast refusal to entertain humanistic conceit most impressive. One need not ponder ‘designer experiences’ for long, I think, to get a sense of the fundamental rupture with the past they represent. These once speculative issues are becoming ongoing practical concerns: “These are not just hypotheses or philosophical speculations,” simply because ‘algorithmic man’ is becoming a technological reality (284). Harari provides a whirlwind tour of unnerving experiments clearly implying trouble for our intuitions, a discussion that transitions into a consideration of the ways we can already mechanically attenuate our experiences. A good number of the examples he adduces have been considered here, all of them underscoring the same, inescapable moral: “Free will exists in the imaginary stories we humans have invented” (283). No matter what your philosophical persuasion, our continuity with the natural world is an established scientific fact. Humanity is not exempt from the laws of nature. If humanity is not exempt from the laws of nature, then the human mastery of nature amounts to the human mastery of humanity.
He turns, at this point, to Gazzaniga’s research showing the confabulatory nature of human rationalization (via split-brain patients), and Daniel Kahneman’s account of ‘duration neglect’—another favourite of mine. He offers an expanded version of Kahneman’s distinction between the ‘experiencing self,’ that part of us that actually undergoes events, and the ‘narrating self,’ the part of us that communicates—derives meaning from—these experiences, essentially using the dichotomy as an emblem for the dual process models of cognition presently dominating cognitive psychological research. He writes:
“most people identify with their narrating self. When they say, ‘I,’ they mean the story in their head, not the stream of experiences they undergo. We identify with the inner system that takes the crazy chaos of life and spins out of it seemingly logical and consistent yarns. It doesn’t matter that the plot is filled with lies and lacunas, and that it is rewritten again and again, so that today’s story flatly contradicts yesterday’s; the important thing is that we always retain the feeling that we have a single unchanging identity from birth to death (and perhaps from even beyond the grave). This gives rise to the questionable liberal belief that I am an individual, and that I possess a consistent and clear inner voice, which provides meaning for the entire universe.” 299
Humanism, Harari argues, turns on our capacity for self-deception, the ability to commit to our shared fictions unto madness, if need be. He writes:
“Medieval crusaders believed that God and heaven provided their lives with meaning. Modern liberals believe that individual free choices provide life with meaning. They are all equally delusional.” 305
Social self-deception is our birthright, the ability to believe what we need to believe to secure our interests. This is why the science, though shaking humanistic theory to the core, has done so little to interfere with the practices rationalized by that theory. As history shows, we are quite capable of shovelling millions into the abattoir of social fantasy. This delivers Harari to yet another big theme explored both here and in Neuropath: the problems raised by the technological concretization of these scientific findings. As Harari puts it:
“However, once heretical scientific insights are translated into everyday technology, routine activities and economic structures, it will become increasingly difficult to sustain this double-game, and we—or our heirs—will probably require a brand new package of religious beliefs and political institutions. At the beginning of the third millennium, liberalism [the dominant variant of humanism] is threatened not by the philosophical idea that there are no free individuals but rather by concrete technologies. We are about to face a flood of extremely useful devices, tools and structures that make no allowance for the free will of individual humans. Can democracy, the free market and human rights survive this flood?” 305-6
The first problem, as Harari sees it, is one of diminishing returns. Humanism didn’t become the dominant world ideology because it was true, it overran the collective imagination of humanity because it enabled. Humanistic values, Harari explains, afforded our recent ancestors a wide variety of social utilities, efficiencies turning on the technologies of the day. Those technologies, it turns out, require human intelligence and the consciousness that comes with it. To depart from Harari, they are what David Krakauer calls ‘complementary technologies,’ tools that extend human capacity, as opposed to ‘competitive technologies,’ which render human capacities redundant.
Making humans redundant, of course, means making experience redundant, something which portends the systematic devaluation of human experience, or the collapse of humanism. Harari calls this process the ‘Great Decoupling’:
“Over the last decades there has been an immense advance in computer intelligence, but there has been exactly zero advance in computer consciousness. As far as we know, computers in 2016 are no more conscious than their prototypes in the 1950s. However, we are on the brink of a momentous revolution. Humans are in danger of losing their value, because intelligence is decoupling from consciousness.” 311
He’s quick to acknowledge all the problems yet confronting AI researchers, insisting that the trend unambiguously points toward ever-expanding capacities. As he writes, “these technical problems—however difficult—need only be solved once” (317). The ratchet never stops clicking.
He’s also quick to block the assumption that humans are somehow exceptional: “The idea that humans will always have a unique ability beyond the reach of non-conscious algorithms is just wishful thinking” (319). He provides the (I think) terrifying example of David Cope, the University of California at Santa Cruz musicologist who has developed algorithms whose compositions strike listeners as more authentically human than compositions by humans such as J.S. Bach.
The second problem is the challenge of what (to once again depart from Harari) Neil Lawrence calls ‘System Zero,’ the question of what happens when our machines begin to know us better than we know ourselves. As Harari notes, this is already the case: “The shifting of authority from humans to algorithms is happening all around us, not as a result of some momentous governmental decision, but due to a flood of mundane choices” (345). Facebook can now guess your preferences better than your friends, your family, your spouse—and in some instances better than you yourself! He warns the day is coming when political candidates can receive real-time feedback via social media, when people can hear everything said about them always and everywhere. Projecting this trend leads him to envision something very close to Integration, where we become so embalmed in our information environments that “[d]isconnection will mean death” (344).
He writes:
“The individual will not be crushed by Big Brother; it will disintegrate from within. Today corporations and governments pay homage to my individuality and promise to provide medicine, education and entertainment customized to my unique needs and wishes. But in order to do so, corporations and governments first need to break me up into biochemical subsystems, monitor these subsystems with ubiquitous sensors and decipher their workings with powerful algorithms. In the process, the individual will transpire to be nothing but a religious fantasy.” 345
This is my own suspicion, and I think the process of subpersonalization—the neuroscientifically informed decomposition of consumers into economically relevant behaviours—is well underway. But I think it’s important to realize that as data accumulates, and researchers and their AIs find more and more ways to instrumentalize those data sets, what we’re really talking about are proliferating heuristic hacks (that happen to turn on neuroscientific knowledge). They need decipher us only so far as we comply. Also, the potential noise generated by a plethora of competing subpersonal communications seems to constitute an important structural wrinkle. It could be that the point most targeted by subpersonal hacking will at least preserve the old borders of the ‘self,’ fantasy that it was. Post-intentional ‘freedom’ could come to reside in the noise generated by commercial competition.
The third problem he sees for humanism lies in the almost certainly unequal distribution of the dividends of technology, a trope so well worn in narrative that we scarce need consider it here. It follows that liberal humanism, as an ideology committed to the equal value of all individuals, has scant hope of squaring the interests of the redundant masses against those of a technologically enhanced superhuman elite.
… this isn’t any mere cultural upheaval or social revolution, this is an unprecedented transformation in the history of life on this planet, the point when the evolutionary platform of behaviour, morphology, becomes the product of behaviour.
Under pretty much any plausible scenario you can imagine, the shared fiction of popular humanism is doomed. But as Harari has already argued, shared fictions are the necessary condition of social coordination. If humanism collapses, some kind of shared fiction has to take its place. And alas, this is where my shared journey with Harari ends. From this point forward, I think his analysis is largely an artifact of his own, incipient humanism.
Harari uses the metaphor of ‘vacuum,’ implying that humans cannot but generate some kind of collective narrative, some way of making their lives not simply meaningful to themselves, but more importantly, meaningful to one another. It is the mass resemblance of our narrative selves, remember, that makes our mass cooperation possible. [This is what misleads him, the assumption that ‘mass cooperation’ need be human at all by this point.] So he goes on to consider what new fiction might arise to fill the void left by humanism. The first alternative is ‘technohumanism’ (transhumanism, basically), which is bent on emancipating humanity from the authority of nature much as humanism was bent on emancipating humanity from the authority of tradition. Where humanists are free to think anything in their quest to actualize their desires, technohumanists are free to be anything in their quest to actualize their desires.
The problem is that the freedom to be anything amounts to the freedom to reengineer desire. So where objective meaning, following one’s god (socialization), gave way to subjective meaning, following one’s heart (socialization), it remains entirely unclear what the technohumanist hopes to follow or to actualize. As soon as we gain power over our cognitive being the question becomes, ‘Follow which heart?’
Or as Harari puts it,
“Techno-humanism faces an impossible dilemma here. It considers human will the most important thing in the universe, hence it pushes humankind to develop technologies that can control and redesign our will. After all, it’s tempting to gain control over the most important thing in the world. Yet once we have such control, techno-humanism will not know what to do with it, because the sacred human will would become just another designer product.” 366
Which is to say, something arbitrary. Where humanism aims ‘to loosen the grip of the past,’ transhumanism aims to loosen the grip of biology. We really see the limits of Harari’s interpretative approach here, I think, as well as why he falls short of a definitive account of the Semantic Apocalypse. The reason that ‘following your heart’ can substitute for ‘following the god’ is that they amount to the very same claim, ‘trust your socialization,’ which is to say, your pre-existing dispositions to behave in certain ways in certain contexts. The problem posed by the kind of enhancement extolled by transhumanists isn’t that shared fictions must be ‘sacred’ to be binding, but that something neglected must be shared. Synchronization requires trust, the ability to simultaneously neglect others (and thus dedicate behaviour to collective problem solving) and yet predict their behaviour nonetheless. Absent this shared background, trust is impossible, and therefore synchronization is impossible. Cohesive, collective action, in other words, turns on a vast amount of evolutionary and educational stage-setting, common cognitive systems stamped with common forms of training, all of it ancestrally impervious to direct manipulation. Insofar as transhumanism promises to place the material basis of individual desire within the compass of individual desire, it promises to throw our shared background to the winds of whimsy. Transhumanism is predicated on the ever-deepening distortion of our ancestral ecologies of meaning.
Harari reads transhumanism as a reductio of humanism, the point where the religion of individual empowerment unravels the very agency it purports to empower. Since he remains, at least residually, a humanist, he places ideology—what he calls the ‘intersubjective’ level of reality—at the foundation of his analysis. It is the mover and shaker here, what Harari believes will stamp objective reality and subjective reality both in its own image.
And the fact of the matter is, he really has no choice, given he has no other way of generalizing over the processes underwriting the growing Whirlwind that has us in its grasp. So when he turns to digitalism (or what he calls ‘Dataism’), it appears to him to be the last option standing:
“What might replace desires and experiences as the source of all meaning and authority? As of 2016, only one candidate is sitting in history’s reception room waiting for the job interview. This candidate is information.” 366
Meaning has to be found somewhere. Why? Because synchronization requires trust requires shared commitments to shared fictions, stories expressing those values we hold in common. As we have seen, science cannot determine ends, only means to those ends. Something has to fix our collective behaviour, and if science cannot, we will perforce turn to some kind of religion…
But what if we were to automate collective behaviour? There’s a second candidate that Harari overlooks, one which I think is far, far more obvious than digitalism (which remains, for all its notoriety, an intellectual position—and a confused one at that, insofar as it has no workable theory of meaning/cognition). What will replace humanism? Atavism… Fantasy. For all the care Harari places in his analyses, he overlooks how investing AI with ever increasing social decision-making power simultaneously divests humans of that power, thus progressively relieving us of the need for shared values. The more we trust to AI, the less trust we require of one another. We need only have faith in the efficacy of our technical (and very objective) intermediaries; the system synchronizes us automatically in ways we need not bother knowing. Ideology ceases to be a condition of collective action. We need not have any stories regarding our automated social ecologies whatsoever, so long as we mind the diminishing explicit constraints the system requires of us.
Outside our dwindling observances, we are free to pursue whatever story we want. Screw our neighbours. And what stories will those be? Well, the kinds of stories we evolved to tell, which is to say, the kinds of stories our ancestors told to each other. Fantastic stories… such as those told by George R. R. Martin, Donald Trump, myself, or the Islamic State. Radical changes in hardware require radical changes in software, unless one has some kind of emulator in place. You have to be sensible to social change to ideologically adapt to it. “Islamic fundamentalists may repeat the mantra that ‘Islam is the answer,’” Harari writes, “but religions that lose touch with the technological realities of the day lose their ability even to understand the questions being asked” (269). But why should incomprehension or any kind of irrationality disqualify the appeal of Islam, if the basis of the appeal primarily lies in some optimization of our intentional cognitive capacities?
Humans are shallow information consumers by dint of evolution, and deep information consumers by dint of modern necessity. As that necessity recedes, it stands to reason our patterns of consumption will recede with it, that we will turn away from the malaise of perpetual crash space and find solace in ever more sophisticated simulations of worlds designed to appease our ancestral inclinations. As Harari himself notes, “Sapiens evolved in the African savannah tens of thousands of years ago, and their algorithms are just not built to handle twenty-first century data flows” (388). And here we come to the key to understanding the profundity, and perhaps even the inevitability of the Semantic Apocalypse: intentional cognition turns on cues which turn on ecological invariants that technology is even now rendering plastic. The issue here, in other words, isn’t so much a matter of ideological obsolescence as cognitive habitat destruction, the total rewiring of the neglected background upon which intentional cognition depends.
The thing people considering the future impact of technology need to pause and consider is that this isn’t any mere cultural upheaval or social revolution, this is an unprecedented transformation in the history of life on this planet, the point when the evolutionary platform of behaviour, morphology, becomes the product of behaviour. Suddenly a system that leveraged cognitive capacity via natural selection will be leveraging that capacity via neural selection—behaviourally. A change so fundamental pretty clearly spells the end of all ancestral ecologies, including the cognitive. Humanism is ‘disintegrating from within’ because intentional cognition itself is beginning to founder. The tsunami of information thundering above the shores of humanism is all deep information, information regarding what we evolved to ignore—and therefore trust. Small wonder, then, that it scuttles intentional problem-solving, generates discursive crash spaces that only philosophers once tripped into.
The more the mechanisms behind learning impediments are laid bare, the less the teacher can attribute performance to character, the more they are forced to adopt a clinical attitude. What happens when every impediment to learning is laid bare? Unprecedented causal information is flooding our institutions, removing more and more behaviour from the domain of character. Why? Because character judgments always presume individuals could have done otherwise, and presuming individuals could have done otherwise presumes that we neglect the actual sources of behaviour. Harari brushes this thought on a handful of occasions, writing, most notably:
“In the eighteenth century Homo sapiens was like a mysterious black box, whose inner workings were beyond our grasp. Hence when scholars asked why a man drew a knife and stabbed another to death, an acceptable answer said: ‘Because he chose to…’” 282
But he fails to see the systematic nature of the neglect involved, and therefore the explanatory power it affords. Our ignorance of ourselves, in other words, determines not simply the applicability, but the solvency of intentional cognition as well. Intentional cognition allowed our ancestors to navigate opaque or ‘black box’ social ecologies. The role causal information plays in triggering intuitions of exemption is tuned to the efficacy of this system overall. By and large our ancestors exempted those individuals in those circumstances that best served their tribe as a whole. However haphazardly, moral intuitions involving causality served some kind of ancestral optimization. So when actionable causal information regarding our behaviour becomes available, we have no choice but to exempt those behaviours, no matter what kind of large scale distortions result. Why? Because it is the only moral thing to do.
Welcome to crash space. We know this is crash space as opposed to, say, scientifically informed enlightenment (the way it generally feels) simply by asking what happens when actionable causal information regarding our every behaviour becomes available. Will moral judgment become entirely inapplicable? For me, the free will debate has always been a paradigmatic philosophical crash space, a place where some capacity always seems to apply, yet consistently fails to deliver solutions because it does not. We evolved to communicate behaviour absent information regarding the biological sources of behaviour: is it any wonder that our cause-neglecting workarounds cannot square with the causes they work around? The growing institutional challenges arising out of the medicalization of character turn on the same cognitive short-circuit. How can someone who has no choice be held responsible?
Even as we drain the ignorance intentional cognition requires from our cognitive ecologies, we are flooding them with AI, what promises to be a deluge of algorithms trained to cue intentional cognition, impersonate persons, in effect. The evidence is unequivocal: our intentional cognitive capacities are easily cued out of school—in a sense, this is the cornerstone of their power, the ability to assume so much on the basis of so little information. But in ecologies designed to exploit intentional intuitions, this power and versatility becomes a tremendous liability. Even now litigators and lawmakers find themselves beset with the question of how intentional cognition should solve for environments flooded with artifacts designed to cue human intentional cognition to better extract various commercial utilities. The problems of the philosophers dwell in ivory towers no more.
First we cloud the water, then we lay the bait—we are doing this to ourselves, after all. We are taking our first stumbling steps into what is becoming a global social crash space. Intentional cognition is heuristic cognition. Since heuristic cognition turns on shallow information cues, we have good reason to assume that our basic means of understanding ourselves and our projects will be incompatible with deep information accounts. The more we learn about cognition, the more apparent this becomes, the more our intentional modes of problem-solving will break down. I’m not sure there’s anything much to be done at this point save getting the word out, empowering some critical mass of people with a notion of what’s going on around them. This is what Harari does to a remarkable extent with Homo Deus, something for which we may all have cause to thank him.
Science is steadily revealing the very sources intentional cognition evolved to neglect. Technology is exploiting these revelations, busily engineering emulators to pander to our desires, allowing us to shelter more and more skin from the risk and toil of natural and social reality. Designer experience is designer meaning. Thus the likely irony: the end of meaning will appear to be its greatest blooming, the consumer curled in the womb of institutional matrons, dreaming endless fantasies, living lives of spellbound delight, exploring worlds designed to indulge ancestral inclinations.
To make us weep and laugh for meaning, never knowing whether we are together or alone.
Goethe once said: “The Mothers, the Mothers…” … sounds like you’ve given it a nice update: “the consumer curled in the womb of institutional matrons, dreaming endless fantasies, living lives of spellbound delight, exploring worlds designed to indulge ancestral inclinations”. Sounds like J.G. Ballard’s meshing of technology and dream with the Real. We’ll become so artificial it won’t matter that we are all lost in the womb of real-world VR … the world will have become an MMO Game.
Besides diagnosing the inevitable semantic apocalypse, what do we do? Am I the only person who reads Bakker’s blog and despairs? I feel like I should just give up on my English literature PhD. I had no illusions that it was worth anything, but this blog makes me feel even worse about it.
No need to worry about your Lit PhD just because of Bakker. What he diagnoses as “semantic apocalypse” is just a vague dislike of anything that doesn’t fit into his own niche, mixed with empty arrogance that attempts to appeal to science but fails. His attempt to merge his “hip” nihilism with neuroscience is nothing but cute poetry. Sit down, relax, and see for yourself whether Bakker’s arguments make sense and are relevant to anything.
Well, I do see a number of pejorative descriptions of my positions, but nothing in the way of actual argumentative warrant. Do you have an argument?
Are you kidding me? This is without a doubt the most exciting time to be any kind of thinker in the history of the human race. So it’s scary. Better the beast be seen, be battled. And think of the battle! What we need are new ways to think and communicate–only the old ways need to despair. We can grin.
All the English Lit instructors I know are adjuncts teaching eight classes on eight different campuses for eight dollars a class. Financially at least, you might be better off with a CCNA, or even a CDL. Regarding despair, I feel more a sense of resignation. I remember reading somewhere that more than 90% of all the species that have ever existed have eventually become extinct. Why should ours be any different?
I actually like the idea that machine intelligence will eventually eliminate the need for human intersubjectivity. If human beings needed to cooperate in order to cope with and overcome nature, it stands to reason that once nature is defeated for good human beings will no longer need to cooperate. In effect, each human being who survives the Semantic Apocalypse will become its own god presiding over its own universe. This makes a certain amount of sense as well. As has been argued elsewhere (for example in Rants Within the Undead God) all human striving is ultimately striving to achieve different kinds of godhood.
As I reread this prior to hitting the Post Comment button it occurs to me that I wrote one paragraph saying humanity is becoming extinct and another saying humanity is achieving godhood. Are they the same thing?
I think it’s just the opposite: now that we know meaning was and always will be an illusion, we join with Nietzsche, who had already exposed that in nihilism, and finish it, complete this nihilism, and realize it is not a reason to despair, but a reason to see things without us, without our illusions, without our anthropomorphisms, without our so-called belief in ‘truth’ as anything more than a device, a heuristic device to test our own ignorance, our medial neglect. It’s neither positive nor negative in that sense, but truly pessimistic in the sense that pessimism is realism without human illusion (if there ever can be such a thing?).
Of course I could be wildly wrong. If narratives are about to become the most powerful force in the universe, the ability to create stories that create intersubjectivity might be, financially, the next big thing. There might be an enormous business opportunity in figuring out how great stories have their effects and replicating those effects for commercial purposes. Scott’s post “A Bestiary of Future Literatures” is both funny and charming in that regard.
Yeah, I think I feel fairly similar, EM. At a guess Scott is probably trying to avoid groupish behaviour, but the articles tend to make one despair and be off and alone – which doesn’t really work for a social animal (and may contribute to some readers’ lack of comprehension – the article lacks the ‘togetherness’ bit a social animal expects (unless really hardcore academic, I guess, who nest with words rather than peoples)). But hey, you got a battle chant from ‘im, which I’ve never seen before, so go forth with that shared tub to thump! 🙂
Either that, or people have to give up on intentional cognition–and I can’t see that happening. So where you could see modernity as the point where our inscrutability allowed us to corner the meaning market, to the point of ontologically identifying with meaning, no less, you can see our (quite distinct) age, the one where natural and artificial cognitive science begins glassing that black box, and where intentional cognition has nowhere to go–nowhere real. On an evolutionary perspective you would chalk it up to the continuous optimization of fitness indicators, chasing them from the real into the virtual. Intentional cognition is deeply bound up in this pursuit. This is why intentional cognition has to go somewhere (and why Harari presumes digitalism will become the next great myth). Given that neglect is its condition, feigned neglect is the only viable solution, or as Coleridge once called it, the willing suspension of disbelief.
Of course the digitalist would argue that you’re reading these very words now… (!)
I think that’s the exciting thing. If intentional consciousness is an evolutionary end game, then our task – not for ourselves – is to invent in AGI not a way to solve the hard problem but to dissolve it, to invent the next stage of thought and thinking without consciousness: the riddle of the circle squared. Will it be possible to construct Artificial General Intelligence without consciousness that has access to the environment with all that entails – all of our advanced senses as prosthetics; or, will we develop machinic being without our use of ‘sense’? Is not the body / sense empirical problem what we’re talking about after all, the truth of what Deleuze was already onto in The Logic of Sense? I laugh when I read all these Neorationalists and Dialectical Materialists who argue against Deleuze’s basic premises in a non-dialectical sense-based materialism of the body and embodiment. For it is not consciousness, per se, that is at issue, but the body and sense, which is after all what medial neglect is: the problem not of the limitation of the brain, but of the body (prosthetic appendage) evolution stuck it with… the brain had to use the kludgy body it was given to operate on the environment, so that it developed the senses: sight, touch, sound, smell, etc. If the brain or AGI (its progeny) had access to other more expansive senses (body/prosthetic) would this not open the door onto other modes of thought and being as well?
I find your articles too verbose. I’ve read your criticisms of most continental philosophy, but it seems you too are bogged down by its polemical and rhetorical indulgences. Sharper scientific, critical analyses are needed to support your “post-intentional” claims. If you are going to appeal to science, please make your writings comprehensible to a reader from a scientific background. I’m afraid that even your pet term, “post-intentionality”, belongs to a trope of academic humanistic vocabularies that needs to be thrown out.
Every once in a while I get someone saying the same about all my over-technical, scientistic rhetoric!
You’re right of course: the continental odour does me no favours in scientific circles–I remember quite vividly the days when I let variances in terminology and tone draw my conclusions for me. The problem I face is that I’m genuinely aiming between audiences, genuinely trying to think between them as well. Lord knows cognition and consciousness need some kind of radical rethink.
I keep making attempts either way. The article I have coming up in The Journal of Consciousness Studies will hopefully prove more accessible.
What’s the title of the article?
“On Alien Philosophy.”
I think that’s a false claim. What you’re really arguing is that it be Analytic pragmatism, or more American philosophical reductionism to the matheme, or its derivative in jargon-acceptable academic scientism. To force thought into a mold of conformism is once again not to clarify but rather to bind it to the very thing Scott is pointing out: medial neglect. Post-Analytical thinkers are moving beyond the types of descriptive strait-jacket you’re suggesting, so that it’s erroneous to enforce a mode of description on Scott that is already both cliché and surpassed even in the philosophy of the sciences. Humanism is no longer at issue here, having already been eroded for sixty years to the point that the non-human turn has developed alternatives. I suggest that it is you who is still bound within an outmoded form of discourse rather than Scott. And to universalize ‘the sciences’ and scientific communication as if they were some well-known, accepted ideology and description of reality is erroneous beyond acceptability.
I think you’re just doing the traditional human act of rejecting the whole for not understanding a part. Plenty of comprehensible text there, but you treat it all as incomprehensible – don’t worry, I think Scott has done the same as yourself, in the past 🙂
That only allows you to admit defeat, because of a lack of depth of reading in the sciences, philosophy, and a number of other cultural frameworks that Scott or I use. And to ask of us to reduce such complexities to the inane or simple-minded tomes of popular thought would be to assume the world is, too, just a popular front for the mindless incomprehensibility of its children. Which it is.
Sorry S.C, I meant to (and did) reply to Yapolski, but as we trust to wordpress to decide our constraints, so wordpress confuses our discussion and made it look like a reply to you. 🙂
Ah… yep, sorry bout that… 😉 my big blipo for the day! 😦
Nay – wordpress’s blipo!
“…but a reason to see things without us, without our illusions, without our anthropomorphisms, without our so called belief in ‘truth’ as anything more than a device,…”
How is this different from despair?
Because despair – a term from Old French desperer, “be dismayed, lose hope, despair” – implies one had “hope” to begin with. I never did. Pessimism is not a turn toward despair as many deride it, but rather just the harsh truth of both the human condition, and of the world stripped of its human illusions: stark realism. One can trace this in Schopenhauer, Hartmann, Mainländer, Julius Bahnsen, Zapffe…. and so many others. Why be dismayed that the human condition is not what we thought? Why lose hope when hope was false? Why despair of the truth because it shows you the world stripped of our illusions? To me such is the problem not of pessimism, but of those ardent believers in optimism who have taken hope as the sign of health, joy, and all those erroneous and fictional accounts of reality from myth, religion, and – dare I say it – philosophy.
Appended:
Far too long we’ve displaced our modes of being toward past or future without ever living out our lives in the moment of this world’s affairs. We’ve lived retroactively through our condition of being: philosophy, sciences, poetry-literature, mathematics, politics, and love. Through these we apprehend the world as in a prism, a refraction of both mirror and representation. Already we are bound to ignorance within this duplicitous system, given only the base interference patterns the brain needs to fulfill: hunger and sex. Nothing more. To see more in it than this is to construct out of ‘medial neglect’ (ignorance) the illusions of meaning that our species has built up over thousands of years into vast and complex belief systems, as if they gave us insight into the world, life, and the universe. The sciences, engineering, and technology and technics – which from their beginnings in Greek thought have developed specialized tools, prosthetics, etc., by which to investigate the environment further, manipulate it, and use it for the brain’s own goals of survival and replication – have brought us to this point we are in on this planet. Others before us, other civilizations, using other technics and technologies, also overreached themselves (i.e., in Greek, hamartia: a fatal flaw leading to the downfall of a tragic hero) and brought about the demise of their civilizations, whether through natural depletion, war, disease, famine, or any number of other factors. We, too, are no different. We are moving in that direction, and are even now in a well-documented Sixth Extinction cycle, changing weather patterns in ways that may eventually bring this civilization to its knees as crops fail, disease runs riot, war for remaining resources kicks in, and political, social, religious, and ideological worlds collide and shape our real world.
To see this without illusions is not to despair as some might thing, but to wise up and challenge ourselves to complete this elimination of illusions, to strip the lies of political, social, and religious myths of their traction; to realign our knowledge of the earth and her ways, to bring to bare the sciences not to manipulate but to make healthy our planetary systems, to create a society and civilization based not on fiction but on the truth of things as they are, not as they might or ought to be as in both Idealism and Neorationalist normativity. We have to come together but not in the ways of us vs. them rhetoric and policies. There is only one foundation upon which politics and philosophy and the sciences can be built: the earth itself, but this is not to follow those mythicizers of Gaia as some Great Mother, etc., but to accept the naked unveiling of a world and its processes without us. And, by “without us,” I do not mean a literal death of the human species, but of a world stripped of our anthropomorphisms and our exceptionalism… a flattening to the world to its bare and minimal truth as the only platform of organic life we might survive in and replicate within: fulfilling the old hunger and sex, survival and replication of natural evolution. While at the same time developing other modes of being: AGI and robotic progeny that may adapt to environments hostile to humans.
I remember reading about multiple suicides among people in the financial sector after the stock market crash of 1929. If U. S. living standards are going to have to fall to what we now consider to be third world standards (for example only eating meat one meal a day instead of four, using public transportation instead of driving alone in a car every day, sweating instead of running the air conditioning all day etc.) the psychological dislocation will be severe, and some people will despair, because some people consider the current American middle class standard of living their birthright. I agree with you that a realistic appreciation of the earth and what it can sustainably provide is not despair, but it will seem that way to a lot of us. During the cold war there was serious discussion in the higher levels of the U. S. military and civilian leadership about how the United States could “prevail” in a nuclear “exchange” with the Soviet Union. There are always people who would rather burn it all down than accept less than what they believe is their due.
I understand, and can sympathize; yet it still boils down to people “believing” they are the exception, that the world should be different for them because – as you say – they “deserve” it. No. No one deserves anything – we’re all on the same planet, we’re all born under the same sun; reduced to naturalism, we’re all the same species. The luck of birth does not give you some special “status,” as these you speak of seem to think. It’s this truth I speak to. If they commit suicide or despair, it still comes down to their thinking they are better, that they deserve, that they hope… and, as you say, that they think life “owes” them something. Life is valueless, impersonal, and indifferent to human thought or will; impervious to our demands or our prayers. It is blank… we are the ones with the illusions, not the world… so as sad as you make it sound, it is still illusion that makes people suffer their deluded sense of exceptionalism and status. Because the ones you speak of are optimists…
@Murden I’ll try another tack: The earth is a killing machine, a natural predatory system or organic and anorganic processes. You’re nothing more that as Scott might say a three-pound wonder connected to a prosthetic set of appendages and nervous system that allows it to navigate this predatory sea for food and sex, survival and replication which drives it. Nothing more. After that comes the appended systems of belief we’ve built up and constructed out of ‘medial neglect’ and ignorance and crowned ourselves with the glorious notion of God’s little children, the Exception to this state of affairs of murderous natural process. We’ve deluded ourselves into beliefs and positive optimistic systems of complex relations from the kernel of denying this truth. Every metaphysical ploy is a denial system against the truth that we are valueless, that we’re mere killing and predatory systems living in the midst of a killing planet that could care less about our thoughts or our exceptionalism. Despair is just another reaction to this state of affairs, another system of denial and delusion that comes from belief that things should or ought to be different, that life owes us something. It doesn’t. It doesn’t even know we exist. It’s utterly blind and mute in regards to the status of our beliefs, our prayers, our existence.
Once you actually affirm this rather than denying it something else happens: you’re utterly alone and free for the first time in you life. Being alone isn’t really that bad. You’re going to die alone, even if you have a thousand people standing round you. Solitude, not in the sense of solipsism, but in the sense of affirmation of existence without us, is the beginning of enlightenment not through transcendence, but through the immanent relation we have to our ignorance; our ‘medial neglect’.
@S. C. Hickman
“…we’re mere killing and predatory systems living in the midst of a killing planet that could care less about our thoughts or our exceptionalism.”
I agree with you that the universe doesn’t give, and in fact is incapable of giving, a fuck about humanity. If you’ve been able to stomach the U. S. presidential campaign, you know that argument is not an election winner. It seems that humanity needs a sense of humility about its place in the world simply to keep from making the world uninhabitable for itself. Unfortunately humility is a tough sell…
And wealth disparity cuts quite the opposite way – you’d have to remove extreme wealth disparity, since a sense of humility in the populace would get you a populace that no longer tolerates massive wealth disparities. ‘You too can be king – if you keep giving your labour!’ is why the population tolerates kings. Actually Bill Gates is probably richer than ye olde kings of yore – the less humility, the more acceptance of this.
It’s not that it doesn’t give a fuck – as you put it, it’s that it doesn’t even know it exists: the universe is a gigantic cannibalistic feeding machine, crunching every bit of energy out of the system until there is no more and everything is ground to dust and dispersed into the utter darkness. No metaphysical quandaries, no gods sitting out there beyond the void, nothing but this accidental time vector running on till it doesn’t anymore.
Politics. As a pessimist I no longer care; both parties are run by fat cats who lie their asses off, promising the moon when they know very well the only things they’ll be able to do are veto and start wars; or, as with Obama, create his Caesars to skirt the law and push the world by other means. Oligarchs, plutocrats, and the mass stupidity of people who still think government counts… and now, with Putin pushing war in Syria and the Middle East, our country weakened by two idiots running for leadership. And hackers plugging away bit by bit at the internet… I think we can see where we’re heading: more violence and war in our immediate future.
So yea, ‘medial neglect’ isn’t going to be on the agenda of many in the coming years…
I’ve told you before, Daniel: roach isn’t an insult. We’re the ones still standing after the mammals build their nukes, we’re the ones with the stripped-down OS’s so damned simple they work under almost any circumstances. We’re the goddamned Kalashnikovs of thinking meat.
-Peter Watts, Echopraxia
Even if you think being pandered to by Disneyland dialed up to 11 forever is a pretty sweet deal, new tech rigging old meat is going to come with all kinds of problems just in terms of doing what it’s supposed to do. Right now, an east coast DNS gets flooded by malicious traffic and I can’t get my work done for half a day. My memory externalizing communicator catches fire and suddenly I can’t talk to most of the people I know in a timely manner. What does it even mean if ten years from now the technological apparatus that gives meaning to life itself pulls a “move fast and break things” update, or decides to shit the bed for a weekend?
Lovely quote. I still haven’t had a chance to check out Echopraxia yet.
Systematic collapse is what makes modularity key-key-key. I sometimes think that this is really what’s at issue in discussions of ‘freedom’ and ‘autonomy.’
Thus the modern obsession with the accumulation of the power to accumulate.
Modern obsession? And what’s wrong with accumulation of the power to accumulate?
I’d get if the argument was that government sets conditions that overclock our natural, survivalist based accumulation obsession (the government starting with things like ‘hut taxes’ that force children of tribes to go work for the government, then moving up), driving a ‘work hard/play(pray) hard’ mentality that demands higher and higher ‘play’ to compensate the work. And so a market for fantasy novels…
But I’m not sure a lack of reverence for survival based accumulation obsession is going to work out – in fact I’d say it’s part of the problem that it isn’t revered. That something grander is being demanded after all that work, and it’s an obsession with accumulating the power to accumulate meaning.
Anyway, I’ll pitch a promotion of reverence for survival accumulation, particularly outside the circuit of money.
In Adorno’s analysis it’s key to understanding modernity: once the accumulation of capacity to accumulate becomes your primary civilizational imperative your civilization has the facilitation of individual ends as its only collective end. Individuals have no collective means of fixing their ends apart from their shared biology (‘separation of church and state’), their common animal imperatives, which is why ‘following your heart’ always leads to consumerism. So you have the most efficiently animalistic system you can possibly imagine running nowhere for no point whatsoever.
I think I understand the model you describe, as a hypothetical model. But why is it only at that point that it’s ‘running nowhere for no point whatsoever’, rather than always having been that way*? Just sans ‘efficiency’? Seems like a charge against capitalism/civilisation that, for once, capitalism isn’t to blame for.
I think I get the model you describe, but are we really in a civilisation that facilitates individual ends? Do you facilitate the individual ends of the cells that comprise you? Surely not?
I’d think of ‘accumulation of capacity to accumulate’ as a survivalism – something other meanings build themselves out of or on, or quickly slough off to oblivion. And in terms of the individual I’d say that accumulation is not happening – despite all the consumerism! I think you’ve pitched the idea yourself of thinking where your shoes come from, or the many other resources come from. Consider that, like the cells of your body are not accumulating for their own survival as individuals but instead accumulating for something else. The shoes, the food – even the TV, the movie DVD’s, are not for the individual. There isn’t an ‘accumulation of accumulation’ effort going on – not for the individual, anyway. Indeed, the man hours put into purchasing that DVD stack aught to clearly show up as an impediment in terms of survivalism. How are all those DVD’s an accumulation of the capacity to accumulate? When really they just burn hours away – how could that ever be taken as accumulation?
But I feel like I’m telling granma how to suck eggs when I used ‘Consider’ above, so it feels likely I’m just missing something in your model – what is it?
* Ignoring grander mysteries for the moment, like why there is any matter at all, ever.
I’ll flatter myself and take the radio silence to mean my post sounded at least vaguely plausible >:)
castles made of sand…
https://www.schneier.com/blog/archives/2016/09/someone_is_lear.html
In evolutionary theory, modularity is key to robustness, since it allows components to carry on in the absence of superordinate stability. The great danger of integration is system-wide reboots. When the internet is down, how many businesses possess the capacity to continue discharging their function?
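To make the modularity point concrete, here is a toy sketch (mine, purely illustrative – the twenty “processes,” the failure rate, and the shared “hub” are all invented): twenty processes that depend on a single shared service versus twenty that run independently. Kill the shared service and the integrated system reboots wholesale; the modular one barely notices.

```python
# Toy comparison (invented numbers): modular vs. hub-dependent processes.
import random

random.seed(1)
N = 20  # number of business processes

def surviving(depends_on_hub, hub_up):
    """Count processes still discharging their function after the hub fails."""
    up = 0
    for _ in range(N):
        local_ok = random.random() > 0.05  # each process rarely fails on its own
        up += local_ok and (hub_up or not depends_on_hub)
    return up

print("integrated, hub down:", surviving(True, False), "of", N)   # total reboot: 0
print("modular,    hub down:", surviving(False, False), "of", N)  # most carry on
```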
Reboots are bad, but so is contagion: more parts, more links, more vulnerabilities.
https://syntheticzero.net/2015/04/10/wendy-chun-on-media-thresholds-habits/
Scott says:
“The issue here, in other words, isn’t so much a matter of ideological obsolescence as cognitive habitat destruction, the total rewiring of the neglected background upon which intentional cognition depends.”
Isn’t that the key, that this has and is going on whether we will or no? I mean that this rewiring of the human has probably been going on for centuries, but we’re only just now apprehending the massive accumulation of effects, the subtle changes over those timeframes as our external narratives over the past two centuries in philosophy, politics, psychology, literature, etc. have failed us time and again… so that technics and technology as a force have accrued to the point that they used us as a piggy-back system on which to graft their own insurgency.
As in most philosophy it comes down to either a reversal or a replacement… humanism is not being reversed in this new century, but rather dismantled and replaced through a combination of decomposition and de-coupling of intelligence from the organic into the anorganic, along with its advance optimization. So that the human body, not humanism is being left behind, shorn like a snake, to rot among the organic tombs of its ancestral slime pits.
As you said, Scott: “The thing people considering the future impact of technology need to pause and consider is that this isn’t any mere cultural upheaval or social revolution, this is an unprecedented transformation in the history of life on this planet, the point when the evolutionary platform of behaviour, morphology, becomes the product of behaviour. Suddenly a system that leveraged cognitive capacity via natural selection will be leveraging that capacity via neural selection—behaviourally. A change so fundamental pretty clearly spells the end of all ancestral ecologies, including the cognitive.”
You’re still speaking of organic “neural selection,” which might not be the case, since it might be the AGI or anorganic neural selectors that take over from the organic human host. Intelligence might have been all along a parasite awaiting its next host.
Let’s face it, when Scott says: “The more we trust to AI, the less trust we require of one another. We need only have faith in the efficacy of our technical (and very objective) intermediaries; the system synchronizes us automatically in ways we need not bother knowing. Ideology ceases to become a condition of collective action. We need not have any stories regarding our automated social ecologies whatsoever, so long as we mind the diminishing explicit constraints the system requires of us.”
That in itself would be the tyranny of the machinic. It sounds almost eerily like Negarestani’s e-flux notion of Freedom = Absolute Slavery to Reason: this sense of conforming to an objective anorganic Superintelligence that will handle all the deep information while we, almost Eloi-like (H.G. Wells), bask in the ignorance of shallow informational fantasy in a playland controlled by a vast machinic platform of which we will for the most part remain neglectful and utterly unaware. While it subtly feeds us our heart’s desires, making us believe that our choices are our own individual free will. Is this not Thomas Ligotti’s nightmare universe of puppets?
“This sense of conforming to an objective anorganic Superintelligence that will handle all the deep information while we, almost Eloi-like (H.G. Wells), bask in the ignorance of shallow informational fantasy in a playland controlled by a vast machinic platform of which we will for the most part remain neglectful and utterly unaware.”
” One of the things I like about God as a philosophical posit is that God can be both outside and inside, so God bridges the gaps. God is eternal, so for God every instant is now. God is infinite, so for God everywhere is here. I could add examples but my point is that God solves all these philosophical problems…”
Insert smiley face here.
The language is different but the meaning is the same as in Revelation. It’s just that some people prefer their Apocalypse stories with dragons and some without.
I guess I can see why you and I would have profound differences. 🙂 God as Dog might be more apropos for me… a sort of joke among jokes.
And, the meaning would not be the same at all, since for many God is transcendence itself, the beyond, salvation, the escape plan, immortality, etc., while the AGI is nothing but the immanence not of the transcendent beyond, but of the very real daemonic arising of anorganic matter into the equation of living death, a death-in-Life that is Intelligence itself no longer vitalistic but rather pure and total, absolute cold intelligence at the core of the inhuman within us.
Certainly atheist gods will be different from religious gods. The ‘practical theology’ of artificial intelligence is going to look different than the ‘theoretical theology’ of religion. Machine gods will require different forms of belief and different forms of worship, but most humans need something to bow down to. The miracle of God is that He is both transcendent and immanent. The Father spoke the universe into being, the Son was incarnated in flesh and redeemed our sins, and the Holy Spirit dwells within each of our hearts.
The theology of artificial intelligence is still under construction, but books such as Posthuman Life and the one under review in this post, as well as this blog itself are part of that construction process.
The thing about gods who are really transcendent, Maxwell’s Daemon, outside the universe gods is that belief in such a god is functionally equivalent to atheism. There are no prayers to be made or stories to be told or hymns to be sung for such a god, so what’s the point?
As for machine Gods, this band is their J. S. Bach:
And one of my favorite stories about machine gods is “I Have No Mouth and I Must Scream” by Harlan Ellison.
The thing about gods who are really transcendent, Maxwell’s Daemon, outside the universe gods is that belief in such a god is functionally equivalent to atheism.
Ah, finally someone gets it.
The next step is understanding that certain types of claims (determinism/free will) also cannot work if the premise is belief in some scientific model that shares the demon’s position outside the world.
Though it’s Laplace’s demon, not Maxwell. I think Maxwell is metaphysical.
Maxwell’s demon represents the point of convergence between ontology and epistemology (the statistical thermodynamics of informational inscription)
Arguably both have been repudiated, and should be taken off the table as incoherent concepts.
Uhm, nope.
Maxwell’s demon contains a fallacy, since it’s a description of a demon who’s inside the system and violates its rules. You can’t do that without supernatural powers. So Maxwell’s demon works on the premise of magic.
Laplace’s demon is one that exists solely outside the system of reality. It’s not only plausible, but completely compatible with science. The only reason it’s not relevant for science is that, being outside reality, it falls outside the competence of the scientific domain.
Murden,
The thing about gods who are really transcendent, Maxwell’s Daemon, outside the universe gods is that belief in such a god is functionally equivalent to atheism. There are no prayers to be made or stories to be told or hymns to be sung for such a god, so what’s the point?
Why is that? By transcendent do you mean something more than how we are outside of every Minecraft game world that is generated?
Imagine Mario and Luigi praying to the coders at Nintendo who created Donkey Kong.
You mean Bowser, heathen! 😉
I guess it depends how different the outside is, whether the stories transfer. Though what actually runs Minecraft is nothing like a world, the Minecraft world doesn’t know that, and it runs kind of like our own world – thus the stories could translate through.
Perhaps you mean outside and utterly alien in physics?
bakker bait:
https://sms.cam.ac.uk/media/2347790
This is one of the weekly seminars, led by Tim Crane, of the New Directions in the Study of the Mind project, based in the Faculty of Philosophy of the University of Cambridge and supported by the John Templeton Foundation. It is attended by the project members and visitors, as well as undergraduate and graduate students at the University of Cambridge. This is from year two of the project, for which the theme for the seminar is intentionality.
Peter had a link to a position piece of his a way back, which I made the mistake of reading… This stuff just strikes me as so antediluvian any more. They all talk as if they’re solving something, but not a soul knows how to lace a skate.
thus the baiting aspect…
The one about “soft” eliminativism as a propaedeutic for teasing apart normative concerns from empirical concerns?
I’ve finally read the whole thing and also still sad the other discussion stopped at the meaningful moment.
But anyway, I still think there are a few logical fallacies, or at least aspects that don’t follow, even without applying my own frame of reference.
I’ll start from the bottom, even if this is the premise of most of what was discussed, not just its conclusion:
“what happens when actionable causal information regarding our every behaviour becomes available.”
Really? You believe this an actual scenario?
The fallacy, from my point of view, is that you trust a fully reductionist scenario. You even apply it to stuff like politics: “customers and voters never make free choices”.
And you believe this is somehow next door.
But come on. Don’t you see that the application you describe is just another practical heuristic that has been slightly adapted to fit the theory, and that it is not what you’d expect of an actual reductionist strategy?
What I mean is there’s a problem of scale, and of underestimation of the colossal complexity we’re dealing with. I agree that a reductionist approach is correct, but I don’t agree on the fact it’s *practical*.
This type of reductionism/determinism requires that you know the starting conditions. The causal information. But don’t you realize this defies every kind of plausible computational power?
How do you decide what’s relevant and what isn’t when problem-solving a certain thing, if not by arbitrarily slicing this complexity through heuristics? Or do you expect to compute IT ALL? How do you decide, when trying to predict something, which elements need to be included in the simulation and which are not relevant? Where do you cut the causal chain? How far upstream do you expect to go?
A genuinely radical reductionist approach doesn’t get to conveniently stop the simulation at the limits of computational feasibility.
Chaos and complexity theory do not deny determinism, but they show the system is still unpredictable, because we just cannot integrate ALL the starting conditions, and all the laws that regulate every aspect of something we want to track. We are nowhere close to controlling this. We couldn’t even reliably predict the path of an ant.
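For what it’s worth, the standard toy illustration of that claim (mine, not from the comment) is the logistic map: a fully deterministic one-line rule whose trajectories from two starting points that agree to twelve decimal places part ways completely within a few dozen steps.

```python
# A deterministic system without predictability: the logistic map x' = r*x*(1-x).
# Two starting conditions "identical" to 12 decimal places diverge completely.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-12  # initial gap: one part in a trillion
for step in range(60):
    a, b = logistic(a), logistic(b)
    if step % 10 == 9:
        print(f"step {step+1:2d}: a={a:.6f} b={b:.6f} gap={abs(a-b):.2e}")
# By roughly step 40 the gap is of order 1: determinism, yet no prediction
# without perfect knowledge of the starting conditions.
```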
This is one logical fallacy. A fully reductionist strategy, to work, needs to be complete. Otherwise it’s not reductionism, it’s statistics. And statistics is a high-level heuristic for solving high-level problems.
You say this even applies to politics, which is even more absurd. Politics is highly complex exactly because it happens at such a high level. It’s no longer even about modeling a brain, but a network, a whole community. So you have a fully reductionist model that perfectly determines not only a brain, but a whole community?
You’ll believe you have a great model, only to realize unexpected things continue to happen just because you didn’t foresee a certain element was going to be relevant.
Again, the problem isn’t that reductionism is wrong, the problem is we are underestimating the scale of complexity when we deal with this stuff. And you just cannot reliably use a reductionist strategy when you understand some 5% of a complex system.
That’s why from my point of view we’ll keep using high level heuristics that just grow to be adapted and closer to more grounded models. We’ll see these heuristics evolve to embrace more information.
BUT, and here is the crucial aspect, these aren’t opposite strategies – top-down heuristics versus bottom-up reductionism. It will instead still be top-down heuristics that progressively lower to absorb more data, while maintaining their “semantic” quality. There’s no inversion happening.
Aspect number 2 of what I see as logical fallacy: BBT is not a description of the future, it’s instead a description of the present and the past. BBT doesn’t describe how we are going to evolve, or that we’re about to lose consciousness, but that we NEVER had it. It describes a perpetual, unchanging condition.
So, if BBT is true, then the revolution either never started, or it always happened at the beginning of life. If I have a theory for gravity it’s not like gravity begins to work after I formulate that theory. So, BBT can only affect the application of the theory, not the condition itself.
That means that we’ve always evolved along the SAME path. There’s nothing distinctly new happening because we always lived under BBT’s rules.
There’s a quote: “Over the last decades there has been an immense advance in computer intelligence, but there has been exactly zero advance in computer consciousness.”
This is a Ship of Theseus kind of problem. We haven’t decided what consciousness IS, so how can we decide that there’s no advance in computer consciousness? It’s just a bad argument.
The problem of consciousness is that consciousness is complex. Like the Ship of Theseus, we won’t recognize any “part” of the ship unless we have the whole ship, until it’s completely indistinguishable. We haven’t made any progress because once again we are underestimating the scale of the complexity. We are not even close to mapping the thing.
So, in the case of BBT this revolution never had any beginning, it’s a process that started with humanity. It never stopped. You could say it’s accelerating in modern times, but I wouldn’t be sure it’s accelerating at a speed we cannot handle. It’s part of the system, and its speed likely is proportionate to the rules that have always regulated it.
There is no paradigm shift because the only paradigm shift is behavior consequent to the discovery of the theory. So it’s still “behavioral”. It’s still intentional.
The post-intentional description you make contains this logical fallacy: it’s a post-intentional reaction that happens within intentionality, OR it’s post-intentional behavior that always behaved the same way. Don’t mix these. Either you use one level of description or the other. Either you have free will, or you don’t. But if you don’t, then “knowing” you have no free will changes nothing, because your behavior has always been determined and you cannot change it in the future. If you describe a permanent state, then you cannot say knowing about it brings change or a revolution, because the permanent state is permanent and will never allow any change.
So, either it’s intentional change, or it’s post-intentional stasis. It’s either changing because you assume the first person and its evolution, or static because the rules of this world have never been modified and we’re still under the second law of thermodynamics. The system of reality is either a solid (already determined/from the outside), or a fluid (changing/living/from the inside).
A theory of consciousness cannot change consciousness. It cannot bring change. It cannot bring any kind of paradigm shift. But it can bring a change of behavior. It can dramatically change our perspective and change the world around us. But it’s not a new perspective, it’s the same one that is progressively evolving same as it did all along.
There’s nothing “new” to the process, unless you define new as a specific quality of something you want to semantically describe and specifically point at. It’s “meaning”. It means you decided to isolate a part of the system and point to it saying that is part has your attention and it’s particularly important and relevant for you. This is purely intentional.
But for a fully deterministic system no component of that system is more important than another. There’s no distinction of parts or functions, because everything works accordingly to the system as a whole, and where even a tiny new element transforms everything exactly because a tiny element means the starting conditions of the system aren’t true anymore. So the prediction becomes wrong.
Reductionism cannot work unless you have a complete model. A complete model isn’t computationally possible, even in science fiction hypothetical scenarios. So we’ll have to stick to simplified theories and heuristics INSPIRED by reductionism, but still far, far away from actual reductionism.
None of that is relevant to what he’s saying, which is the live possibility of using real time sensor feedback to update real time databases replete with multidimensional statistical correlations between behaviors, cues, and the deep structures (body and brain) exhibiting the behaviors.
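As a minimal sketch of what that could look like mechanically – the “cue” and “behavior” signals below are invented, and this is only a hedged illustration of correlation-harvesting, not Scott’s actual proposal – a correlation can be accumulated one sensor reading at a time, with no causal model of the person whatsoever:

```python
# Running correlation between an observed cue and a subsequent behavior,
# updated incrementally from a (simulated) sensor stream.
import math, random

random.seed(0)
n = sum_c = sum_b = sum_cc = sum_bb = sum_cb = 0.0

for _ in range(10_000):                        # pretend stream of readings
    cue = random.gauss(0, 1)
    behavior = 0.7 * cue + random.gauss(0, 1)  # behavior partly tracks the cue
    n += 1
    sum_c += cue; sum_b += behavior
    sum_cc += cue * cue; sum_bb += behavior * behavior
    sum_cb += cue * behavior

cov = sum_cb / n - (sum_c / n) * (sum_b / n)
r = cov / math.sqrt((sum_cc / n - (sum_c / n) ** 2) * (sum_bb / n - (sum_b / n) ** 2))
print(f"correlation after {int(n)} readings: {r:.3f}")  # ~0.57, found blindly
```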
Statistics is a very high level concept and entirely intentional.
No statistics exist without a prior model of interpretation. It’s still high level heuristics. And it’s the very opposite of “actionable causal information regarding our every behaviour”.
Besides, “real time databases” of this kind would need to grow at such a pace that the idea wouldn’t work even if you spent ten years mapping a single second.
We don’t remotely have the computational power to map a brain, so you can imagine how absurd the idea of mapping a whole society across a span of time is.
“Statistics is a very high level concept and entirely intentional.”
So it’s not something a mechanism can do?
So it’s not something a mechanism can do?
Statistics nope, heuristics yes.
But you plan to completely map the heuristics in a brain (or a whole community) without considering the fact that the output of the heuristics changes depending on the data being analyzed. So you also need to account for ALL the data being passed.
Unless you plan to build very rough heuristics on top of heuristics in order to make it viable, underestimating the huge margin of error and the garbage results you’d obtain.
I’m not an expert but this idea seems to me implausible in the way it’s supposed to work. Now and in the future.
Hmmm. So if mechanisms aren’t doing statistics, what is?
I fear I just don’t understand your complaint(s), Abe. Why does ‘ALL’ data have to be accounted for?
Statistics presume interpretations, and so representation and meaning. That’s why lots of statistics are completely wrong as they establish wrong correlations.
Heuristics instead are just passive filters. And as BBT says, they suffer from misapplication and misuse as well. A heuristic is far from being a reliable mechanism. It’s just fast.
If you instead want to turn statistics into a purely mechanical account, then you have to take all the interpretative work and replace it with a very complex series of heuristics. And the complexity of this formulation goes through the roof, since the interpretation of statistics isn’t just a neat system but includes the totality of the behavior of the person interpreting the statistics. Maybe this guy is seeing a pattern just because he had a bad dream.
Again, you misunderstand. Statistics presume interpretation, and interpretation presumes intentional cognition, which presumes a grab-bag of different heuristic devices. ‘Meaning,’ understood as something more than a verbal shorthand (original intentionality), is an artifact of philosophy’s attempt to comprehend intentional cognition via reflection alone. Statistics requires intentional cognition to be understood, but intentionality not at all. (If you understand BBT, then you should be clear on this distinction.)
But the misunderstanding isn’t simply a misunderstanding of Blind Brain Theory, it’s the notion that ‘knowledge’ is anything more than things going systematically bump in the systematic dark, the idea that every potential confound has to be tracked down before anything can be predicted or claimed. Because doing so is impossible, why worry about it, and concentrate on what (necessarily limited) understanding a given theoretical position provides instead?
Did you not see the natural scene statistics experiments posted by ochlocrat? Have you read the study where, using hypergeometric statistics, they tease schizophrenia apart into 8 separate syndromes, each with its own associated network of single nucleotide polymorphisms whose gene products are well known, which therefore isolates points of intervention into the brain that could impact schizophrenic behavior? Well, imagine when we start doing this for all sets of behavior, all “dispositions” and “character” traits.
You make mechanical generalizations without high dimensional causal information all the time, don’t you?
I’ve read this again, trying to figure out your criticism, but I really don’t see what you’re getting at. I’ve written extensively on the kinds of tu quoque charges you level, and why they beg the question. You assume intentionality is something real, so whenever you see me use intentional terminology, you assume I presume some notion of original intentionality. Since the issue, with BBT, is whether there’s any such thing, you’re essentially arguing that I’m contradicting myself because you’re right about original intentionality. Begging the question: a real logical fallacy, in fact.
But the list goes on. Who’s talking about ‘fully deterministic systems’? I personally don’t give a damn about metaphysical reductionism (why should anybody?), but methodological reductionism certainly doesn’t require ‘complete models’ (whatever they might be). I could go on.
I have no idea what “tu quoque” has to do with this. I haven’t assumed intentionality, nor do I believe it’s something “real.” Nor do I think you used a notion of original intentionality. Nor am I disputing the premise that BBT shows intentionality doesn’t exist.
I argue about very specific aspects of what you say – a small part of the long post I’ve read above. And I’ve argued especially about two aspects that do not touch BBT in any way, the theory itself. They touch instead on your predictions about how society is going to change on the premise of BBT. The *effects* of BBT, not the merit of the theory.
I do believe in the model of BBT too; I just don’t agree with how you describe its impact in the future, or the consequences for society when BBT gets acknowledged.
I do not think BBT can be reliably used to account for all “actionable causal information regarding our every behaviour.” Fine that you disagree, but how the hell is it possible you don’t even understand my view of this?
You are saying that the behavior of some guy can be completely coded and predicted. I’m saying that, while the premise is valid, to reliably make a decent prediction you already need way more computational power than it is reasonable to expect.
Or maybe I misrepresent what you said, but that’s how I understood it.
“Why does ‘ALL’ data have to be accounted for?”
Because all data is relevant. A woman who’s been a victim of abuse is far more likely to see Donald Trump in a certain light. How do you “predict” this vote if your “mirror database,” on which you run your heuristics, doesn’t contain the whole life experience of that woman?
To simulate a slice of the world you have to simulate the whole world. There’s nothing “metaphysical” about this. Nothing operates in complete isolation. If you make a virtual perfect copy of a human being, make one grow up in New York and the other in Tokyo they are going to be very different persons. So how do you expect to predict the behavior of one of the two without feeding in your heuristics the whole context of their life experience, along their genetic wiring and development?
“presuming individuals could have done otherwise presumes that we neglect the actual sources of behaviour.”
And how do you know the actual sources of behavior? Presuming individuals couldn’t have done otherwise presumes a complete, reliable knowledge of the sources of their behavior. So how is this omniscience acquired? Where do you expect to find the computational power required to make that sort of prediction?
And the second aspect I criticize is all about how you characterize the post-semantic as a paradigm shift that revolutionizes consciousness and the brain. Yet, as you say, there’s no original intentionality to cancel. Intentionality was never there. So how can anything happen on this specific premise if we still operate under the same unchanging rules?
How can there be change if you describe a perpetual condition? It’s not a complex question, and it’s not possible to misunderstand this.
“I’m not sure there’s anything much to be done at this point save getting the word out, empowering some critical mass of people with a notion of what’s going on around them.”
Empowering? Who’s presuming intentionality, me or you?
I’m just trying to make sense out of your argument Abe (and I’m not alone).
“To simulate a slice of the world you have to simulate the whole world. There’s nothing “metaphysical” about this. Nothing operates in complete isolation. If you make a virtual perfect copy of a human being, make one grow up in New York and the other in Tokyo they are going to be very different persons. So how do you expect to predict the behavior of one of the two without feeding in your heuristics the whole context of their life experience, along their genetic wiring and development?”
Your first statement implies there’s no such thing as simulation. Otherwise, OF COURSE nothing operates in complete isolation, and this is why hidden confounds are always a potential problem. So? Your last question can be rephrased as ‘How do you expect to predict where a hurricane will make landfall?’ Pardon me if I don’t take an argument against the possibility of any prediction whatsoever very seriously. Frogs make predictions. Flies. Bacteria. Any simple, living system can capitalize on systematicities in its environment while remaining entirely oblivious to that environment.
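A toy version of that last point (mine; the “environment” below is invented and absurdly regular): a predictor that merely counts which event tends to follow which achieves near-perfect prediction while remaining entirely oblivious to what any of the events are.

```python
# A "fly-grade" predictor: exploits a regularity it never represents.
from collections import Counter, defaultdict
import random

random.seed(3)
world = ["rain", "worms", "birds"] * 300   # an invented, very regular environment
counts = defaultdict(Counter)              # counts[a][b]: times b followed a

hits = total = 0
prev = world[0]
for event in world[1:]:
    if counts[prev]:                       # guess the most frequent successor
        guess = counts[prev].most_common(1)[0][0]
        hits += (guess == event)
        total += 1
    counts[prev][event] += 1
    prev = event

print(f"accuracy: {hits}/{total}")  # near-perfect, with zero 'understanding'
```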
“And the second aspect I criticize is all about how you characterize the post-semantic as a paradigm shift that revolutionizes consciousness and the brain. Yet, as you say, there’s no original intentionality to cancel. Intentionality was never there. So how can anything happen on this specific premise if we still operate under the same unchanging rules?”
This is where I read the tu quoque. Intentional (social) cognition is very real (likely consisting of a bundled hierarchy of heuristic devices), even if intentionality is not. It’s the breakdown in intentional cognition that’s the problem.
My question, Abe, is why assume I’ve bungled my own theoretical position, as opposed to assume that you’re just reading something wrong?
Your last question can be rephrased as ‘How do you expect to predict where a hurricane will make landfall?’ Pardon me if I don’t take an argument against the possibility of any prediction whatsoever very seriously.
Finally we’re getting somewhere.
My own point is that making a prediction about a human being is just way more complex than about a hurricane. For example, hurricanes don’t store memories, whereas all past experiences might play a role when it comes to the behavior of a human being. The task is more like trying to predict an earthquake, and we really aren’t making any progress with that (the predictions you can make are so vague they are almost pointless), because we don’t remotely have the data we need.
That’s why I say I see it as improbable (not theoretically impossible). The higher-level, the more complex the prediction. And when you throw in something as high-level as “politics,” then I really wonder what sort of use you have for your approximate prediction.
And again, “presuming individuals could have done otherwise presumes that we neglect the actual sources of behaviour.”
Maybe I misunderstand the point, but for me that implies we don’t have some rough prediction of the sources of behavior. Because if we have a very rough sketch, then of course this individual will often “do otherwise”.
You presume we are going to design better and more powerful heuristics than those already implemented in the brain. And I simply see this as too complex a task, and especially not an imminent achievement. BBT describes how consciousness works, but it doesn’t make the task of replicating consciousness artificially any simpler.
My question, Abe, is why assume I’ve bungled my own theoretical position, as opposed to assume that you’re just reading something wrong?
Because you insist on calling it post-semantic, as some event in the foreseeable future, and that implies change. You often describe a post-semantic world happening sometime soon. But of course this post-semantic world is the present status.
So what’s the point of disguising it as a future perspective? Nothing is going to change semantically. In the post-semantic world of the future we’re still going to talk with the same language. The lifestyle dramatically changes, but it’s qualitatively the same as today. There is no way of qualitatively altering consciousness. We cannot transform into something we already are.
I’m not sure if you see this the same way I do.
If you instead call it post-semantic because consciousness is going to be qualitatively altered, then you have to explain how.
We evolved to communicate behaviour absent information regarding the biological sources of behaviour
Yet you say the alternative is the “possibility of a prediction”, knowing that heuristics work on the premise of possible failure and incompleteness.
The point is: on one side you have information neglect, we ignore the information, on the other we assume the information to exist, but it’s still largely “absent”, because we don’t have complete models, and heuristics are incomplete by definition. So we still don’t have the information we need.
We’ll still need to produce solutions in a timely manner. That variable, time, is a constant that isn’t going to change, so the brain process isn’t going to change either, because we’ll still need heuristics and approximations. And we’ll still need to produce results by using largely insufficient information. How’s this any different?
Free Will, as you say (and I agree), operates on the premise of neglect (or insufficiency, I’d add). But if we don’t have complete information, how can you say the scenario changes?
If humans were harder to predict than hurricanes no one would drive. Humans are far more predictable than hurricanes in many respects, far less so in others. When it comes to high level generalizations of complex, nonlinear systems, cartoons are all we have, and the best we can do (as I’ve said here many, many times) is hope something of the offending dynamic shines through. If this is your point, then it’s trivial. If it’s that predictions cannot be made period, then it’s pretty clearly false.
As for the rest, you don’t seem to understand the distinction between intentionality (what traditional philosophy thinks we’re doing) and intentional cognition (the information-neglecting systems brains use to predict other brains). This is causing you to systematically misread my arguments, egregiously in some cases. Some of this stuff (like, “You presume we are going to design better and more powerful heuristics than those already implemented in the brain”) I find downright mystifying. You do understand the way heuristic cognition turns on stable background ecologies, don’t you?
“Because all data is relevant.”
Not for organisms it isn’t.
Not for organisms it isn’t.
I got completely lost in Scott’s line of thought, so I cannot parse it anymore.
I thought the whole point was to reduce human beings to just another step in the causal chain. A post-semantic description is a description that has no use for “organism” as a concept.
What I can consider an organism is a nice slice of the world defined linguistically, same as all the “emergent” descriptions we need in the absence of “deep information” models. So it’s entirely within intentional cognition and the way it represents the world. If it’s emergent, then it’s within intentional cognition.
And consequently I cannot conceive any post-semantic description that makes sense unless it’s COMPLETE (otherwise it’s not deep at all and just the same as usual incomplete intentional cognition).
Scott won’t explain me this, so I cannot move from there.
So every time Scott this post-semantic scenario while relying on incomplete and approximate reductionist methods I simply think it’s a way to not be radical and do two contradicting things.
Post-semantic, deep information, crash space and all that, in the way I understand those, require complete reductionism. Otherwise they are only strategies whose purpose is to integrate more information, but that aren’t fundamentally different from all previous science.
“Post-semantic, deep information, crash space and all that, in the way I understand those, require complete reductionism. Otherwise they are only strategies whose purpose is to integrate more information, but that aren’t fundamentally different from all previous science.”
‘Fundamentally different’? The point is to render understanding of the human continuous with the natural scientific worldview. The point is to escape all the dead-ends of traditional philosophy and get down to the business of understanding humanity in thoroughly naturalistic terms.
I find ‘complete reduction’ equally mystifying. All science is approximate in some sense, even the Standard Model. If you look at it algorithmically, you can see science as being in the compression business, providing low-dimensional recipes unlocking this or that power to do this or that. It’s heuristic as well, only in a manner that is open to precursors–causes.
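A toy version of the compression point (mine; the data are invented): a thousand observations collapse into a two-parameter recipe plus an error tolerance.

```python
# "Science as compression": 1,000 noisy points reduce to a slope, an
# intercept, and a worst-case error.
import random

random.seed(2)
xs = [i / 100 for i in range(1000)]
ys = [3.0 * x + 1.5 + random.gauss(0, 0.1) for x in xs]  # "the world"

# Ordinary least squares by hand: the whole dataset compresses to (m, b).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - m * mx
worst = max(abs(y - (m * x + b)) for x, y in zip(xs, ys))
print(f"1000 points -> y = {m:.3f}x + {b:.3f}, max error {worst:.3f}")
```

The recipe is heuristic and approximate, but it remains open to more data: feed it more points and the parameters update, which is the sense of “open to precursors” above.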
Is this what’s tripping you up? Because intentional heuristics turn on ignoring information, are you assuming that causal cognition has to turn on ALL information (as opposed to remaining open to MORE information)? At some level, you have a set of commitments that prevent you from getting this, and causing you to continually attribute commitments to me that I simply do not possess.
I find ‘complete reduction’ equally mystifying. All science is approximate in some sense, even the Standard Model.
Yes, but I see all science as happening within intentional cognition. That’s why I see BBT as “more science,” without any peculiar quality.
That’s why there was that discussion across blogs about downward causation, emergentism and all that. They are all artifacts of consciousness and of limited knowledge. We need those because “deep information” is not something we have available. Practical science is just about having models and replacing them when better ones produce better results. And we assume this efficacy comes from our model getting closer to the fundamental reductionism we move toward. If the map is closer to the territory, we have better luck navigating it.
(deep-er information, okay. We start from one model and proceed to better, more detailed, high-er resolution ones. But if you use ‘deep information’ as qualitatively different from shallow information then I disagree. I don’t see that distinction.)
But again, I read your words as describing BBT as something else: not simply more science, but a threshold to pass.
I explained in past comments how I see the effects of BBT: I see it as a widening of the keyhole. BBT is a more precise model of how the brain works, and being a better description means better strategies too (even at a high level, like the “You’re not so smart” kind of thing). The point is: the keyhole has been widened constantly. That’s the history of evolution. The keyhole is ever changing.
While instead, I understand from your words that you see BBT as stepping *out* of that keyhole, as seizing that outside and making it a new condition. You seem to see the post-semantic as something we can achieve.
If we talk in terms of human (right now) and post-human (future), then you imply change, and you imply a sharp threshold being passed. But as I said BBT describes “perpetuity”, what consciousness has always been. So there’s no transition between two positions.
But no transition doesn’t mean no change at all. There’s transition between two theories of the mind. One incredibly vague and ephemeral, one concrete. The keyhole can be widened, more information can get integrated into our models, so that the models get better and more precise (they employ deep-er information, but not deep information, at least the way I intend this).
I simply wouldn’t describe this as post-human, as, the way I see it, human evolution progressed all along, smoothly. It’s always changing gradually and we are simply not the kind of people we were thousands of years ago, and we won’t be the same in the future.
So, you describe this post-intentional as some kind of precise event that is going to happen. A semantic apocalypse. A kind of singularity. I instead see just the same process operating in the same way all along.
If we’ve always lived in the post-semantic, post-semantic can’t define a future event.
Because intentional heuristics turn on ignoring information, are you assuming that causal cognition has to turn on ALL information (as opposed to remaining open to MORE information)?
You assume the only difference between causal cognition and intentional heuristics is neglect, but actually no one really believes they make choices in absolute certainty.
We are very aware we’re missing information and we are constantly anxious about it.
What people don’t agree with is where to look. What people don’t agree with is that BBT is the answer, but no one assumes we know everything. In fact most people speak of “soul” as something that is there but we can’t get to. That’s awareness of missing information.
What BBT says is that not only is there missing information, but that we can get to it instead of having it blocked off.
And that’s why I said you can widen the keyhole (you can reach some information) but you cannot step outside (because stepping outside requires unlocking deep information, the complete account, instead of just one chunk you decide is somehow “more” important).
Sorry, Abe. You just keep butchering my view! I don’t think of the semantic apocalypse as a precise event, but something ongoing that ends with the utter collapse of intentional cognition. I have no clue as why you think I’m arguing we can ‘step outside the keyhole,’ unless you mean I’m arguing that we can leave the philosophical tradition and its perpetually controversial intentional posits behind. I could go on.
I don’t know who it is you’re arguing with, but it’s not me, and it makes no sense debating someone who’s intent on debating himself.
You just keep butchering my view! I don’t think of the semantic apocalypse as a precise event, but something ongoing that ends with the utter collapse of intentional cognition.
I don’t see it as “butchering” when what you’ve done here is moving the event to the end of the process.
That utter collapse must happen at some point. So I don’t think I’m misrepresenting it if I call it an “event” and assume it happens in the future.
So, again, we agree on the process. We disagree on the fact it has an ending. Because this “utter collapse” is what I think requires full reductionism.
This is what we disagree on:
1- You say the semantic apocalypse is a process with a definite end. I instead say it’s a process but with no end.
2- You say it’s a process about to start. I instead say it’s a process we’ve been part of all along.
3- You define the term as the process itself, where for me the term more clearly defines the moment that process concludes: what happens once that utter collapse has happened.
If I’ve butchered your view I haven’t done it intentionally. I just describe what I understand.
I appreciate it isn’t intentional Abe. But I have to call it quits.
I think I’d agree with Abalieno that reductionism wont happen.
However, I think that’s irrelevant. Dirty reductionism can happen, probably* will happen – reductionism without knowing all the details. And it can be powerful, if only like a swung club is powerful – brutish and crude, but clearly clubs work. And it’ll just be a club swung in a place it’s never been felt before, jamming a whole bunch of systems (politics/democracy, for one)
I’m not sure why Abalieno is adamant to get it accepted that pure reductionism wont happen. But I think he’s quite right about that.
* Hey, I enjoy some denial, k?
Because in that case what is going to happen isn’t different from usual scientific progress.
There isn’t anything particularly “new” besides the scale of the change.
We obtain more information and integrate that to gain more accuracy in our models. Same as usual.
There are positive changes, like those described here:
But I don’t think a bland form of reductionism is going to have the impact Scott describes in the post above.
Moreover, I’m adamant about splitting this hair because it’s crucial for the other side of the theory discussed elsewhere.
The post-semantic or post-human needs to require that, or you’re still within the human as usual.
Scott implied the third person as opposed to the first person. That requires exiting perspective, and exiting perspective requires a radical approach.
Otherwise you just “emulate” the third person while still being rooted in the first.
I’m just saying that a decent notion of post-semantic cannot rely on a crude club, because a crude club is what we’ve used since the beginning.
And if you accept this, then there’s a cascading effect toward that other theory.
It’s not about exiting perspective, it’s about understanding what perspective amounts to from the broadest, most powerful perspective we have available.
That’s enough for now, Abe. This is getting tiresome. You have no interest in understanding my view, just misinterpreting it to fit with some nagging intuition or something. All you’re doing is making that more and more clear.
Well, since “person” is a heuristic device used by some processes to couple to the activities of some processes in view of neglecting deep causal and path/historical information concerning those same processes, I don’t think it’s too much of stretch to speculate that Scott’s position actually maintains that the “first person” and “third person” themselves are heuristic devices. This is only a conundrum from within the position that absolutizes the Subject-Object heuristic as an a priori.
[…] As a bona fide theory of cognition, HNT bears as much on artificial cognition as on biological cognition, and as such, can be used to understand and navigate the already radical and accelerating transformation of our cognitive ecologies. HNT scales, from the subpersonal to the social, and this means that HNT is relevant to the technological madness of the now. […]
oi, my young Yuval… hogwash in troves… delusions of nihilistic grandeur.. myth of eternal progress.. perpetuum mobile much?
[…] What Pinker would insist is that enhancement will allow us to overcome our Pleistocene shortcomings, and that our hitherto inexhaustible capacity to adapt will see us through. Even granting the technical capacity to so remediate, the problem with this reformulation is that transforming intentional cognition to account for transforming social environments automatically amounts to a further transformation of social environments. The problem, in other words, is that Enlightenment entails the end of invariances, the end of shared humanity, in fact. Yuval Harari (2017) puts it with characteristic brilliance in Homo Deus: […]
[…] I encourage everyone to ask why, when it comes to the topic of meaning, we insist on believing in happy endings? We evolved to neglect our fundamental ecological nature, […]
[…] however, post-truth is a prediction come to pass—a manifestation of what I’ve long called the ‘semantic apocalypse.’ Far from a perfect storm of suspects coming together in unlikely ways to murder ‘all of […]
[…] Deus, the default presumption that meaning somehow lies outside the circuit of ecology. Harari, recall, realizes that humanism, the ‘man-the-meaning-maker’ narrative of Western civilization, is […]