Three Pound Brain

No bells, just whistling in the dark…


The Crash of Truth: A Critical Review of Post-Truth by Lee C. McIntyre

by rsbakker

Lee McIntyre is a philosopher of science at Boston University and the author of Dark Ages: The Case for a Science of Human Behavior. I read Post-Truth on the basis of Fareed Zakaria’s enthusiastic endorsement on CNN’s GPS, so I fully expected to like it more than I ultimately did. It does an admirable job scouting the cognitive ecology of post-truth, but because it fails to understand that ecology in ecological terms, the dynamic itself remains obscured. The best McIntyre can do is assemble and interrogate the usual suspects. As a result, his case ultimately devolves into what amounts to yet another ingroup appeal.

As perhaps we should expect, given the actual nature of the problem.

McIntyre begins with a transcript of an interview where CNN’s Alisyn Camerota presses Newt Gingrich at the 2016 Republican convention on Trump’s assertions regarding crime:

GINGRICH: No, but what I said is equally true. People feel more threatened.

CAMEROTA: Feel it, yes. They feel it, but the facts don’t support it.

GINGRICH: As a political candidate, I’ll go with how people feel and let you go with the theoreticians.

There’s a terror you feel in days like these. I felt that terror most recently, I think, watching Sarah Huckabee Sanders insisting that the outgoing National Security Advisor, General H. R. McMaster, had declared that no one had been tougher on Russia than Trump after a journalist had quoted him saying almost exactly otherwise. I had been walking through the living room and the exchange stopped me in my tracks. Never in my life had I witnessed a White House official so fecklessly, so obviously, contradict what everyone in the room had just heard. It reminded me of the psychotic episodes I witnessed as a young man working tobacco with a friend who suffered schizophrenia—only this was a social psychosis. Nothing was wrong with Sarah Huckabee Sanders. Rather than lying in malfunctioning neural machinery, this discrepancy lay in malfunctioning social machinery. She could say what she said because she knew that statements appearing incoherent to those knowing what H. R. McMaster had actually said would not appear as such to those ignorant of or indifferent to what he had actually said. She knew, in other words, that even though the journalists in the room saw this:

[Image: Disney’s faux New York skyline viewed from the side, the buildings revealed as prop facades.]

given the information available to their perspective, the audience that really mattered would see this:

[Image: the same skyline viewed head-on, appearing as a coherent city street.]

which is to say, something rendered coherent for neglecting that information.

The task McIntyre sets himself in this brief treatise is to explain how such a thing could have come to pass, to explain, not how a sitting President could lie, but how he could lie without consequences. When Sarah Huckabee Sanders asserts that H. R. McMaster’s claim that the Administration is not doing enough is actually the claim that no Administration has done more, she’s relying on innumerable background facts that simply did not obtain a mere generation ago. The social machinery of truth-telling has fundamentally changed. If we look at the sideways picture of Disney’s faux New York skyline as the ‘deep information view,’ and the head-on picture as the ‘shallow information view,’ the question becomes one of how she could trust that her audience, despite the availability of deep information, would nevertheless affirm the illusion of coherence provided by the shallow information view. As McIntyre writes, “what is striking about the idea of post-truth is not just that truth is being challenged, but that it is being challenged as a mechanism for asserting political dominance.” Sanders, you could say, is availing herself of new mechanisms, ones antagonistic to the traditional mechanisms of communicating the semantic authority of deep information. Somehow, someway, the communication of deep information has ceased to command the kinds of general assent it once did. It’s almost preposterous on the face of it: in attributing Trump’s claims to McMaster, Sanders is gambling that somehow, either by dint of corruption, delusion, or neglect, her false claim will discharge functions ideally belonging to truthful claims, such as informing subsequent behaviour. For whatever reason, the circumstances once preventing such mass dissociations of deep and shallow information ecologies have yielded to circumstances that no longer do.

McIntyre provides a chapter-by-chapter account of those new circumstances. For reasons that will become apparent, I’ll skip his initial chapter, which he devotes to defining ‘post-truth,’ and return to it at the end.

Science Denial

He provides clear, pithy outlines of the history of the tobacco industry’s seminal decision to argue the science, to wage what amounts to an organized disinformation campaign. He describes the ways resource companies adapted these tactics to scramble the message and undermine the authority of climate science. And by ‘disinformation,’ he means this literally, given “that even while ExxonMobil was spending money to obfuscate the facts about climate change, they were making plans to explore new drilling opportunities in the Arctic once the polar ice cap had melted.” This part of the story is pretty well-known, I think, but McIntyre tells the tale in a way that pricks the numbness of familiarity, reminding us of the boggling scale of what these campaigns achieved: generating a political/cultural alliance that is not simply bent on, but actively hastening, untold misery and global economic loss in the name of short-term parochial economic gain.

Cognitive Bias

He gives a curiously (given his background) two-dimensional sketch of the role cognitive bias plays in the problem, focusing primarily on cognitive dissonance, our need to minimize cognitive discrepancies, and the backfire effect, how counter-arguments actually strengthen, as opposed to mitigate, commitment to positions. (I would recommend Steven Sloman and Philip Fernbach’s The Knowledge Illusion for a more thorough consideration of the dynamics involved.) He discusses research showing how social identification, even when cued by things as flimsy as coloured wristbands, profoundly transforms our moral determinations. But he underestimates, I think, the profound nature of what Dan Kahan and his colleagues call the “Tragedy of the Risk-Perception Commons,” the individual rationality of espousing collectively irrational claims. There’s so much research directly pertinent to his thesis that he passes over in silence, especially that belonging to ecological rationality.

Traditional versus Social Media

If McIntyre’s consideration of the cognitive science left me dissatisfied, I thoroughly enjoyed his consideration of media’s contribution to the problem of post-truth. He reminds us that the existence of entities, like Fox News, disguising advocacy as disinterested reporting, is the historical norm, not the exception. Disinterested journalistic reporting was more the result of how the AP, which served papers grinding different political axes, required stories expressing as little overt bias as possible. Rather than seize upon this ecological insight (more on this below), he narrates the gradual rise of television news from small, money-losing network endeavours to money-making enterprises culminating in CNN, Fox, MSNBC, and the return of ‘yellow journalism.’

He provides a sobering assessment of the eclipse of traditional media, and the historically unprecedented rise of social media. Here, more than anywhere else, we find McIntyre taking steps toward a genuine cognitive ecological understanding of the problem:

“In the past, perhaps our cognitive biases were ameliorated by our interactions with others. It is ironic to think that in today’s media deluge, we could perhaps be more isolated from contrary opinion than when our ancestors were forced to live and work among other members of their tribe, village, or community, who had to interact with one another to get information.”

Since his understanding of the problem is primarily normative, however, he fails to see how cognitive reflexes that misfire in experimental contexts, and so strike observers as normative breakdowns, actually facilitate problem-solving in ancestral contexts. What he notes as ‘ironic’ should strike him (and everyone else) as astounding, as one of the doors that any adequate explanation of post-truth must kick down. But it is heartening, I have to say, to see these ideas begin to penetrate more and more brainpans. Despite the insufficiency of his theoretical tools, McIntyre glimpses something of the way cognitive technology has impacted human cognitive ecology: “Indeed,” he writes, “what a perfect storm for the exploitation of our ignorance and cognitive biases by those with an agenda to put forward.” But even if the ‘perfect storm’ metaphor captures the complex relational nature of what’s happened, it implies that we find ourselves suffering a spot of bad luck, and nothing more.

Postmodernism

At last he turns to the role postmodernism has played in all this. This is the only chapter where I smelled a ‘legacy effect,’ the sense that the author is trying to shoehorn in some independently published material.

He acknowledges that ‘postmodernism’ is hopelessly overdetermined, but he thinks two theses consistently rise above the noise: the first is that “there is no such thing as objective truth,” and the second is “that any profession of truth is nothing more than a reflection of the political ideology of the person who is making it.”

To his credit, he’s quick to pile on the caveats, to acknowledge the need to critique both the possibility of absolute truth and the social power of scientific truth-claims. Because of this, it quickly becomes apparent that his target isn’t so much ‘postmodernism’ as it is social constructivism, the thesis that ‘truth-telling,’ far from connecting us to reality, bullies us into affirming interest-serving constructs. This, as it turns out, is the best way to think of post-truth “[i]n its purest form”: as “when one thinks that the crowd’s reaction actually does change the facts about a lie.”

For McIntyre, in other words, post-truth is the consequence of too many people believing in social constructivism—of presuming, that is, the wrong theory of truth. His approach to the question of post-truth is that of a traditional philosopher: if the failure is one of correspondence, then the blame has to lie with anti-correspondence theories of truth. The reason Sarah Huckabee Sanders could lie about McMaster’s final speech turns on (among other things) the widespread theoretical belief that there is ‘no such thing as objective truth,’ that it’s power plays all the way down.

Thus the (rather thick) irony of citing Daniel Dennett—an interpretivist!—stating that “what the postmodernists did was truly evil” so far as they bear responsibility “for the intellectual fad that made it respectable to be cynical about truth and facts.”

The sin of the postmodern left has very, very little to do with generating semantically irresponsible theories. Dennett’s own positions are actually a good deal more radical in this regard! When it comes to the competing narratives involving ‘meaning of’ questions and answers, Dennett knows we have no choice but to advert to the ‘dramatic idiom’ of intentionality. If the problem were one of providing theoretical ammunition, then Dennett would be as much a part of the problem as Baudrillard.

And yet McIntyre caps Dennett’s assertion by asking, “Is there more direct evidence than this?” Not a shining moment, dialectically speaking.

I agree with him that tools have been lifted from postmodernists, but they have been lifted from pragmatists (Dennett’s ilk) as well. Talk of ‘stances’ and ‘language games’ is also rife on the right! And I should know. What’s happening now is the consequence of a trend that I’ve been battling since the turn of the millennium. All my novels constitute self-conscious attempts to short-circuit the conditions responsible for ‘post-truth.’ And I’ve spent thousands of hours trolling the alt-Right (before they were called such) trying to figure out what was going on. The longest online debate I ever had was with a fundamentalist Christian who belonged to a group using Thomas Kuhn to justify their belief in the literal truth of Genesis.

Defining Post-truth

Which brings us, as promised, back to the book’s beginning, the chapter that I skipped, where, in the course of refining his definition of post-truth, McIntyre acknowledges that no one knows what the hell truth is:

“It is important at this point to give at least a minimal definition of truth. Perhaps the most famous is that of Aristotle, who said: ‘to say of what is that it is not, or of what is not, that it is, is false, while to say of what is that it is, and of what is not that it is not, is true.’ Naturally, philosophers have fought for centuries over whether this sort of “correspondence” view is correct, whereby we judge the truth of a statement only by how well it fits reality. Other prominent conceptions of truth (coherentist, pragmatist, semantic) reflect a diversity of opinion among philosophers about the proper theory of truth, even while—as a value—there seems little dispute that truth is important.”

He provides a minimal definition with one hand—truth as correspondence—which he immediately admits is merely speculative! Truth, he’s admitting, is both indispensable and inscrutable. And yet this inscrutability, he thinks, need not hobble the attempt to understand post-truth: “For now, however, the question at hand is not whether we have the proper theory of truth, but how to make sense of the different ways that people subvert truth.”

In other words, we don’t need to know what is being subverted to agree that it is being subverted. But this goes without saying; the question is whether we need to know what is being subverted to explain what McIntyre is purporting to explain, namely, how truth is being subverted. How do we determine what’s gone wrong with truth when we don’t even know what truth is?

McIntyre begins Post-Truth, in other words, by admitting that no canonical formulation of his explanandum exists, that it remains a matter of mere speculation. Truth remains one of humanity’s confounding questions.

But if truth is in question, then shouldn’t the blame fall upon those who question truth? Perhaps the problem isn’t this or that philosophy so much as philosophy itself. We see as much at so many turns in McIntyre’s account:

“Why not doubt the mainstream news or embrace a conspiracy theory? Indeed, if news is just political expression, why not make it up? Whose facts should be dominant? Whose perspective is the right one? Thus is postmodernism the godfather of post-truth.”

Certainly, the latter two questions belong to philosophy as a whole, and not postmodernism in particular. To that extent, the two former questions—so far as they follow from the latter—have to be seen as falling out of philosophy in general, and not just some ‘philosophical bad apples.’

But does it make sense to blame philosophy, to suggest we should have never questioned the nature of truth? Of course not.

The real question, the one that I think any serious attempt to understand post-truth needs to reckon with, is the one McIntyre breezes by in the first chapter: Why do we find truth so difficult to understand?

On the one hand, truth seems to be crashing. On the other, we have yet to take a step beyond Aristotle when it comes to answering the question of the nature of truth. The latter is the primary obstacle, since the only way to truly understand the nature of the crash is to understand the nature of truth. Could the crash and the inscrutability of truth be related? Could post-truth somehow turn on our inability to explain truth?

Adaptive Anamorphosis

Truth lies murdered in the Calais Coach, and McIntyre has assembled all the suspects: denialism, cognitive biases, traditional and social media, and (though he knows it not) philosophy. He knows all of them had some part to play, either directly, or as accessories, but the Calais Coach remains locked—his crime scene is a black box. He doesn’t even have a body!

For me, however, post-truth is a prediction come to pass—a manifestation of what I’ve long called the ‘semantic apocalypse.’ Far from a perfect storm of suspects coming together in unlikely ways to murder ‘all of factual reality,’ it is an inevitable consequence of our rapidly transforming cognitive ecologies.

Biologically speaking, human communication and cooperation represent astounding evolutionary achievements. Human cognition is the most complicated thing human cognition has ever encountered: only now are we beginning to reverse-engineer its nature, and to use that knowledge to engineer unprecedented cognitive artifacts. We know that cognition is structurally and dynamically composite, heavily reliant on heuristic specialization to solve its social and natural environments. The astronomical complexity of human cognition means that sociocognition and metacognition are especially reliant on composite, source-insensitive systems, devices turning on available cues that correlate, given that various hidden regularities obtain, with specific outcomes. Despite being legion, we manage to synchronize with our fellows and our environments without the least awareness of the cognitive machinery responsible.

We suffer medial neglect, a systematic insensitivity to our own nature—a nature that includes this insensitivity. Like every other organism on this planet, we cognize without cognizing the concurrent act of cognition. Well, almost like every other organism. Where other species utterly depend on the reliability of their cognitive capacities and have no way of repairing failures in various enabling—medial—systems, we do have recourse. Despite our blindness to the machinery of human cognition, we’ve developed a number of different ways to nudge that machinery—whack the TV set, you could say.

Truth-talk is one of those ways. Truth-talk allows us to minimize communicative discrepancies absent, once again, sensitivity to the complexities involved. Truth-talk provides a way to circumvent medial neglect, to resolve problems belonging to the enabling dimension of cognition despite our systematic insensitivity to the facts of that dimension. When medial issues—problems pertaining to cognitive function—arise, truth-talk allows for the metabolically inexpensive recovery of social and environmental synchronization. Incompatible claims can be sorted, at least so far as our ancestors required in prehistoric cognitive ecologies. The tribe can be healed, despite its profound ignorance of natures.

To say human cognition is heuristic is to say it is ecologically dependent, that it requires the neglected regularities underwriting the utility of our cues remain intact. Overthrow those regularities, and you overthrow human cognition. So, where our ancestors could simply trust the systematic relationship between retinal signals and environments while hunting, we have to remove our VR goggles before raiding the fridge. Where our ancestors could simply trust the systematic relationship between the text on the page or the voice in our ear and the existence of a fellow human, we have to worry about chatbots and ‘conversational user interfaces.’ Where our ancestors could automatically depend on the systematic relationship between their ingroup peers and the environments they reported, we need to search Wikipedia—trust strangers. More generally, where our ancestors could trust the general reliability (and therefore general irrelevance) of their cognitive reflexes, we find ourselves confronted with an ever growing and complicating set of circumstances where our reflexes can no longer be trusted to solve social problems.

The tribe, it seems, cannot be healed.

And, unfortunately, this is the very problem we should expect given the technical (tactical and technological) radicalization of human cognitive ecology.* Philosophy, and now cognitive science, provide the communicative tactics required to neutralize (or ‘threshold’) truth-talk. Cognitive technologies, meanwhile, continually complicate the once direct systematic relationships between our suites of cognitive reflexes and our social and natural environments. The internet doesn’t simply render the sum of human knowledge available, it renders the sum of human rationalization available as well. The curious and the informed, meanwhile, no longer need suffer the company of the incurious and the uninformed, and vice versa. The presumptive moral superiority of the former stands revealed, and in ever greater numbers the latter counter-identify, with a violence aggravated by phenomena such as the ‘online disinhibition effect.’ (One thing McIntyre never pauses to consider is the degree to which he and his ilk are hated, despised, so much so as to see partners in traditional foreign adversaries, and to think lies and slander simply redress lies and slander.) Populations begin spontaneously self-selecting. Big data identifies the vulnerable, who are showered with sociocognitive cues—atrocity tales to threaten, caricatures to amuse—engineered to provoke ingroup identification and outgroup alienation. In addition to ‘backfiring,’ counter-arguments are perceived as weapons, evidence of outgroup contempt for you and your own. And as the cognitive tactics become ever more adept at manipulating our biases, ever more scientifically informed, and as the cognitive technology becomes ever more sophisticated, ever more destructive of our ancestral cognitive habitat, the break between the two groups, we should expect, will only become more, not less, profound.

None of this is intuitive, of course. Medial neglect means reflection is source blind, and so inclined to conceive things in super-ecological terms. Thus the value of the prop building analogy I posed at the beginning.

Disney’s massive Manhattan anamorph depends on the viewer’s perspectival position within the installation to assure the occlusion of incompatible information. The degrees of cognitive freedom this position possesses—basically, how far one can wander this way and that—depend on the size and sophistication of the anamorph. The stability of the illusion, in other words, entirely depends on the viewer: the deeper one investigates, the less stable the anamorph becomes. The anamorph’s dependence on cognitive ‘sweet spots’ is its signature vulnerability.

The cognitive fragility of the anamorph, however, resides in the fact that we can move, while it cannot. Overcoming this fragility, then, requires either 1) de-animating observation, 2) complicating the anamorph, or 3) animating the anamorph. The problem we face can be understood as the problem of adaptive cognitive anamorphosis, the way cognitive science, in combination with cognitive technology, enables the de-animation of information consumers by gaming sociocognitive cues, while both complicating and animating the artifactual anamorphic information they consume.

Once a certain threshold is crossed, Sarah Huckabee Sanders can lie without shame or apology on national television. We don’t know what we don’t know. McIntyre references the notorious Dunning-Kruger effect, the way cognitive incompetence correlates with incompetent assessments of competence, but the underlying mechanism is more basic: cognitive systems lacking access to information function independently of that information. Medial neglect assures we take the sufficiency of our perspectives for granted absent information indicating insufficiency or ‘medial misalignment.’ Trusting our biology and community is automatic. Perhaps we refuse to move, to even consider the information belonging to the deep information view:

[Image: the sideways view of the prop set, the facades revealed.]

But if we do move, the anamorph, thanks to cognitive technology, adapts, the prop-facades grow prop sides, and the deep (globally synchronized) information presented above has to compete with ‘faux deep’ information. The question becomes one of who has been systematically deceived—a question that ingroup biases have already answered in illusion’s favour. We can return to our less inquisitive peers and assure them they were right all along.

What is ‘post-truth’? Insofar as it names anything, it refers to the diminishing capacity of globally, versus locally, synchronized claims to drive public discourse. It’s almost as if, via technology, nature is retooling itself to conceal itself by creating adaptive ‘faux realities.’ It’s all artifactual, all biologically ‘constructed’: the question is whether our cognitive predicament facilitates global (or deep) synchronization geared to what happens to be the case, or facilitates local (or shallow) synchronization geared to ingroup expectations and hidden political and commercial interests.

There’s no contest between spooky correspondence and spooky construction. There’s no ‘assertion of ideological supremacy,’ just cognitive critters (us) stranded in a rapidly transforming cognitive ecology that has become too sophisticated to see, and too powerful to credit. Post-truth, in other words, is an inevitable consequence of scientific progress, particularly as it pertains to cognitive technologies.

Sarah Huckabee Sanders can lie without shame or apology on national television because Trump was able to lure millions of Americans across a radically transformed (and transforming) anamorphic threshold. And we should find this terrifying. Most doomed democracies elect their executioner. In his The Death of Democracy: Hitler’s Rise to Power, Benjamin Carter Hett blames the success of Nazism on the “reality deficit” suffered by the German people. “Hostility to reality,” he writes, “translated into contempt for politics, or, rather, desire for a politics that was somehow not political: a thing that can never be” (14). But where Germany in the 1930s had every reason to despise the real, “a lost war that had cost the nation almost two million of her sons, a widely unpopular revolution, a seemingly unjust peace settlement, and economic chaos accompanied by huge social and technological change” (13), America finds itself suffering only the latter. The difference lies in the way social and technological change allows for the cultivation and exploitation of this hostility in an age of unparalleled peace and prosperity. In the German case, the reality itself drove the populace to embrace atavistic political fantasies. Thanks to technology, we can now achieve the same effect using only human cognitive shortcomings and corporate greed.

Buckle up. No matter what happens to Trump, the social dysfunction he expresses belongs to the very structure of our civilization. Competition for the market he’s identified is only going to intensify.

 


Enlightenment How? Omens of the Semantic Apocalypse

by rsbakker

“In those days the world teemed, the people multiplied, the world bellowed like a wild bull, and the great god was aroused by the clamor. Enlil heard the clamor and he said to the gods in council, “The uproar of mankind is intolerable and sleep is no longer possible by reason of the babel.” So the gods agreed to exterminate mankind.” –The Epic of Gilgamesh

We know that human cognition is largely heuristic, and as such dependent upon cognitive ecologies. We know that the technological transformation of those ecologies generates what Pinker calls ‘bugs,’ heuristic miscues due to deformations in ancestral correlative backgrounds. In ancestral times, our exposure to threat-cuing stimuli possessed a reliable relationship to actual threats. Not so now, thanks to things like the nightly news, generating (via, Pinker suggests, the availability heuristic (42)) exaggerated estimations of threat.
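
To make the miscue concrete, here is a minimal toy simulation of the dynamic (my own sketch, not Pinker’s; every rate and bias figure in it is an arbitrary assumption): an estimator that relies on whatever instances come to mind tracks the true threat rate when exposure is unbiased, and wildly overshoots it once exposure is filtered through threat-hungry reporting.

```python
# Toy illustration of an availability-style miscue (illustrative numbers only).
import random

random.seed(0)

TRUE_THREAT_RATE = 0.01   # assumed proportion of events that are actual threats
REPORTING_BIAS = 50.0     # threats assumed ~50x more likely to be broadcast

# A population of events; True marks a threat.
events = [random.random() < TRUE_THREAT_RATE for _ in range(100_000)]

# "Ancestral" exposure: a uniform sample of events actually witnessed.
witnessed = random.sample(events, 1_000)

# "Nightly news" exposure: each event is broadcast with a probability
# heavily skewed toward threats.
def broadcastable(is_threat: bool) -> bool:
    base_rate = 0.001
    return random.random() < (base_rate * REPORTING_BIAS if is_threat else base_rate)

broadcast = [e for e in events if broadcastable(e)]

def availability_estimate(sample) -> float:
    # Estimate threat frequency from the instances one was exposed to,
    # which is all the heuristic has to work with.
    return sum(sample) / len(sample)

print("true threat rate:        ", TRUE_THREAT_RATE)
print("estimate from witnessing:", round(availability_estimate(witnessed), 4))
print("estimate from the news:  ", round(availability_estimate(broadcast), 4))
```

The heuristic itself never changes in this toy; only the correlative background (what one happens to be exposed to) does, and that alone is enough to wreck the estimate.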

The toll of scientific progress, in other words, is cognitive ecological degradation. So far that degradation has left the problem-solving capacities of intentional cognition largely intact: the very complexity of the systems requiring intentional cognition has hitherto rendered cognition largely impervious to scientific renovation. Throughout the course of revolutionizing our environments, we have remained a blind-spot, the last corner of nature where traditional speculation dares contradict the determinations of science.

This is changing.

We see animals in charcoal across cave walls so easily because our visual systems leap to conclusions on the basis of so little information. The problem is that ‘so little information’ also means so easily reproduced. The world is presently engaged in a mammoth industrial research program bent on hacking every cue-based cognitive reflex we possess. More and more, the systems we evolved to solve our fellow human travelers will be contending with artificial intelligences dedicated to commercial exploitation. ‘Deep information,’ meanwhile, is already swamping the legal system, even further problematizing the folk conceptual (shallow information) staples that ground the system’s self-understanding. Creeping medicalization continues unabated, slowly scaling back warrant for things like character judgment in countless different professional contexts.

Now that the sciences are colonizing the complexities of experience and cognition, we can see the first clear-cut omens of the semantic apocalypse.

 

Crash Space

He assiduously avoids the topic in Enlightenment Now, but in The Blank Slate, Pinker devotes several pages to deflating the arch-incompatibility between natural and intentional modes of cognition, the problem of free will:

“But how can we have both explanation, with its requirement of lawful causation, and responsibility, with its requirement of free choice? To have them both we don’t need to resolve the ancient and perhaps irresolvable antinomy between free will and determinism. We have only to think clearly about what we want the notion of responsibility to achieve.” (180)

He admits there’s no getting past the ‘conflict of intuitions’ underwriting the debate. Since he doesn’t know what intentional and natural cognition amount to, he doesn’t understand their incompatibility, and so proposes we simply side-step the problem altogether by redefining ‘responsibility’ to mean what we need it to mean—the same kind of pragmatic redefinition proposed by Dennett. He then proceeds to adduce examples of ‘clear thinking’ by providing guesses regarding ‘holding responsible’ as deterrence, which is more scientifically tractable. “I don’t claim to have solved the problem of free will, only to show that we don’t need to solve it to preserve personal responsibility in the face of an increasing understanding of the causes of behaviour” (185).

Here we can see how thoroughly Pinker (as opposed to Nietzsche and Adorno) misunderstands the profundity of Enlightenment disenchantment. The problem isn’t that one can’t cook up alternate definitions of ‘responsibility’; the problem is that anyone can, endlessly. ‘Clear thinking’ is liable to serve Pinker about as well as ‘clear and distinct ideas’ served Descartes, which is to say, as more grease for the speculative mill. No matter how compelling your particular instrumentalization of ‘responsibility’ seems, it remains every bit as theoretically underdetermined as any other formulation.

There’s a reason such exercises in pragmatic redefinition stall in the speculative ether. Intentional and mechanical cognitive systems are not optional components of human cognition, nor are the intuitions we are inclined to report. Moreover, as we saw in the previous post, intentional cognition generates reliable predictions of system behaviour absent access to the actual sources of that behaviour. Intentional cognition is source-insensitive. Natural cognition, on the other hand, is source-sensitive: it generates predictions of system behaviour via access to the actual sources of that behaviour.

Small wonder, then, that our folk intentional intuitions regularly find themselves scuttled by scientific explanation. ‘Free will,’ on this account, is ancestral lemonade, a way to make the best out of metacognitive lemons, namely, our blindness to the sources of our thought and decisions. To the degree it relies upon ancestrally available (shallow) saliencies, any causal (deep) account of those sources is bound to ‘crash’ our intuitions regarding free will. The free will debate that Pinker hopes to evade with speculation can be seen as a kind of crash space, the point where the availability of deep information generates incompatible causal intuitions and intentional intuitions.

The confusion here isn’t (as Pinker thinks) ‘merely conceptual’; it’s a bona fide, material consequence of the Enlightenment, a cognitive version of a visual illusion. Too much information of the wrong kind crashes our radically heuristic modes of cognizing decisions. Stipulating definitions, not surprisingly, solves nothing insofar as it papers over the underlying problem—this is why it merely adds to the literature. Responsibility-talk cues the application of intentional cognitive modes; it’s the incommensurability of these modes with causal cognition that’s the problem, not our lexicons.

 

Cognitive Information

Consider the laziness of certain children. Should teachers be allowed to hold students responsible for their academic performance? As the list of learning disabilities grows, incompetence becomes less a matter of ‘character’ and more a matter of ‘malfunction’ and providing compensatory environments. Given that all failures of competence redound on cognitive infelicities of some kind, and given that each and every one of these infelicities can and will be isolated and explained, should we ban character judgments altogether? Should we regard exhortations to ‘take responsibility’ as forms of subtle discrimination, given that executive functioning varies from student to student? Is treating children like (sacred) machinery the only ‘moral’ thing to do?

So far at least. Causal explanations of behaviour cue intentional exemptions: our ancestral thresholds for exempting behaviour from moral cognition served larger, ancestral social equilibria. Every etiological discovery cues that exemption in an evolutionarily unprecedented manner, resulting in what Dennett calls “creeping exculpation,” the gradual expansion of morally exempt behaviours. Once a learning impediment has been discovered, it ‘just is’ immoral to hold those afflicted responsible for their incompetence. (If you’re anything like me, simply expressing the problem in these terms rankles!) Our ancestors, resorting to systems adapted to resolving social problems given only the merest information, had no problem calling children lazy, stupid, or malicious. Were they being witlessly cruel doing so? Well, it certainly feels like it. Are we more enlightened, more moral, for recognizing the limits of that system, and curtailing the context of application? Well, it certainly feels like it. But then how do we justify our remaining moral cognitive applications? Should we avoid passing moral judgment on learners altogether? It’s beginning to feel like it. Is this itself moral?

This is theoretical crash space, plain and simple. Staking out an argumentative position in this space is entirely possible—but doing so merely exemplifies, as opposed to solves, the dilemma. We’re conscripting heuristic systems adapted to shallow cognitive ecologies to solve questions involving the impact of information they evolved to ignore. We can no more resolve our intuitions regarding these issues than we can stop Necker Cubes from spoofing visual cognition.

The point here isn’t that gerrymandered solutions aren’t possible; it’s that gerrymandered solutions are the only solutions possible. Pinker’s own ‘solution’ to the debate (see also How the Mind Works, 54-55) can be seen as a symptom of the underlying intractability, the straits we find ourselves in. We can stipulate, enforce solutions that appease this or that interpretation of this or that displaced intuition: teachers who berate students for their laziness and stupidity are not long for their profession—at least not anymore. As etiologies of cognition continue to accumulate, as more and more deep information permeates our moral ecologies, the need to revise our stipulations, to engineer them to discharge this or that heuristic function, will continue to grow. Free will is not, as Pinker thinks, “an idealization of human beings that makes the ethics game playable” (HMW 55); it is (as Bruce Waller puts it) stubborn, a cognitive reflex belonging to a system of cognitive reflexes belonging to intentional cognition more generally. Foot-stomping does not change how those reflexes are cued in situ. The free-will crash space will continue to expand, no matter how stubbornly Pinker insists on this or that redefinition of this or that term.

We’re not talking about a fall from any ‘heuristic Eden’ here, an ancestral ‘golden age’ where our instincts were perfectly aligned with our circumstances—the sheer granularity of moral cognition, not to mention the confabulatory nature of moral rationalization, suggests that it has always slogged through interpretative mire. What we’re talking about, rather, is the degree to which moral cognition turns on neglecting certain kinds of natural information. Or conversely, the degree to which deep natural information regarding our cognitive capacities displaces and/or crashes once straightforward moral intuitions, like the laziness of certain children.

Or the need to punish murderers…

Two centuries ago a murderer suffering irregular sleep characterized by vocalizations and sometimes violent actions while dreaming would have been prosecuted to the full extent of the law. Now, however, such a murderer would be diagnosed as suffering an episode of ‘homicidal somnambulism,’ and could very likely go free. Mammalian brains do not fall asleep or awaken all at once. For some yet-to-be-determined reason, the brains of certain individuals (mostly men older than 50) suffer a form of partial arousal causing them to act out their dreams.

More and more, neuroscience is making an impact in American courtrooms. Nita Farahany (2016) has found that between 2005 and 2012 the number of judicial opinions referencing neuroscientific evidence has more than doubled. She also found a clear correlation between the use of such evidence and less punitive outcomes—especially when it came to sentencing. Observers in the burgeoning ‘neurolaw’ field think that for better or worse, neuroscience is firmly entrenched in the criminal justice system, and bound to become ever more ubiquitous.

Not only are responsibility assessments being weakened as neuroscientific information accumulates, but social risk assessments are being strengthened (Gkotsi and Gasser 2016). So-called ‘neuroprediction’ is beginning to revolutionize forensic psychology. Studies suggest that inmates with lower levels of anterior cingulate activity are approximately twice as likely to reoffend as those with relatively higher levels of activity (Aharoni et al 2013). Measurements of ‘early sensory gating’ (attentional filtering) predict the likelihood that individuals suffering addictions will abandon cognitive behavioural treatment programs (Steele et al 2014). Reduced gray matter volumes in the medial and temporal lobes identify youth prone to commit violent crimes (Cope et al 2014). ‘Enlightened’ metrics assessing recidivism risks already exist within disciplines such as forensic psychiatry, of course, but “the brain has the most proximal influence on behavior” (Gaudet et al 2016). Few scientific domains illustrate the problems secondary to deep environmental information better than the issue of recidivism. Given the high social cost of criminality, the ability to predict ‘at risk’ individuals before any crime is committed is sure to pay handsome preventative dividends. But what are we to make of justice systems that parole offenders possessing one set of ‘happy’ neurological factors early, while leaving others possessing an ‘unhappy’ set to serve out their entire sentence?
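
For readers unfamiliar with what a ‘neuroprediction’ instrument amounts to mechanically, the sketch below shows the general shape of such a tool: a statistical model mapping brain-derived measures onto a probability of reoffending. The feature names, weights, and numbers are hypothetical placeholders of my own, not values drawn from Aharoni, Steele, Cope, or any other study cited above.

```python
# A hypothetical, minimal logistic risk model of the kind 'neuroprediction'
# research gestures toward. Coefficients are invented for illustration; a real
# instrument would be fit to outcome data and validated.
import math

def recidivism_risk(acc_activity: float, gray_matter_vol: float, sensory_gating: float) -> float:
    """Return a toy probability of reoffending from standardized (z-scored) brain measures."""
    intercept = -0.2
    w_acc, w_gmv, w_gate = -0.8, -0.5, -0.4   # lower measures -> higher predicted risk
    z = intercept + w_acc * acc_activity + w_gmv * gray_matter_vol + w_gate * sensory_gating
    return 1.0 / (1.0 + math.exp(-z))

# Two hypothetical offenders, measures expressed relative to a norm group.
print(round(recidivism_risk(-1.0, -0.5, -0.5), 2))   # low ACC activity, low volume: higher risk
print(round(recidivism_risk(+1.0, +0.5, +0.5), 2))   # relatively high measures: lower risk
```

However the numbers are dressed up, the output is the same kind of object: a probability attached to a person, derived from measures the person can neither introspect nor control, which is precisely what makes its collision with our responsibility intuitions so jarring.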

Nothing, I think, captures the crash of ancestral moral intuitions in modern, technological contexts quite so dramatically as forensic danger assessments. Consider, for instance, the way deep information in this context has the inverse effect of deep information in the classroom. Since punishment is indexed to responsibility, we generally presume those bearing less responsibility deserve less punishment. Here, however, it’s those bearing the least responsibility, those possessing ‘social learning disabilities,’ who ultimately serve the longest. The very deficits that mitigate responsibility before conviction actually aggravate punishment subsequent to conviction.

The problem is fundamentally cognitive, and not legal, in nature. As countless bureaucratic horrors make plain, procedural decision-making need not report as morally rational. We would be mad, on the one hand, to overlook any available etiology in our original assessment of responsibility. We would be mad, on the other hand, to overlook any available etiology in our subsequent determination of punishment. Ergo, less responsibility often means more punishment.

Crash.

The point, once again, is to describe the structure and dynamics of our collective sociocognitive dilemma in the age of deep environmental information, not to eulogize ancestral cognitive ecologies. The more we disenchant ourselves, the more evolutionarily unprecedented information we have available, the more problematic our folk determinations become. Demonstrating this point demonstrates the futility of pragmatic redefinition: no matter how Pinker or Dennett (or anyone else) rationalizes a given, scientifically-informed definition of moral terms, it will provide no more than grist for speculative disputation. We can adopt any legal or scientific operationalization we want (see Parmigiani et al 2017); so long as responsibility talk cues moral cognitive determinations, however, we will find ourselves stranded with intuitions we cannot reconcile.

Considered in the context of politics and the ‘culture wars,’ the potentially disastrous consequences of these kinds of trends become clear. One need only think of the oxymoronic notion of ‘commonsense’ criminology, which amounts to imposing moral determinations geared to shallow cognitive ecologies upon criminal contexts now possessing numerous deep information attenuations. Those who, for whatever reason, escaped the education system with something resembling an ancestral ‘neglect structure’ intact, those who have no patience for pragmatic redefinitions or technical stipulations, will find appeals to folk intuitions every bit as convincing as those presiding over the Salem witch trials in 1692. Those caught up in deep information environments, on the other hand, will be ever more inclined to see those intuitions as anachronistic, inhumane, immoral—unenlightened.

Given the relation between education and information access and processing capacity, we can expect that education will increasingly divide moral attitudes. Likewise, we should expect a growing sociocognitive disconnect between expert and non-expert moral determinations. And given cognitive technologies like the internet, we should expect this dysfunction to become even more profound still.

 

Cognitive Technology

Given the power of technology to cue intergroup identifications, the internet was—and continues to be—hailed as a means of bringing humanity together, a way of enacting the universalistic aspirations of humanism. My own position—one foot in academe, another foot in consumer culture—afforded me a far different perspective. Unlike academics, genre writers rub shoulders with all walks of life, and often find themselves debating outrageously chauvinistic views. I realized quite quickly that the internet had rendered rationalizations instantly available, that it amounted to pouring marbles across the floor of ancestral social dynamics. The cost of confirmation had plummeted to zero. Prior to the internet, we had to test our more extreme chauvinisms against whomever happened to be available—which is to say, people who would be inclined to disagree. We had to work to indulge our stone-age weaknesses in post-war 20th century Western cognitive ecologies. No more. Add to this phenomena such as the online disinhibition effect, as well as the sudden visibility of ingroup intellectual piety, and the growing extremity of counter-identification struck me as inevitable. The internet was dividing us into teams. In such an age, I realized, the only socially redemptive art was art that cut against this tendency, art that genuinely spanned ingroup boundaries. Literature, as traditionally understood, had become a paradigmatic expression of the tribalism presently engulfing us. Epic fantasy, on the other hand, still possessed the relevance required to inspire book burnings in the West.

(The past decade has ‘rewarded’ my turn-of-the-millennium fears—though in some surprising ways. The greatest attitudinal shift in America, for instance, has been progressive: it has been liberals, and not conservatives, who have most radically changed their views. The rise of reactionary sentiment and populism is presently rewriting European politics—and the age of Trump has all but overthrown the progressive political agenda in the US. But the role of the internet and social media in these phenomena remains a hotly contested one.)

The earlier promoters of the internet had banked on the notional availability of intergroup information to ‘bring the world closer together,’ not realizing the heuristic reliance of human cognition on differential information access. Ancestrally, communicating ingroup reliability trumped communicating environmental accuracy, stranding us with what Pinker (following Kahan 2011) calls the ‘tragedy of the belief commons’ (Enlightenment Now, 358), the individual rationality of believing collectively irrational claims—such as, for instance, the belief that global warming is a liberal myth. Once falsehoods become entangled with identity claims, they become the yardstick of true and false, thus generating the terrifying spectacle we now witness on the evening news.

The provision of ancestrally unavailable social information is one thing, so long as it is curated—censored, in effect—as it was in the mass media age of my childhood. Confirmation biases have to swim upstream in such cognitive ecologies. Rendering all ancestrally unavailable social information available, on the other hand, allows us to indulge our biases, to see only what we want to see, to hear only what we want to hear. Where ancestrally, we had to risk criticism to secure praise, no such risks need be incurred now. And no surprise, we find ourselves sliding back into the tribalistic mire, arguing absurdities haunted—tainted—by the death of millions.

Jonathan Albright, the research director at the Tow Center for Digital Journalism at Columbia, has found that the ‘fake news’ phenomenon, as the product of a self-reinforcing technical ecosystem, has actually grown worse since the 2016 election. “Our technological and communication infrastructure, the ways we experience reality, the ways we get news, are literally disintegrating,” he recently confessed in a NiemanLab interview. “It’s the biggest problem ever, in my opinion, especially for American culture.” As Alexis Madrigal writes in The Atlantic, “the very roots of the electoral system—the news people see, the events they think happened, the information they digest—had been destabilized.”

The individual cost of fantasy continues to shrink, even as the collective cost of deception continues to grow. The ecologies once securing the reliability of our epistemic determinations, the invariants that our ancestors took for granted, are being levelled. Our ancestral world was one where seeking risked aversion, a world where praise and condemnation alike had to brave condemnation, where lazy judgments were punished rather than rewarded. Our ancestral world was one where geography and the scarcity of resources forced permissives and authoritarians to intermingle, compromise, and cooperate. That world is gone, leaving the old equilibria to unwind in confusion, a growing social crash space.

And this is only the beginning of the cognitive technological age. As Tristan Harris points out, social media platforms, given their commercial imperatives, cannot but engineer online ecologies designed to exploit the heuristic limits of human cognition. He writes:

“I learned to think this way when I was a magician. Magicians start by looking for blind spots, edges, vulnerabilities and limits of people’s perception, so they can influence what people do without them even realizing it. Once you know how to push people’s buttons, you can play them like a piano.”

More and more of what we encounter online is dedicated to various forms of exogenous attention capture, maximizing the time we spend on the platform, so maximizing our exposure not just to advertising, but to hidden metrics, algorithms designed to assess everything from our likes to our emotional well-being. As with instances of ‘forcing’ in the performance of magic tricks, the fact of manipulation escapes our attention altogether, so we always presume we could have done otherwise—we always presume ourselves ‘free’ (whatever this means). We exhibit what Clifford Nass, a pioneer in human-computer interaction, calls ‘mindlessness,’ the blind reliance on automatic scripts. To the degree that social media platforms profit from engaging your attention, they profit from hacking your ancestral cognitive vulnerabilities, exploiting our shared neglect structure. They profit, in other words, from transforming crash spaces into cheat spaces.
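
A minimal sketch of the underlying objective (my own illustration of the logic, not any platform’s actual code) makes the point: rank by predicted attention capture, and nothing about accuracy, well-being, or deep information ever enters the score.

```python
# Toy engagement-maximizing ranker (illustrative only; field names are invented).
from dataclasses import dataclass

@dataclass
class Item:
    headline: str
    predicted_dwell_seconds: float   # output of some engagement model, assumed given
    outrage_score: float             # proxy for how strongly the item cues a reaction

def rank_feed(items: list[Item]) -> list[Item]:
    # The only objective is expected time-on-platform; provocative items get a boost
    # because they reliably capture exogenous attention.
    return sorted(items,
                  key=lambda i: i.predicted_dwell_seconds * (1.0 + i.outrage_score),
                  reverse=True)

feed = [
    Item("Measured policy analysis", 20.0, 0.1),
    Item("THEY don't want you to see this", 15.0, 0.9),
    Item("Local charity drive this weekend", 10.0, 0.0),
]

for item in rank_feed(feed):
    print(item.headline)
```

Everything ‘forcing’-like happens upstream of the user, in the choice of objective; nothing in the ranking needs to lie to anyone for the ecology to tilt.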

With AI, we are set to flood human cognitive ecologies with systems designed to actively game the heuristic nature of human social cognition, cuing automatic responses based on boggling amounts of data and the capacity to predict our decisions better than our intimates, and soon, better than we can ourselves. And yet, as the authors of the 2017 AI Index report state, “we are essentially “flying blind” in our conversations and decision-making related to AI.” A blindness we’re largely blind to. Pinker spends ample time domesticating the bogeyman of superintelligent AI (296-298) but he completely neglects this far more immediate and retail dimension of our cognitive technological dilemma.

Consider the way humans endure as much as need one another: the problem is that the cues signaling social punishment and reward are easy to trigger out of school. We’ve already crossed the bourne where ‘improving the user experience’ entails substituting artificial for natural social feedback. Notice the plethora of nonthreatening female voices? The promise of AI is the promise of countless artificial friends, voices that will ‘understand’ your plight, your grievances, in some respects better than you do yourself. The problem, of course, is that they’re artificial, which is to say, not your friend at all.

Humans deceive and manipulate one another all the time, of course. And false AI friends don’t rule out true AI defenders. But the former merely describes the ancestral environments shaping our basic heuristic tool box. And the latter simply concedes the fundamental loss of those cognitive ecologies. The more prosthetics we enlist, the more we complicate our ecology, the more mediated our determinations become, the less efficacious our ancestral intuitions become. The more we will be told to trust to gerrymandered stipulations.

Corporate simulacra are set to deluge our homes, each bent on cuing trust. We’ve already seen how the hypersensitivity of intentional cognition renders us liable to hallucinate minds where none exist. The environmental ubiquity of AI amounts to the environmental ubiquity of systems designed to exploit granular sociocognitive systems tuned to solve humans. The AI revolution amounts to saturating human cognitive ecology with invasive species, billions of evolutionarily unprecedented systems, all of them camouflaged and carnivorous. It represents—obviously, I think—the single greatest cognitive ecological challenge we have ever faced.

What does ‘human flourishing’ mean in such cognitive ecologies? What can it mean? Pinker doesn’t know. Nobody does. He can only speculate in an age when the gobsmacking power of science has revealed his guesswork for what it is. This was why Adorno referred to the possibility of knowing the good as the ‘Messianic moment.’ Until that moment comes, until we find a form of rationality that doesn’t collapse into instrumentalism, we have only toothless guesses, allowing the pointless optimization of appetite to command all. It doesn’t matter whether you call it the will to power or identity thinking or negentropy or selfish genes or what have you, the process is blind and it lies entirely outside good and evil. We’re just along for the ride.

 

Semantic Apocalypse

Human cognition is not ontologically distinct. Like all biological systems, it possesses its own ecology, its own environmental conditions. And just as scientific progress has brought about the crash of countless ecosystems across this planet, it is poised to precipitate the crash of our shared cognitive ecology as well, the collapse of our ability to trust and believe, let alone to choose or take responsibility. Once every suboptimal behaviour has an etiology, what then? Once every one of us has artificial friends, heaping us with praise, priming our insecurities, doing everything they can to prevent non-commercial—ancestral—engagements, what then?

‘Semantic apocalypse’ is the dramatic term I coined to capture this process in my 2008 novel, Neuropath. Terminology aside, the crashing of ancestral (shallow information) cognitive ecologies is entirely of a piece with the Anthropocene, yet one more way that science and technology are disrupting the biology of our planet. This is a worst-case scenario, make no mistake. I’ll be damned if I see any way out of it.

Humans cognize themselves and one another via systems that take as much for granted as they possibly can. This is a fact. Given this, it is not only possible, but exceedingly probable, that we would find squaring our intuitive self-understanding with our scientific understanding impossible. Why should we evolve the extravagant capacity to intuit our nature beyond the demands of ancestral life? The shallow cognitive ecology arising out of those demands constitutes our baseline self-understanding, one that bears the imprimatur of evolutionary contingency at every turn. There’s no replacing this system short of replacing our humanity.

Thus the ‘worst’ in ‘worst case scenario.’

There will be a great deal of hand-wringing in the years to come. Numberless intentionalists with countless competing rationalizations will continue to apologize (and apologize) while the science trundles on, crashing this bit of traditional self-understanding and that, continually eroding the pilings supporting the whole. The pieties of humanism will be extolled and defended with increasing desperation, whole societies will scramble, while hidden behind the endless assertions of autonomy, beneath the thundering bleachers, our fundamentals will be laid bare and traded for lucre.

Unkempt Nation, Disheveled Soul

by rsbakker

So this has been a mad summer in pretty much every respect. The first week of May, my hard-drive died, and I lost pretty much everything I had written the previous six months. My wife was in Venezuela at the time, marching, so I had a hard time wrapping my head around the psychological enormity of the event. It’s not every day you turn on the news to watch events embroiling your loved ones.

Anyway, I’m still pulling the pieces together. I had occasion to revisit some of my first blog posts, and I thought I would post a few snippets from way back in 2010, when we could still pretend technology wasn’t driving the world insane. Rather than get angry all over again at the lack of reviews, or fret for the future of democratic society in the technological age, I thought I would let my younger, less well-groomed self do the ranting.

I’ll be back with things more substantial soon.

 

September 14, 2010 – So why are so many writers heroes? Aside from good old human psychology, I blame it on the old ‘Write What You Know’ literary maxim.

Like so many literary maxims it sounds appealing at first blush. After all, how can you be honest–authentic–unless you write ‘what you know’? But like all maxims it has a flip side: Telling practitioners what they should do is at once telling them what they should not do. Telling writers to only write what they know is telling them to studiously avoid all the things their lives lack–adventure, romance, spectacle–which is to say, the very things that regular people crave.

So this maxim has the happy side-effect of policing who gets to communicate to whom, and so securing the institutional boundaries of the literary specialist. Not only is real culture left to its own naive devices, it becomes the unflagging foil, a kind of self-congratulatory resource, one that can be tapped over and over again to confirm the literary writer’s sense of superiority. Thus all the writerly heroes, stranded in seas of absurdity.

September 16, 2010 – The pigeonhole has no bottom, believe you me. I used to be so naive as to think I could climb out, but now I’m starting to think that it swallows everyone in the end. I wonder about all the other cranks and crackpots out there, about all the other sparks that have been snuffed by relentless inattention. It’s no accident that eulogies are so filled with cliches.

After all, it’s neurophysiology that I’m up against more than any passing cultural bigotry. The brain pigeonholes everything it encounters to better lower its caloric load, to economize. We sort far more than we ponder. Novelty, when we encounter it, is either confused for something old and stupid or comes across as errant noise. Things were this way long before corporations and capital.

So I find myself wondering what I should do. Maybe I should just resign myself to my fate, numb the pain, mellow those revenge fantasies. Become a fatalist.

But then there’s nothing like bitterness to keep that fire scorching your belly. And there’s nothing I fear more than becoming old and complacent. Only the well-groomed don’t have chips on their shoulders.

September 18, 2010 – What really troubles me is the way this hypocrisy has been institutionalized. So long as you treat ‘culture’ as a what, which is to say, as an abstract construct, a formalism, then you can congratulate yourself for all the myriad ways in which your abstractions disrupt those abstractions. But as soon as you treat ‘culture’ as a who, which is to say, as a cartoon we use to generalize over millions of living, breathing people, the notion of ‘disruption’ becomes pretty ridiculous pretty quick. All it takes is one simple question: “Who is disrupted?” and the illusion of criticality is dispelled. One little question.

The conceit is so weak. And yet somehow we’ve managed to raise a veritable landfill of illusory subversion upon it. ‘Literature,’ we call it.

Says a lot about the power of vanity, if you think about it.

As well as why I’m probably doomed to fail.

September 20, 2010 – But our culture has become frightfully compartmentalized. The web, which was supposed to blow open the doors of culture–to ‘flatten everything’–seems to have had the opposite effect. Since we’re hardwired to reflexively seek out affirmation and confirmation, rendering everything equally available has meant our paths of least resistance no longer take us across unfamiliar territory. We can get what we want and need without taking detours through things we didn’t realize we wanted or needed. We can make an expedient bastion out of our parochial tastes.

February 27, 2011 – These people, it seems to me, have to be engaged, have to be challenged, if only so that the masses don’t succumb to their own weaknesses for self-serving chauvinism. These people are appealing simply because they are so adept at generating ‘reasons’ for self-serving intuitions that we all share. That we and our ways are special, exempt, and that Others are a threat to us. That our high-school is, like, really the greatest high-school on the planet. Confirmation bias, my-side bias, the list goes on. And given that humans have evolved to be easily and almost irrevocably programmed, it seems to me that the most important place to wage this battle is in the classroom. To begin teaching doubt as the highest virtue, as opposed to the madness of belief.

The prevailing madness.

Funny, huh? It’s the lapse in belief that these guys typically see as symptomatic of modern societal decline. But really what they’re talking about is a lapse in agreement. Belief is as pervasive as ever, but as a principle rather than any specific consensual canon. It stands to reason that the lack of ‘moral and cognitive solidarity’ would make us uncomfortable, considering the kinds of scarcity and competition faced by our ancestors.

January 13, 2011 – The problem is that human nature is adapted to environments where the access to information was geographically indexed, where its accumulation exacted a significant caloric toll. We don’t call private investigators ‘gumshoes’ for no reason. We are adapted to environments where the info-gathering workload continually forced us to ‘settle,’ which is to say, make do with something other than what we originally desired, when it comes to information.

This is what makes the ‘global village’ such a deceptive misnomer. In the preindustrial village, where everyone depended upon one another, our cognitive selfishness made quite a bit of adaptive sense: in environments where scarcity and interdependency force cognitive compromise, you can see how cognitive selfishness–finding ways to justify oneself while impugning potential competitors–might pay real dividends in terms of in-group prestige. Where the circumstantial leash is tight, it pays to pull and pull, and perhaps reach those morsels that escape others.

In the industrial village, however, the leash is far longer. But even still, if you want to pursue your views, geographical constraints force you to engage individuals who do not share them. Who knows what Bob across the road believes? (My Bob was an evangelical Christian, and I count myself lucky for having endlessly argued with him).

In the information village the leash is cut altogether. The likeminded can effortlessly congregate in innumerable echo chambers. Of course, they can effortlessly congregate with those they disagree with as well, but… The tendency, by and large, is not only to seek confirmation, but to confuse it with intelligence and truth–which is why right-wingers tend to watch more Fox than PBS.

Now, enter all these specialized programs, which are bent on moulding your information environment into something as pleasing as possible. Don’t like the N-word? Well, we can make sure you never need to encounter it again–ever.

The world is sycophantic, and it’s becoming more so all the time. This, I think, is a far better cartoon generalization than ‘flat,’ insofar as it references the user, the intermediary, as well as the information environment.

The contemporary (post-posterity) writer has to incorporate this radically different social context into their practice (if that practice is to be considered even remotely self-critical). If you want to produce literary effects, then you have to write for a sycophantic world, find ways not simply to subvert the ideological defences of readers, but to trick the inhuman, algorithmic gate-keepers as well.

This means being strategically sycophantic. To give people what they want, sure, but with something more as well.

 

Breakneck: Review and Critical Commentary of Whiplash: How to Survive our Faster Future by Joi Ito and Jeff Howe

by rsbakker


The thesis I would like to explore here is that Whiplash by Joi Ito and Jeff Howe is at once a local survival guide and a global suicide manual. Their goal “is no less ambitious than to provide a user’s manual to the twenty-first century” (246), a “system of mythologies” (108) embodying the accumulated wisdom of the storied MIT Media Lab. Since this runs parallel to my own project, I applaud their attempt. Like them, I think understanding the consequences of the ongoing technological revolution demands “an entirely new mode of thinking—a cognitive evolution on the scale of a quadruped learning to stand on its hind feet” (247). I just think we need to recall the number of extinctions that particular evolutionary feat required.

Whiplash was a genuine delight for me to read, and not simply because I’m a sucker for technoscientific anecdotes. At so many points I identified with the collection of misfits and outsiders that populate their tales. So, as an individual who fairly embodies the values promulgated in this book, I offer my own amendments to Ito and Howe’s heuristic source code, what I think is a more elegant and scientifically consilient way to understand not only our present dilemma, but the kinds of heuristics we will need to survive it…

Insofar as that is possible.

 

Emergence over Authority

General Idea: Pace of change assures normative obsolescence, which in turn requires openness to ‘emergence.’

“Emergent systems presume that every individual within that system possesses unique intelligence that would benefit the group.” 47

“Unlike authoritarian systems, which enable only incremental change, emergent systems foster the kind of nonlinear innovation that can react quickly to the kind of rapid changes that characterize the network age.” 48

Problems: The heuristic is insensitive to the complexities of the accelerating social and technical landscape. The moral here should be: does this heuristic still apply?

The quote above also points to the larger problem, which becomes clear by simply rephrasing it to read, ‘emergent systems foster the kind of nonlinear transformation that can react quickly to the kind of nonlinear transformations that characterize the network age.’ The problem, in other words, is also the solution. Call this the Putting Out Fire with Gasoline Problem. I wish Ito and Howe had spent more time considering it, since it really is the heart of their strategy: How do we cope with accelerating innovation? We become as quick and innovative as we can.

 

Pull over Push

General Idea: Command and control over warehoused resources lacks the sensitivity to solve many modern problems, which are far better resolved by allowing the problems themselves to attract the solvers.

“In the upside-down, bizarre universe created by the Internet, the very assets on your balance sheet—from printing presses to lines of code—are now liabilities from the perspective of agility. Instead, we should try to use resources that can be utilized just in time, for just that time necessary, then relinquished.” 69

“As the cost of innovation continues to fall, entire communities that have been sidelined by those in power will be able to organize themselves and become active participants in society and government. The culture of emergent innovation will allow everyone to feel a sense of both ownership and responsibility to each other and to the rest of the world, which will empower them to create more lasting change than the authorities who write policy and law.” 71

Problems: In one sense, I think this chapter speaks to the narrow focus of the book, the degree to which it views the world through IT glasses. Trump exemplifies the power of Pull. ISIS exemplifies the power of Pull. ‘Empowerment’ is usually charged with positive connotations, until one applies it to criminals, authoritarian governments and so on. It’s important to realize that ‘pull’ runs any which way, rather than directly toward the better.

 

Compasses over Maps

General Idea: Sensitivity to ongoing ‘facts on the ground’ generally trumps reliance on high-altitude appraisals of yesterday’s landscape.

“Of all the nine principles in the book, compasses over maps has the greatest potential for misunderstanding. It’s actually very straightforward: a map implies a detailed knowledge of the terrain, and the existence of an optimum route; the compass is a far more flexible tool and requires the user to employ creativity and autonomy in discovering his or her own path.” 89

Problems: I actually agree that this principle is the most apt to be misunderstood because I’m inclined to think Ito and Howe themselves might be misunderstanding it! Once again, we need to see the issue in terms of cognitive ecology: Our ancestors, you could say, suffered a shallow present and enjoyed a deep future. Because the mechanics of their world eluded them, they had no way of re-engineering them, and so they could trust the machinery to trundle along the way it always had. We find ourselves in the opposite predicament: As we master more and more of the mechanics of our world, we discover an ever-expanding array of ways to re-engineer them, meaning we can no longer rely on the established machinery the way our ancestors—and here’s the important bit—evolved to. We are shallow present, deep future creatures living in a deep present, shallow future world.

This, I think, is what Ito and Howe are driving at: just as the old rules (authorities) no longer apply, the old representations (maps) no longer apply either, forcing us to gerrymander (orienteer) our path.

 

Risk over Safety

General Idea: The cost of experimentation has plummeted to such an extent that being wrong no longer has the catastrophic market consequences it once had.

“The new rule, then, is to embrace risk. There may be nowhere else in this book that exemplifies how far our collective brains have fallen behind our technology.” 116

“Seventy million years ago it was great to be a dinosaur. You were a complete package; big, thick-skinned, sharp-toothed, cold-blooded, long-lived. And it was great for a long, long time. Then, suddenly… it wasn’t so great. Because of your size, you needed an awful lot of calories. And you needed an awful lot of room. So you died. You know who outlived you? The frog.” 120

Problems: Essentially the argument is that risky ventures in the old economy are now safe, and that safe ventures are now risky, which means the argument is actually a ‘safety over risk’ one. I find this particular maxim so interesting because I think it throws into relief their lack of any theory of the problem they take themselves to be solving or ameliorating. Really the moral here is that experimentation pays.


 

Disobedience over Compliance

General Idea: Traditional forms of development stifle the very creativity institutions require to adapt to the accelerating pace of technological change.

“Since the 1970’s, social scientists have recognized the positive impact of “positive deviants,” people whose unorthodox behavior improves their lives and has the potential to improve their communities if it’s adopted more widely.” 141

“The people who will be the most successful in this environment will be the ones who ask questions, trust their instincts, and refuse to follow the rules when the rules get in their way.” 141

Problems: Disobedience is not critique, and Ito and Howe are careful to point this out, but they fail to mention what role, if any, criticality plays in their list of principles. Another problem has to do with the obvious exception bias at work in their account. Sure, being positive deviants has served Ito and Howe and the generally successful people they count as their ingroup, but what about the rest of us? This is why I cringe every time I hear Oscar acceptance speeches urging young wannabe thespians to ‘never give up on their dream,’ because winners—who are winners by virtue of being the exception—see themselves as proof positive that it can be done if you just try-try-try… This stuff is what powers the great dream smashing factory called Hollywood—as well as Silicon Valley. All things being equal, I think being a ‘positive deviant’ is bound to generate far more grief than success.

And this, I think, underscores the fundamental problem with the book, which is the question of application. I like to think of myself as a ‘positive deviant,’ but I’m aware that I am often identified as a ‘contrarian flake’ in the various academic silos I piss in now and again. By opening research ingroups to the wider world, the web immediately requires members to vet communications in a manner they never had to before. The world, as it turns out, is filled with contrarian flakes, so the problem becomes one of sorting positive deviants (like myself (maybe)), extra-institutional individuals with positive contributions to make, from all those contrarian flakes (like myself (maybe)).

Likewise, given that every communal enterprise possesses wilful, impassioned, but unimaginative employees, how does a manager sort the ‘positive deviant’ out?

When does disobedience over compliance apply? This is where the rubber hits the road, I think. The whole point of the (generally fascinating) anecdotes is to address this very issue, but aside from some gut estimation of analogical sufficiency between cases, we really have nothing to go on.

 

Practice over Theory

General Idea: Traditional forms of education and production emphasize planning beforehand and learning outside the relevant context of application, when humans are simply not wired for this, and when those contexts are transforming so quickly.

“Putting practice over theory means recognizing that in a faster future, in which change has become a new constant, there is often a higher cost to waiting and planning than there is to doing and improvising.” 159

“The Media Lab is focussed on interest-driven, passion-driven learning through doing. It is also trying to understand and deploy this form of creative learning into a society that will increasingly need more creative learners and fewer human beings who can solve problems better tackled by robots and computers.” 170

Problems: Humans are the gerrymandering species par excellence, leveraging technical skills into more and more forms of environmental mastery. In this respect it’s hard to argue against Ito and Howe’s point, given the caveats they are careful to provide.

The problem lies in the supercomplex environmental consequences of that environmental mastery: Whiplash is advertised as a how-to-environmentally-master-the-consequences-of-environmental-mastery manual, so obviously, environmental mastery, technical innovation, ‘progress’—whatever you want to call it—has become a life and death matter, something to be ‘survived.’

The thing people really need to realize in these kinds of discussions is just how far we have sailed into uncharted waters, and just how fast the wind is about to grow.

 

Diversity over Ability

General Idea: Crowdsourcing, basically, the term Jeff Howe coined referring to the way large numbers of people from a wide variety of backgrounds can generate solutions eluding experts.

“We’re inclined to believe the smartest, best trained people in a given discipline—the experts—are the best qualified to solve a problem in their specialty. And indeed, they often are. When they fail, as they will from time to time, our unquestioning faith in the principle of ‘ability’ leads us to imagine that we need to find a better solver: other experts with similarly high levels of training. But it is in the nature of high ability to reproduce itself—the new team of experts, it turns out, trained at the same amazing schools, institutes, and companies as the previous experts. Similarly brilliant, our two sets of experts can be relied on to apply the same methods to the problem, and share as well the same biases, blind spots, and unconscious tendencies.” 183

Problems: Again I find myself troubled not so much by the moral as by the articulation. If you switch the register from ‘ability’ to competence and consider the way ingroup adjudications of competence systematically perceive outgroup contributions to be incompetent, then you have a better model to work with here, I think. Each of us carries a supercomputer in our heads, and all cognition exhibits path-dependency and is therefore vulnerable to blind alleys, so the power of distributed problem solving should come as no surprise. The problem, here, rather, is one of seeing through our ingroup blinders, and coming to understand how the way we instinctively identify competence forecloses on distributed cognitive resources (which can take innumerable forms).

Institutionalizing diversity seems like a good first step. But what about overcoming ingroup biases more generally? And what about the blind-alley problem (which could be called the ‘double-blind alley problem,’ given the way reviewing the steps taken tends to confirm the necessity of the path taken)? Is there a way to suss out the more pernicious consequences of cognitive path-dependency?

 

Resilience over Strength

General Idea: The reed versus the tree.

Problems: It’s hard to bitch about a chapter beginning with a supercool Thulsa Doom quote.

Strike that—impossible.

 

Systems over Objects

General Idea: Unravelling contemporary problems means unravelling complex systems, necessitating adoption of the systems view.

“These new problems, whether we’re talking about curing Alzheimer’s or learning to predict volatile weather systems, seem to be fundamentally different, in that they seem to require the discovery of all the building blocks in a complex system.” 220

“Systems over objects recognizes that responsible innovation requires more than speed and efficiency. It also requires a constant focus on the overall impact of new technologies, and an understanding of the connections between people, their communities, and their environments.” 224

Problems: Since so much of Three Pound Brain is dedicated to understanding human experience and cognition in naturally continuous terms, I tend to think that ‘Systems over Subjects’ offers a more penetrating approach. The idea that things and events cannot be understood or appreciated in isolation is already firmly rooted in our institutional DNA, I think. The challenge, here, lies in squaring this way of thinking with everyday cognition, with our default ways of making sense of each other and ourselves. We are hardwired to see simple essences and sourceless causes everywhere we look. This means the cognitive ecology Ito and Howe are both describing and advocating is in some sense antithetical—and therefore alienating—to our ancestral ways of making sense of ourselves.


 

Conclusion

When I decided to post a review of this book, I opened an MSWord doc the way I usually do and began jotting down jumbled thoughts and impressions, including the reminder to “Bring up the problem of theorizing politics absent any account of human nature.” I had just finished reading the introduction by that point, so I read the bulk of Whiplash with this niggling thought in the back of my mind. Ito and Howe take care to avoid explicit political references, but as I’m sure they will admit, their project is political through and through. Politics has always involved science fiction; after all, how do you improve a future you can’t predict? Knowing human nature, our need to eat, to secure prestige, to mate, to procreate, and so on, is the only thing that allows us to predict human futures at all. Dystopias beg Utopias beg knowing what makes us tick.

In a time of radical, exponential social and environmental transformation, the primary question regarding human nature has to involve adaptability, our ability to cope with social and environmental transformation. The more we learn about human cognition, however, the more we discover that the human capacity to solve new problems is modular as opposed to monolithic, complex as opposed to simple. This in turn means that transforming different elements in our environments (the way technology does) can have surprising results.

So for example, given the ancestral stability of group sizes, it makes sense to suppose we would assess the risk of victimization against a fixed baseline whenever we encountered information regarding violence. Our ability to intuitively assess threats, in other words, depends upon a specific cognitive ecology, one where the information available is commensurate with the small communities of farmers and/or hunter-gatherers. This suggests the provision of ‘deep’ (ancestrally unavailable) threat information, such as that provided by the web or the evening news, would play havoc with our threat intuitions—as indeed seems to be the case.

Human cognition is heuristic, through and through, which is to say dependent on environmental invariances, the ancestral stability of different relevant backgrounds. The relation between group size and threat information is but one of countless default assumptions informing our daily lives. The more technology transforms our cognitive ecologies, the more we should expect our intuitions to misfire, to prompt ineffective problem-solving behaviour like voting for ‘tough-on-crime’ political candidates. The fact is technology makes things easy that were never ‘meant’ to be easy. Consider how humans depended on all the people they knew before the industrial concentration of production, and so were forced to compromise, to see themselves as requiring friends and neighbours. You could source your clothes, your food, even your stories and religion to some familiar face. You grew up in an atmosphere of ambient, ingroup gratitude that continually counterbalanced your selfish impulses. After the industrial concentration of production, the material dependencies enforcing cooperation evaporated, allowing humans to indulge egocentric intuitions, the sweet-tooth of themselves, and ‘individualism’ was born, and with it all the varieties of social isolation comprising the ‘modern malaise.’

This cognitive ecological lens is the reason why I’ve been warning that the web was likely to aggravate processes of group identification and counter-identification, why I’ve argued that the tactics of 20th century progressivism had actually become more pernicious than efficacious, and suggested that forms of political atavism, even the rise of demagoguery, would become bigger and bigger problems. Where most of the world saw the Arab Spring as a forceful example of the web’s capacity to emancipate, I saw it as an example of ‘flash civil unrest,’ the ability of populations to spontaneously organize and overthrow existing institutional orders period, and only incidentally ‘for the better.’

If you entertained extremist impulses before the internet, you had no choice but to air your views with your friends and neighbours, where, all things being equal, the preponderance of views would be more moderate. The network constraints imposed by geography, I surmised, had the effect of ameliorating extremist tendencies. Absent the difficulty of organizing about our darker instincts, rationalizing and advertising them, I think we have good reason to fear. Humans are tribal through and through, as prone to acts of outgroup violence as ingroup self-sacrifice. On the cognitive ecological picture, it just so happens that technological progress and moral/political progress have marched hand in hand thus far. The bulk of our prosocial, democratic institutions were developed—at horrendous cost, no less—to maximize the ‘better angels’ of our natures and to minimize the worst, to engineer the kind of cognitive ecologies we required to flourish in the new social and technical environments—such as the industrial concentration of material dependency—falling out of the Renaissance and Enlightenment.

I readily acknowledge that better accounts can be found for the social phenomena considered above: what I contend is that all of those accounts will involve some nuanced understanding of the heuristic nature of human cognition and the kinds of ecological invariance they take for granted. My further contention is that any adequate understanding of that heuristic nature raises the likelihood, perhaps even the inevitability, that human social cognition will effectively break down altogether. The problem lies in the radically heuristic nature of the cognitive modes we use to understand each other and ourselves. Since the complexity of our biocomputational nature renders it intractable, we had to develop ways of predicting/explaining/manipulating behaviour that have nothing to do with the brains behind that behaviour, and everything to do with its impact on our reproductive fortunes. Social problem-solving, in other words, depends on the stability of a very specific cognitive ecology, one entirely innocent to the possibility of AI.

For me, the most significant revelation from the Ashley Madison scandal was the ease with which men were fooled into thinking they were attracting female interest. And this wasn’t just an artifact of the venue: Ito’s MIT colleague Sherry Turkle, in addition to systematically describing the impact of technology on interpersonal relationships, often warns of the ease with which “Darwinian buttons” can be pushed. What makes simple heuristics so powerful is precisely what renders them so vulnerable (and it’s no accident that AI is struggling to overcome this issue now): they turn on cues physically correlated to the systems they track. Break those correlations, and those cues are connected to nothing at all, and we enter Crash Space, the kind of catastrophic cognitive ecological failure that warns away everyone but philosophers.

Virtual and Augmented Reality, or even Vegas magic acts, provide excellent visual analogues. Whether one looks at stereoscopic 3-D systems like Oculus Rift, or the much-ballyhooed ‘biomimetics’ of Magic Leap, or the illusions of David Copperfield, the idea is to cue visual environments that do not exist as effectively and as economically as possible. Goertzel and Levesque and others can keep pounding at the gates of general cognition (which may exist, who knows), but research like that of the late Clifford Nass is laying bare the landscape of cues comprising human social cognition, and given the relative resources required, it seems all but inevitable that the ‘taking to be’ approach, designing AIs focused not so much on being a genuine agent (whatever that is) as cuing the cognition of one, will sweep the field. Why build Disney World when you can project it? Developers will focus on the illusion, which they will refine and refine until the show becomes (Turing?) indistinguishable from the real thing—from the standpoint of consumers.

The differences being, 1) that the illusion will be perspectivally robust (we will have no easy way of seeing through it); and 2) the illusion will be a sociocognitive one. As AI colonizes more and more facets of our lives, our sociocognitive intuitions will become increasingly unreliable. This prediction, I think, is every bit as reliable as the prediction that the world’s ecosystems will be increasingly disrupted as human activity colonizes more and more of the world. Human social cognition turns access to cues into behaviour solving otherwise intractable biological brains—this is a fact. Algorithms are set to flood this space, to begin cuing social cognition to solve biological brains in the absence of any biological brains. Neil Lawrence likens the consequences to the creation of ‘System Zero,’ an artificial substratum for the System 1 (automatic, unconscious) and System 2 (deliberate, conscious) organization of human cognition. He writes:

“System Zero will come to understand us so fully because we expose to it our inner most thoughts and whims. System Zero will exploit massive interconnection. System Zero will be data rich. And just like an elephant, System Zero will never forget.”

Even as we continue attempting to solve it with systems we evolved to solve one another—a task which is going to remain as difficult as it always has, and will likely grow less attractive as fantasy surrogates become increasingly available. Talk about Systems over Subjects! The ecology of human meaning, the shared background allowing us to resolve conflict and to trust, will be progressively exploited and degraded—like every other ancestral ecology on this planet. When I wax grandiloquent (I am a crazy fantasy writer after all), I call this the semantic apocalypse.

I see no way out. Everyone thinks otherwise, but only because the way that human cognition neglects cognitive ecology generates the illusion of unlimited, unconstrained cognitive capacity. And this, I think, is precisely the illusion informing Ito and Howe’s theory of human nature…

Speaking of which, as I said, I found myself wondering what this theory might be as I read the book. I understood I wasn’t the target audience of the book, so I didn’t see its absence as a failing so much as unfortunate for readers like me, always angling for the hard questions. And so it niggled and niggled, until finally, I reached the last paragraph of the last page and encountered this:

“Human beings are fundamentally adaptable. We created a society that was more focussed on our productivity than our adaptability. These principles will help you prepare to be flexible and able to learn the new roles and to discard them when they don’t work anymore. If society can survive the initial whiplash when we trade our running shoes for a supersonic jet, we may yet find that the view from the jet is just what we’ve been looking for.” 250

This first claim, uplifting as it sounds, is simply not true. Human beings, considered individually or collectively, are not capable of adapting to any and every circumstance. Intuitions systematically misfire all the time. I appreciate how believing as much balms the conscience of those in the innovation game, but it is simply not true. And how could it be, when it entails that humans somehow transcend ecology, which is a far different claim than saying humans, relative to other organisms, are capable of spanning a wide variety of ecologies? So long as human cognition is heuristic it depends on environmental invariances, like everything else biological. Humans are not capable of transcending system, which is precisely why we need to think the human in systematic terms, and to look at the impact of AI ecologically.

What makes Whiplash such a valuable book (aside from the entertainment factor) is that it is ecologically savvy. Ito and Howe’s dominant metaphor is that of adaptation and ecology. The old business habitat, they argue, has collapsed, leaving old business animals in the ecological lurch. The solution they offer is heuristic, a set of maxims meant to transform (at a sub-ideological level no less!) old business animals into newer, more adaptable ones. The way to solve the problem of innovation uncertainty is to contribute to that problem in the right way—be more innovative. But they fail to consider the ecological dimensions of this imperative, to see how feeding acceleration amounts to the inevitable destruction of cognitive ecologies, how the old meaning habitat is already collapsing, leaving old meaning animals in the ecological lurch, grasping for lies because those, at least, they can recognize.

They fail to see how their local survival guide likely doubles as a global suicide manual.


 

PS: The Big Picture

“In the past twenty-five years,” Ito and Howe write, “we have moved from a world dominated by simple systems to a world beset and baffled by complex systems” (246). This claim caught my attention because it is both true and untrue, depending how you look at it. We are pretty much the most complicated thing we know of in the universe, so it’s certainly not the case that we’ve ever dwelt in a world dominated by simple systems. What Ito and Howe are referring to, of course, is our tools. We are moving from a world dominated by simple tools to a world beset and baffled by complex ones. Since these tools facilitate tool-making, we find the great ratchet that lifted us out of the hominid fog clicking faster and faster and faster.

One of these ‘simple tools’ is what we call a ‘company’ or ‘business,’ an institution itself turning on the systematic application of simple tools, ones that intrinsically value authority over emergence, push over pull, maps over compasses, safety over risk, compliance over disobedience, theory over practice, ability over diversity, strength over resilience, and objects over systems. In the same way the simplicity of our physical implements limited the damage they could do to our physical ecologies, the simplicity of our cognitive tools limited the damage they could do to our cognitive ecology. It’s important to understand that the simplicity of these tools is what underwrites the stability of the underlying cognitive ecology. As the growing complexity and power of our physical tools intensified the damage done to our physical ecologies, the growing complexity and power of our cognitive tools is intensifying the damage done to our cognitive ecologies.

Now, two things. First, this analogy suggests that not all is hopeless, that the same way we can use the complexity and power of our physical tools to manage and prevent the destruction of our physical environment, we should be able to use the complexity and power of our cognitive tools to do the same. I concede the possibility, but I think the illusion of noocentrism (the cognitive version of geocentrism) is simply too profound. I think people will endlessly insist on the freedom to concede their autonomy. System Zero will succeed because it will pander ever so much better than a cranky old philosopher could ever hope to.

Second, notice how this analogy transforms the nature of the problem confronting that old animal, business, in the light of radical ecological change. Ancestral human cognitive ecology possessed a shallow present and a deep future. For all his ignorance, a yeoman chewing his calluses in the field five hundred years ago could predict that his son would possess a life very much resembling his own. All the obsolete items that Ito and Howe consider are artifacts of a shallow present. When the world is a black box, when you have no institutions like science bent on the systematic exploration of solution space, the solutions happened upon are generally lucky ones. You hold onto the tools you trust, because it’s all guesswork otherwise and the consequences are terminal. Authority, Push, Compliance, and so on are all heuristics in their own right, all ways of dealing with supercomplicated systems (bunches of humans), but selected for cognitive ecologies where solutions were both precious and abiding.

Oh, how things have changed. Ambient information sensitivity, the ability to draw on everything from internet search engines, to Big Data, to scientific knowledge more generally, means that businesses have what I referred to earlier as a deep present, a vast amount of information and capacity to utilize in problem solving. This allows them to solve systems as systems (the way science does) and abandon the limitations of not only object thinking, but (and this is the creepy part) subject thinking as well. It allows them to correct for faulty path-dependencies by distributing problem-solving among a diverse array of individuals. It allows them to rationalize other resources as well, to pull what they need when they need it rather than pushing warehoused resources.

Growing ambient information sensitivity means growing problem-solving economy—the problem is that this economy means accelerating cognitive ecological transformation. The cheaper optimization becomes, the more transient it becomes, simply because each and every new optimization transforms, in ways large or small but generally unpredictable, the ecology (the network of correlations) prior heuristic optimizations require to be effective. Call this the Optimization Spiral.

This is the process Ito and Howe are urging the business world to climb aboard, to become what might be called meta-ecological institutions, entities designed in the first instance, not to build cars or to mediate social relations or to find information on the web, but to evolve. As an institutionalized bundle of heuristics, a business’s ability to climb the Optimization Spiral, to survive accelerating ecological change, turns on its ability to relinquish the old while continually mimicking, tinkering, and birthing with the new. Thus the value of disobedience and resilience and practical learning: what Ito and Howe are advocating is more akin to the Precambrian Explosion or the rise of Angiosperms than simply surviving extinction. The meta-heuristics they offer, the new guiding mythologies, are meant to encapsulate the practical bases of evolvability itself… They’re teaching ferns how to grow flowers.

And stepping back to take the systems view they advocate, one cannot but feel an admixture of awe and terror, and wonder if they aren’t sketching the blueprint for an entirely unfathomable order of life, something simultaneously corporate and corporeal.

The Death of Wilson: How the Academic Left Created Donald Trump

by rsbakker


People need to understand that things aren’t going to snap back into magical shape once Trump becomes archive footage. The Economist had a recent piece on all the far-right demagoguery in the past, and though they stress the impact that politicians like Goldwater have had subsequent to their electoral losses, they imply that Trump is part of a cyclical process, essentially more of the same. Perhaps this might have been the case were this anything but the internet age. For all we know, things could skid madly out of control.

Society has been fundamentally rewired. This is a simple fact. Remember Home Improvement, how Tim would screw something up, then wander into the backyard to lay his notions and problems on his neighbour Wilson, who would only ever appear as a cap over the fence line? Tim was hands on, but interpersonally incompetent, while Wilson was bookish and wise to the ways of the human heart—as well as completely obscured save for his eyes and various caps by the fence between them.

This is a fantastic metaphor for the communication of ideas before the internet and its celebrated ability to ‘bring us together.’ Before, when you had chauvinist impulses, you had to fly them by whoever was available. Pre-internet, extreme views were far more likely to be vetted by more mainstream attitudes. Simple geography combined with the limitations of analogue technology had the effect of tamping the prevalence of such views down. But now Tim wouldn’t think of hassling Wilson over the fence, not when he could do a simple Google and find whatever he needed to confirm his asinine behaviour. Our chauvinistic impulses no longer need to run any geographically constrained social gauntlet to find articulation and rationalization. No matter how mad your beliefs, evidence of their sanity is only ever a few keystrokes away.

This has to have some kind of aggregate, long-term effect–perhaps a dramatic one. The Trump phenomenon isn’t the manifestation of an old horrific contagion following the same old linear social vectors; it’s the outbreak of an old horrific contagion following new nonlinear social vectors. Trump hasn’t changed anything, save identifying and exploiting an ecological niche that was already there. No one knows what happens next. Least of all him.

What’s worse, with the collapse of geography comes the collapse of fences. A phrase like “cretinization of the masses” is simply one Google search away as well. Before, Wilson would have been snickering behind that fence, hanging with his friends and talking about his moron neighbour, who really is a nice guy, you know, but needs help to think clearly all the same. Now the fence is gone, and Tim can finally see Wilson for the condescending, self-righteous bigot he has always been.

Did I just say ‘bigot’? Surely… But this is what Trump supporters genuinely think. They think ‘liberal cultural elites’ are bigoted against them. As implausible as his arguments are, Murray is definitely tracking a real social phenomenon in Coming Apart. A good chunk of white America feels roundly put upon, attacked economically and culturally. No bonus this Christmas. No Christmas tree at school. Why should a minimum wage retail worker think they somehow immorally benefit by dint of blue eyes and pale skin? Why should they listen to some bohemian asshole who’s both morally and intellectually self-righteous? Why shouldn’t they feel aggrieved on all sides, economically and culturally disenfranchised?

Who celebrates them? Aside from Donald Trump.


You have been identified as an outgroup competitor.

Last week, Social Psychological and Personality Science published a large study conducted by William Chopik, a psychologist out of Michigan State University, showing the degree to which political views determine social affiliations: it turns out that conservatives generally don’t know any Clinton supporters and liberals generally don’t know any Trump supporters. Americans seem to be spontaneously segregating along political lines.

Now I’m Canadian, which, although it certainly undermines the credibility of my observations on the Trump phenomenon in some respects, actually does have its advantages. The whole thing is curiously academic, for Canadians, watching our cousins to the south play hysterical tug-o-war with their children’s future. What’s more, even though I’m about as academically institutionalized as a human can be, I’m not an academic, and I have steadfastly resisted the tendency of the highly educated to surround themselves with people who are every bit as institutionalized—or at least smitten—by academic culture.

I belong to no tribe, at least not clearly. Because of this, I have Canadian friends who are, indeed, Trump supporters. And I’ve been whaling on them, asking questions, posing arguments, and they have been whaling back. Precisely because we are Canadian, the whole thing is theatre for us, allowing, I like to think, for a brand of honesty that rancour and defensiveness would muzzle otherwise.

When I get together with my academic friends, however, something very curious happens whenever I begin reporting these attitudes: I get interrupted. “But-but, that’s just idiotic/wrong/racist/sexist!” And that’s when I begin whaling on them, not because I don’t agree with their estimation, but because, unlike my academic confreres, I don’t hold Trump supporters responsible. I blame the academics instead. Aren’t they the ‘critical thinkers’? What else did they think the ‘cretins’ would do? Magically seize upon their enlightened logic? Embrace the wisdom of those who openly call them fools?

Fact is, you’re the ones who jumped off the folk culture ship.

The Trump phenomenon falls into the wheelhouse of what has been an old concern of mine. For more than a decade now, I’ve been arguing that the social habitat of intellectual culture is collapsing, and that the persistence of the old institutional organisms is becoming more and more socially pernicious. Literature professors, visual artists, critical theorists, literary writers, cultural critics, intellectual historians and so on all continue acting and arguing as though this were the 20th century… as if they were actually solving something, instead of making matters worse.

See, before, when a good slice of media flushed through bottlenecks that they mostly controlled, the academic left could afford to indulge in the same kind of ingroup delusions that afflict all humans. The reason I’m always interrupted in the course of reporting the attitudes of my Trump-supporting friends is simply that, from an ingroup perspective, they do not matter.

More and more research is converging upon the notion that the origins of human cooperation lie in human enmity. Think Band of Brothers only in an evolutionary context. In the endless ‘wars before civilization’ one might expect those groups possessing members willing to sacrifice themselves for the good of their fellows would prevail in territorial conflicts against groups possessing members inclined to break and run. Morality has been cut from the hip of murder.

This thesis is supported by the radical differences in our ability to ‘think critically’ when interacting with ingroup confederates as opposed to outgroup competitors. We are all but incapable of listening, and therefore responding rationally, to those we perceive as threats. This is largely why I think literature, minimally understood as fiction that challenges assumptions, is all but dead. Ask yourself: Why is it so easy to predict that so very few Trump supporters have read Underworld? Because literary fiction caters to the likeminded, and now, thanks to the precision of the relationship between buyer and seller, it is only read by the likeminded.

But of course, whenever you make these kinds of arguments to academic liberals you are promptly identified as an outgroup competitor, and you are assumed to have some ideological or psychological defect preventing genuine critical self-appraisal. For all their rhetoric regarding ‘critical thinking,’ academic liberals are every bit as thin-skinned as Trump supporters. They too feel put upon, besieged. I gave up making this case because I realized that academic liberals would only be able to hear it coming from the lips of one of their own, and even then, only after something significant enough happened to rattle their faith in their flattering institutional assumptions. They know that institutions are self-regarding, they admit they are inevitably tarred by the same brush, but they think knowing this somehow makes them ‘self-critical’ and so less prone to ingroup dysrationalia. Like every other human on the planet, they agree with themselves in ways that flatter themselves. And they direct their communication accordingly.

I knew it was only a matter of time before something happened. Wilson was dead. My efforts to eke out a new model, to surmount cultural balkanization, motivated me to engage in ‘blog wars’ with two very different extremists on the web (both of whom would be kind enough to oblige my predictions). This experience vividly demonstrated to me how dramatically the academic left was losing the ‘culture wars.’ Conservative politicians, meanwhile, were becoming more aggressively regressive in their rhetoric, more willing to publicly espouse chauvinisms that I had assumed safely buried.

The academic left was losing the war for the hearts and minds of white America. But so long as enrollment remained steady and book sales remained strong, they remained convinced that nothing fundamental was wrong with their model of cultural engagement, even as technology assured a greater match between them and those largely approving of them. Only now, with Trump, are they beginning to realize the degree to which the technological transformation of their habitat has rendered them culturally ineffective. As George Saunders writes in “Who Are All These Trump Supporters?” in The New Yorker:

Intellectually and emotionally weakened by years of steadily degraded public discourse, we are now two separate ideological countries, LeftLand and RightLand, speaking different languages, the lines between us down. Not only do our two subcountries reason differently; they draw upon non-intersecting data sets and access entirely different mythological systems. You and I approach a castle. One of us has watched only “Monty Python and the Holy Grail,” the other only “Game of Thrones.” What is the meaning, to the collective “we,” of yon castle? We have no common basis from which to discuss it. You, the other knight, strike me as bafflingly ignorant, a little unmoored. In the old days, a liberal and a conservative (a “dove” and a “hawk,” say) got their data from one of three nightly news programs, a local paper, and a handful of national magazines, and were thus starting with the same basic facts (even if those facts were questionable, limited, or erroneous). Now each of us constructs a custom informational universe, wittingly (we choose to go to the sources that uphold our existing beliefs and thus flatter us) or unwittingly (our app algorithms do the driving for us). The data we get this way, pre-imprinted with spin and mythos, are intensely one-dimensional.

The first, most significant thing to realize about this passage is that it’s written by George Saunders for The New Yorker, a premier ingroup cultural authority on a premier ingroup cultural podium. On the view given here, Saunders pretty much epitomizes the dysfunction of literary culture, an academic at Syracuse University, the winner of countless literary awards (which is to say, better at impressing the likeminded than most), and, I think, clearly a genius of some description.

To provide some rudimentary context, Saunders attends a number of Trump rallies, making observations and engaging Trump supporters and protesters alike (but mostly the former) asking gentle questions, and receiving, for the most part, gentle answers. What he describes observation-wise are instances of ingroup psychology at work, individuals, complete strangers in many cases, making forceful demonstrations of ingroup solidarity and resolve. He chronicles something countless humans have witnessed over countless years, and he fears for the same reasons all those generations have feared. If he is puzzled, he is unnerved more.

He isolates two culprits in the above passage, the ‘intellectual and emotional weakening brought about by degraded public discourse,’ and more significantly, the way the contemporary media landscape has allowed Americans to ideologically insulate themselves against the possibility of doubt and negotiation. He blames, essentially, the death of Wilson.

As a paradigmatic ‘critical thinker,’ he’s careful to throw his own ‘subject position’ into the mix, to frame the problem in a manner that distributes responsibility equally. It’s almost painful to read, at times, watching him walk the tightrope of hypocrisy, buffeted by gust after gust of ingroup outrage and piety, trying to exemplify the openness he mistakes for his creed, but sounding only lyrically paternalistic in the end–at least to ears not so likeminded. One can imagine the ideal New Yorker reader, pursing their lips in empathic concern, shaking their heads with wise sorrow, thinking…

But this is the question, isn’t it? What do all these aspirational gestures to openness and admissions of vague complicity mean when the thought is, inevitably, fools? Is this not the soul of bad faith? To offer up portraits of tender humanity in extremis as proof of insight and impartiality, then to end, as Saunders ends his account, suggesting that Trump has been “exploiting our recent dullness and aversion to calling stupidity stupidity, lest we seem too precious.”

Academics… averse to calling stupidity stupid? Trump taking advantage of this aversion? Lordy.

This article, as beautiful as it is, is nothing if not a small monument to being precious, to making faux self-critical gestures in the name of securing very real ingroup imperatives. We are the sensitive ones, Saunders is claiming. We are the light that lets others see. And these people are the night of American democracy.

He blames the death of Wilson and the excessive openness of his ingroup, the error of being too open, too critically minded…

Why not just say they’re jealous because he and his friends are better looking?

If Saunders were at all self-critical, anything but precious, he would be asking questions that hurt, that cut to the bone of his aggrandizing assumptions, questions that become obvious upon asking them. Why not, for instance, ask Trump supporters what they thought of CivilWarLand in Bad Decline? Well, because the chances of any of them reading any of his work aside from “CommComm” (and only then because it won the World Fantasy Award in 2010) were virtually nil.

So then why not ask why none of these people has read anything written by him or any of his friends or their friends? Well, he’s already given us a reason for that: the death of Wilson.

Okay, so Wilson is dead, effectively rendering your attempts to reach and challenge those who most need to be challenged with your fiction toothless. And so you… what? Shrug your shoulders? Continue merely entertaining those whom you find the least abrasive?

If I’m right, then what we’re witnessing is so much bigger than Trump. We are tender. We are beautiful. We are vicious. And we are capable of believing anything to secure what we perceive as our claim. What matters here is that we’ve just plugged billions of stone-age brains chiselled by hundreds of millions of years of geography into a world without any. We have tripped across our technology and now we find ourselves in crash space, a domain where the transformation of our problems has rendered our traditional solutions obsolete.

It doesn’t matter if you actually are on their side or not, whatever that might mean. What matters is that you have been identified as an outgroup competitor, and that none of the authority you think your expertise warrants will be conceded to you. All the bottlenecks that once secured your universal claims are melting away, and you need to find some other way to discharge your progressive, prosocial aspirations. Think of all the sensitive young talent sifting through your pedagogical fingers. What do you teach them? How to be wise? How to contribute to their community? Or how to play the game? How to secure the approval of those just like you—and so, how to systematically alienate them from their greater culture?

So. Much. Waste. So much beauty, wisdom, all of it aimed at nowhere… tossed, among other places, into the heap of crumpled Kleenexes called The New Yorker.

Who would have thunk it? The best way to pluck the wise from the heart of our culture was to simply afford them the means to associate almost exclusively with one another, then trust to human nature, our penchant for evolving dialects and values in isolation. The edumacated no longer have the luxury of speaking among themselves for the edification of those servile enough to listen of their own accord. The ancient imperative to actively engage, to have the courage to reach out to the unlikeminded, to write for someone else, has been thrust back upon the artist. In the days of Wilson, we could trust to argument, simply because extreme thoughts had to run a gauntlet of moderate souls. Not so anymore.

If not art, then argument. If not argument, then art. Invade folk culture. Glory in delighting those who make your life possible–and take pride in making them think.

Sometimes they’re the idiot and sometimes we’re the idiot–that seems to be the way this thing works. To witness so many people so tangled in instinctive chauvinisms and cartoon narratives is to witness a catastrophic failure of culture and education. This is what Trump is exploiting, not some insipid reluctance to call stupid stupid.

I was fairly bowled over a few weeks back when my neighbour told me he was getting his cousin in Florida to send him a Trump hat. I immediately asked him if he was crazy.

“Name one Donald Trump who has done right by history!” I demanded, attempting to play Wilson, albeit minus the decorum and the fence.

Shrug. Wild eyes and a genuine smile. “Then I hope he burns it down.”

“How could you mean that?”

“I dunno, brother. Can’t be any worse than this fucking shit.”

Nothing I could say could make him feel any different. He’s got the internet.

Visions of the Semantic Apocalypse: A Critical Review of Yuval Noah Harari’s Homo Deus

by rsbakker


“Studying history aims to loosen the grip of the past,” Yuval Noah Harari writes. “It enables us to turn our heads this way and that, and to begin to notice possibilities that our ancestors could not imagine, or didn’t want us to imagine” (59). Thus does the bestselling author of Sapiens: A Brief History of Humankind rationalize his thoroughly historical approach to the question of our technological future in his fascinating follow-up, Homo Deus: A Brief History of Tomorrow. And so does he identify himself as a humanist, committed to freeing us from what Kant would have called ‘our tutelary natures.’ Like Kant, Harari believes knowledge will set us free.

Although by the end of the book it becomes difficult to understand what ‘free’ might mean here.

As Harari himself admits, “once technology enables us to re-engineer human minds, Homo sapiens will disappear, human history will come to an end and a completely new process will begin, which people like you and me cannot comprehend” (46). Now if you’re interested in mapping the conceptual boundaries of comprehending the posthuman, I heartily recommend David Roden’s skeptical tour de force, Posthuman Life: Philosophy at the Edge of the Human. Homo Deus, on the other hand, is primarily a book chronicling the rise and fall of contemporary humanism against the backdrop of apparent ‘progress.’ The most glaring question, of course, is whether Harari’s academic humanism possesses the resources required to diagnose the problems posed by the collapse of popular humanism. This challenge—the problem of using obsolescent vocabularies to theorize, not only the obsolescence of those vocabularies, but the successor vocabularies to come—provides an instructive frame through which to understand the successes and failures of this ambitious and fascinating book.

How good is Homo Deus? Well, for years people have been asking me for a lay point of entry for the themes explored here on Three Pound Brain and in my novels, and I’ve always been at a loss. No longer. Anyone surfing for reviews of the book is certain to find individuals carping about Harari not possessing the expertise to comment on x or y, but these critics never get around to explaining how any human could master all the silos involved in such an issue (while remaining accessible to a general audience, no less). Such criticisms amount to advocating that no one dare interrogate what could be the greatest challenge to ever confront humanity. In addition to erudition, Harari has the courage to concede ugly possibilities, the sensitivity to grasp complexities (as well as the limits they pose), and the creativity to derive something communicable. Even though I think his residual humanism conceals the true profundity of the disaster awaiting us, he glimpses more than enough to alert millions of readers to the shape of the Semantic Apocalypse. People need to know human progress likely has a horizon, a limit, that doesn’t involve environmental catastrophe or creating some AI God.

The problem is far more insidious and retail than most yet realize.

The grand tale Harari tells is a vaguely Western Marxist one, wherein culture (following Lukacs) is seen as a primary enabler of relations of power, a fundamental component of the ‘social apriori.’ The primary narrative conceit of such approaches belongs to the ancient Greeks: “[T]he rise of humanism also contains the seeds of its downfall,” Harari writes. “While the attempt to upgrade humans into gods takes humanism to its logical conclusion, it simultaneously exposes humanism’s inherent flaws” (65). For all its power, humanism possesses intrinsic flaws, blindnesses and vulnerabilities that will eventually lead it to ruin. In a sense, Harari is offering us a ‘big history’ version of negative dialectic, attempting to show how the internal logic of humanism runs afoul of the very power it enables.

But that logic is also the very logic animating Harari’s encyclopedic account. For all its syncretic innovations, Homo Deus uses the vocabularies of academic or theoretical humanism to chronicle the rise and fall of popular or practical humanism. In this sense, the difference between Harari’s approach to the problem of the future and my own could not be more pronounced. On my account, academic humanism, far from enjoying critical or analytical immunity, is best seen as a crumbling bastion of pre-scientific belief, the last gasp of traditional apologia, the cognitive enterprise most directly imperilled by the rising technological tide, while we can expect popular humanism to linger for some time to come (if not indefinitely).

Homo Deus, in fact, exemplifies the quandary presently confronting humanists such as Harari, how the ‘creeping delegitimization’ of their theoretical vocabularies is slowly robbing them of any credible discursive voice. Harari sees the problem, acknowledging that “[w]e won’t be able to grasp the full implication of novel technologies such as artificial intelligence if we don’t know what minds are” (107). But the fact remains that “science knows surprisingly little about minds and consciousness” (107). We presently have no consensus-commanding, natural account of thought and experience—in fact, we can’t even agree on how best to formulate semantic and phenomenal explananda.

Humanity as yet lacks any workable, thoroughly naturalistic, theory of meaning or experience. For Harari this means the bastion of academic humanism, though besieged, remains intact, at least enough for him to advance his visions of the future. Despite the perplexity and controversies occasioned by our traditional vocabularies, they remain the only game in town, the very foundation of countless cognitive activities. “[T]he whole edifice of modern politics and ethics is built upon subjective experiences,” Harari writes, “and few ethical dilemmas can be solved by referring strictly to brain activities” (116). Even though his posits lie nowhere in the natural world, they nevertheless remain subjective realities, the necessary condition of solving countless problems. “If any scientist wants to argue that subjective experiences are irrelevant,” Harari writes, “their challenge is to explain why torture or rape are wrong without reference to any subjective experience” (116).

This is the classic humanistic challenge posed to naturalistic accounts, of course, the demand that they discharge the specialized functions of intentional cognition the same way intentional cognition does. This demand amounts to little more than a canard, of course, once we appreciate the heuristic nature of intentional cognition. The challenge intentional cognition poses to natural cognition is to explain, not replicate, its structure and dynamics. We clearly evolved our intentional cognitive capacities, after all, to solve problems natural cognition could not reliably solve. This combination of power, economy, and specificity is the very thing that a genuinely naturalistic theory of meaning (such as my own) must explain.

 

“… fiction might thereby become the most potent force on earth, surpassing even wayward asteroids and natural selection. Hence if we want to understand our future, cracking genomes and crunching numbers is hardly enough. We must decipher the fictions that give meaning to the world.”

 

So moving forward it is important to understand how his theoretical approach elides the very possibility of a genuinely post-intentional future. Because he has no natural theory of meaning, he has no choice but to take the theoretical adequacy of his intentional idioms for granted. But if his intentional idioms possess the resources he requires to theorize the future, they must somehow remain out of play; his discursive ‘subject position’ must possess some kind of immunity to the scientific tsunami climbing our horizons. His very choice of tools limits the radicality of the story he tells. No matter how profound, how encompassing, the transformational deluge, Harari must somehow remain dry upon his theoretical ark. And this, as we shall see, is what ultimately swamps his conclusions.

But if the Hard Problem exempts his theoretical brand of intentionality, one might ask why it doesn’t exempt all intentionality from scientific delegitimation. What makes the scientific knowledge of nature so tremendously disruptive to humanity is the fact that human nature is, when all is said and done, just more nature. Conceding general exceptionalism, the thesis that humans possess something miraculous distinguishing them from nature more generally, would undermine the very premise of his project.

Without any way out of this bind, Harari fudges, basically. He remains silent on his own intentional (even humanistic) theoretical commitments, while attacking exceptionalism by expanding the franchise of meaning and consciousness to include animals: whatever intentional phenomena consist in, they are ultimately natural to the extent that animals are natural.

But now the problem has shifted. If humans dwell on a continuum with nature more generally, then what explains the Anthropocene, our boggling dominion of the earth? Why do humans stand so drastically apart from nature? The capacity that most distinguishes humans from their nonhuman kin, Harari claims (in line with contemporary theories), is the capacity to cooperate. He writes:

“the crucial factor in our conquest of the world was our ability to connect many humans to one another. Humans nowadays completely dominate the planet not because the individual human is far more nimble-fingered than the individual chimp or wolf, but because Homo sapiens is the only species on earth capable of cooperating flexibly in large numbers.” 131

He proposes a ‘shared fictions’ theory of mass social coordination (unfortunately, he doesn’t engage research on groupishness, which would have provided him with some useful, naturalistic tools, I think). He posits an intermediate level of existence between the objective and subjective, the ‘intersubjective,’ consisting of our shared beliefs in imaginary orders, which serve to distribute authority and organize our societies. “Sapiens rule the world,” he writes, “because only they can weave an intersubjective web of meaning; a web of laws, forces, entities and places that exist purely in their common imagination” (149). This ‘intersubjective web’ provides him with the theoretical level of description he thinks crucial to understanding our troubled cultural future.

He continues:

“During the twenty-first century the border between history and biology is likely to blur not because we will discover biological explanations for historical events, but rather because ideological fictions will rewrite DNA strands; political and economic interests will redesign the climate; and the geography of mountains and rivers will give way to cyberspace. As human fictions are translated into genetic and electronic codes, the intersubjective reality will swallow up the objective reality and biology will merge with history. In the twenty-first century fiction might thereby become the most potent force on earth, surpassing even wayward asteroids and natural selection. Hence if we want to understand our future, cracking genomes and crunching numbers is hardly enough. We must decipher the fictions that give meaning to the world.” 151

The way Harari sees it, ideology, far from being relegated to the prescientific theoretical midden, is set to become all-powerful, a consumer of worlds. This launches his extensive intellectual history of humanity, beginning with the algorithmic advantages afforded by numeracy, literacy, and currency, how these “broke the data-processing limitations of the human brain” (158). Where our hunter-gatherer ancestors could at best coordinate small groups, “[w]riting and money made it possible to start collecting taxes from hundreds of thousands of people, to organise complex bureaucracies and to establish vast kingdoms” (158).

Harari then turns to the question of how science fits in with this view of fictions, the nature of the ‘odd couple,’ as he puts it:

“Modern science certainly changed the rules of the game, but it did not simply replace myths with facts. Myths continue to dominate humankind. Science only makes these myths stronger. Instead of destroying the intersubjective reality, science will enable it to control the objective and subjective realities more completely than ever before.” 179

Science is what renders objective reality compliant to human desire. Storytelling is what renders individual human desires compliant to collective human expectations, which is to say, intersubjective reality. Harari understands that the relationship between science and religious ideology is not one of straightforward antagonism: “science always needs religious assistance in order to create viable human institutions,” he writes. “Scientists study how the world functions, but there is no scientific method for determining how humans ought to behave” (188). Though science has plenty of resources for answering means-type questions—what you ought to do to lose weight, for instance—it lacks the resources to fix the ends that rationalize those means. Science, Harari argues, requires religion to the extent that it cannot ground the all-important fictions enabling human cooperation (197).

Insofar as science is a cooperative, human enterprise, it can only destroy one form of meaning on the back of some other meaning. By revealing the anthropomorphism underwriting our traditional, religious accounts of the natural world, science essentially ‘killed God’—which is to say, removed any divine constraint on our actions or aspirations. “The cosmic plan gave meaning to human life, but also restricted human power” (199). Like stage-actors, we had a plan, but our role was fixed. Unfixing that role, killing God, made meaning into something each of us has to find for ourselves. Harari writes:

“Since there is no script, and since humans fulfill no role in any great drama, terrible things might befall us and no power will come to save us, or give meaning to our suffering. There won’t be a happy ending or a bad ending, or any ending at all. Things just happen, one after the other. The modern world does not believe in purpose, only in cause. If modernity has a motto, it is ‘shit happens.’” 200

The absence of a script, however, means that anything goes; we can play any role we want to. With the modern freedom from cosmic constraint comes postmodern anomie.

“The modern deal thus offers humans an enormous temptation, coupled with a colossal threat. Omnipotence is in front of us, almost within our reach, but below us yawns the abyss of complete nothingness. On the practical level, modern life consists of a constant pursuit of power within a universe devoid of meaning.” 201

Or to give it the Adornian spin it receives here on Three Pound Brain: the madness of a society that has rendered means, knowledge and capital, its primary end. Thus the modern obsession with the accumulation of the power to accumulate. And thus the Faustian nature of our present predicament (though Harari, curiously, never references Faust), the fact that “[w]e think we are smart enough to enjoy the full benefits of the modern deal without paying the price” (201). Even though physical resources such as material and energy are finite, no such limit pertains to knowledge. This is why “[t]he greatest scientific discovery was the discovery of ignorance” (212): it spurred the development of systematic inquiry, and therefore the accumulation of knowledge, and therefore the accumulation of power, which, Harari argues, cuts against objective or cosmic meaning. The question is simply whether we can hope to sustain this process—defer payment—indefinitely.

“Modernity is a deal,” he writes, and for all its apparent complexities, it is very straightforward: “The entire contract can be summarised in a single phrase: humans agree to give up meaning in exchange for power” (199). For me, the best way of thinking about this process of exchanging meaning for power is in terms of what Weber called disenchantment: the very science that dispels our anthropomorphic fantasy worlds is the science that delivers technological power over the real world. This real-world power is what drives traditional delegitimation: even believers acknowledge the vast bulk of the scientific worldview, as do the courts and (ideally at least) all governing institutions outside religion. Science is a recursive institutional ratchet (‘self-correcting’), leveraging the capacity to leverage ever more capacity. Now, after centuries of sheltering behind walls of complexity, human nature finds itself at the intersection of multiple domains of scientific inquiry. Since we’re nothing special, just more nature, we should expect our burgeoning technological power over ourselves to increasingly delegitimate traditional discourses.

Humanism, on this account, amounts to an adaptation to the ways science transformed our ancestral ‘neglect structure,’ the landscape of ‘unknown unknowns’ confronting our prehistorical forebears. Our social instrumentalization of natural environments—our inclination to anthropomorphize the cosmos—is the product of our ancestral inability to intuit the actual nature of those environments. Information beyond the pale of human access makes no difference to human cognition. Cosmic meaning requires that the cosmos remain a black box: the more transparent science rendered that box, the more our rationales retreated to the black box of ourselves. The subjectivization of authority turns on how intentional cognition (our capacity to cognize authority) requires the absence of natural accounts to discharge ancestral functions. Humanism isn’t so much a grand revolution in thought as the result of the human remaining the last scientifically inscrutable domain standing. The rationalizations had to land somewhere. Since human meaning likewise requires that the human remain a black box, the vast industrial research enterprise presently dedicated to solving our nature does not bode well.

But this approach, economical as it is, isn’t available to Harari since he needs some enchantment to get his theoretical apparatus off the ground. As the necessary condition for human cooperation, meaning has to be efficacious. The ‘Humanist Revolution,’ as Harari sees it, consists in the migration of cooperative efficacy (authority) from the cosmic to the human. “This is the primary commandment humanism has given us: create meaning for a meaningless world” (221). Rather than scripture, human experience becomes the metric for what is right or wrong, and the universe, once the canvas of the priest, is conceded to the scientist. Harari writes:

“As the source of meaning and authority was relocated from the sky to human feelings, the nature of the entire cosmos changed. The exterior universe—hitherto teeming with gods, muses, fairies and ghouls—became empty space. The interior world—hitherto an insignificant enclave of crude passions—became deep and rich beyond measure” 234

This re-sourcing of meaning, Harari insists, is true whether or not one still believes in some omnipotent God, insofar as all the salient anchors of that belief lie within the believer, rather than elsewhere. God may still be ‘cosmic,’ but he now dwells beyond the canvas of nature, somewhere in the occluded frame, a place where only religious experience can access Him.

Man becomes ‘man the meaning maker,’ the trope that now utterly dominates contemporary culture:

“Exactly the same lesson is learned by Captain Kirk and Captain Jean-Luc Picard as they travel the galaxy in the starship Enterprise, by Huckleberry Finn and Jim as they sail down the Mississippi, by Wyatt and Billy as they ride their Harley-Davidsons in Easy Rider, and by countless other characters in myriad other road movies who leave their home town in Pennsylvania (or perhaps New South Wales), travel in an old convertible (or perhaps a bus), pass through various life-changing experiences, get in touch with themselves, talk about their feelings, and eventually reach San Francisco (or perhaps Alice Springs) as better and wiser individuals.” 241

Not only is experience the new scripture, it is a scripture that is being continually revised and rewritten, a meaning that arises out of the process of lived life (yet somehow always managing to conserve the status quo). In story after story, the protagonist must find some ‘individual’ way to derive their own personal meaning out of an apparently meaningless world. This is a primary philosophical motivation behind The Second Apocalypse, the reason why I think epic fantasy provides such an ideal narrative vehicle for the critique of modernity and meaning. Fantasy worlds are fantastic, especially fictional, because they assert the objectivity of what we now (implicitly or explicitly) acknowledge to be anthropomorphic projections. The idea has always been to invert the modernist paradigm Harari sketches above, to follow a meaningless character through a meaningful world, using Kellhus to recapitulate the very dilemma Harari sees confronting us now:

“What then, will happen once we realize that customers and voters never make free choices, and once we have the technology to calculate, design, or outsmart their feelings? If the whole universe is pegged to the human experience, what will happen once the human experience becomes just another designable product, no different in essence from any other item in the supermarket?” 277

And so Harari segues to the future and the question of the ultimate fate of human meaning; this is where I find his steadfast refusal to entertain humanistic conceit most impressive. One need not ponder ‘designer experiences’ for long, I think, to get a sense of the fundamental rupture with the past they represent. These once speculative issues are becoming ongoing practical concerns: “These are not just hypotheses or philosophical speculations,” simply because ‘algorithmic man’ is becoming a technological reality (284). Harari provides a whirlwind tour of unnerving experiments clearly implying trouble for our intuitions, a discussion that transitions into a consideration of the ways we can already mechanically attenuate our experiences. A good number of the examples he adduces have been considered here, all of them underscoring the same, inescapable moral: “Free will exists in the imaginary stories we humans have invented” (283). No matter what your philosophical persuasion, our continuity with the natural world is an established scientific fact. Humanity is not exempt from the laws of nature. If humanity is not exempt from the laws of nature, then the human mastery of nature amounts to the human mastery of humanity.

He turns, at this point, to Gazzaniga’s research showing the confabulatory nature of human rationalization (via split brain patients), and Daniel Kahneman’s account of ‘duration neglect’—another favourite of mine. He offers an expanded version of Kahneman’s distinction between the ‘experiencing self,’ that part of us that actually undergoes events, and the ‘narrating self,’ the part of us that communicates—derives meaning from—these experiences, essentially using the dichotomy as an emblem for the dual process models of cognition presently dominating cognitive psychological research. He writes:

“most people identify with their narrating self. When they say, ‘I,’ they mean the story in their head, not the stream of experiences they undergo. We identify with the inner system that takes the crazy chaos of life and spins out of it seemingly logical and consistent yarns. It doesn’t matter that the plot is filled with lies and lacunas, and that it is rewritten again and again, so that today’s story flatly contradicts yesterday’s; the important thing is that we always retain the feeling that we have a single unchanging identity from birth to death (and perhaps from even beyond the grave). This gives rise to the questionable liberal belief that I am an individual, and that I possess a consistent and clear inner voice, which provides meaning for the entire universe.” 299

Humanism, Harari argues, turns on our capacity for self-deception, the ability to commit to our shared fictions unto madness, if need be. He writes:

“Medieval crusaders believed that God and heaven provided their lives with meaning. Modern liberals believe that individual free choices provide life with meaning. They are all equally delusional.” 305

Social self-deception is our birthright, the ability to believe what we need to believe to secure our interests. This is why the science, though shaking humanistic theory to the core, has done so little to interfere with the practices rationalized by that theory. As history shows, we are quite capable of shovelling millions into the abattoir of social fantasy. This delivers Harari to yet another big theme explored both here and in Neuropath: the problems raised by the technological concretization of these scientific findings. As Harari puts it:

“However, once heretical scientific insights are translated into everyday technology, routine activities and economic structures, it will become increasingly difficult to sustain this double-game, and we—or our heirs—will probably require a brand new package of religious beliefs and political institutions. At the beginning of the third millennium, liberalism [the dominant variant of humanism] is threatened not by the philosophical idea that there are no free individuals but rather by concrete technologies. We are about to face a flood of extremely useful devices, tools and structures that make no allowance for the free will of individual humans. Can democracy, the free market and human rights survive this flood?” 305-6


The first problem, as Harari sees it, is one of diminishing returns. Humanism didn’t become the dominant world ideology because it was true; it overran the collective imagination of humanity because it enabled. Humanistic values, Harari explains, afforded our recent ancestors a wide variety of social utilities, efficiencies turning on the technologies of the day. Those technologies, it turns out, require human intelligence and the consciousness that comes with it. To depart from Harari, they are what David Krakauer calls ‘complementary technologies,’ tools that extend human capacity, as opposed to ‘competitive technologies,’ which render human capacities redundant.

Making humans redundant, of course, means making experience redundant, something which portends the systematic devaluation of human experience, or the collapse of humanism. Harari calls this process the ‘Great Decoupling’:

“Over the last decades there has been an immense advance in computer intelligence, but there has been exactly zero advance in computer consciousness. As far as we know, computers in 2016 are no more conscious than their prototypes in the 1950s. However, we are on the brink of a momentous revolution. Humans are in danger of losing their value, because intelligence is decoupling from consciousness.” 311

He’s quick to acknowledge all the problems yet confronting AI researchers, insisting that the trend unambiguously points toward ever expanding capacities. As he writes, “these technical problems—however difficult—need only be solved once” (317). The ratchet never stops clicking.

He’s also quick to block the assumption that humans are somehow exceptional: “The idea that humans will always have a unique ability beyond the reach of non-conscious algorithms is just wishful thinking” (319). He provides the (I think) terrifying example of David Cope, the University of California at Santa Cruz musicologist who has developed algorithms whose compositions strike listeners as more authentically human than compositions by humans such as J.S. Bach.

The second problem is the challenge of what (to once again depart from Harari) Neil Lawrence calls ‘System Zero,’ the question of what happens when our machines begin to know us better than we know ourselves. As Harari notes, this is already the case: “The shifting of authority from humans to algorithms is happening all around us, not as a result of some momentous governmental decision, but due to a flood of mundane choices” (345). Facebook can now guess your preferences better than your friends, your family, your spouse—and in some instances better than you yourself! He warns the day is coming when political candidates can receive real-time feedback via social media, when people can hear everything said about them always and everywhere. Projecting this trend leads him to envision something very close to Integration, where we become so embalmed in our information environments that “[d]isconnection will mean death” (344).

He writes:

“The individual will not be crushed by Big Brother; it will disintegrate from within. Today corporations and governments pay homage to my individuality and promise to provide medicine, education and entertainment customized to my unique needs and wishes. But in order to do so, corporations and governments first need to break me up into biochemical subsystems, monitor these subsystems with ubiquitous sensors and decipher their workings with powerful algorithms. In the process, the individual will transpire to be nothing but a religious fantasy.” 345

This is my own suspicion, and I think the process of subpersonalization—the neuroscientifically informed decomposition of consumers into economically relevant behaviours—is well underway. But I think it’s important to realize that as data accumulates, and researchers and their AIs find more and more ways to instrumentalize those data sets, what we’re really talking about are proliferating heuristic hacks (that happen to turn on neuroscientific knowledge). They need decipher us only so far as we comply. Also, the potential noise generated by a plethora of competing subpersonal communications seems to constitute an important structural wrinkle. It could be that the point most targeted by subpersonal hacking will at least preserve the old borders of the ‘self,’ fantasy that it was. Post-intentional ‘freedom’ could come to reside in the noise generated by commercial competition.

The third problem he sees for humanism lies in the almost certainly unequal distribution of the dividends of technology, a trope so well worn in narrative that we scarce need consider it here. It follows that liberal humanism, as an ideology committed to the equal value of all individuals, has scant hope of squaring the interests of the redundant masses against those of a technologically enhanced superhuman elite.

 

… this isn’t any mere cultural upheaval or social revolution, this is an unprecedented transformation in the history of life on this planet, the point when the evolutionary platform of behaviour, morphology, becomes the product of behaviour.

 

Under pretty much any plausible scenario you can imagine, the shared fiction of popular humanism is doomed. But as Harari has already argued, shared fictions are the necessary condition of social coordination. If humanism collapses, some kind of shared fiction has to take its place. And alas, this is where my shared journey with Harari ends. From this point forward, I think his analysis is largely an artifact of his own, incipient humanism.

Harari uses the metaphor of ‘vacuum,’ implying that humans cannot but generate some kind of collective narrative, some way of making their lives not simply meaningful to themselves, but more importantly, meaningful to one another. It is the mass resemblance of our narrative selves, remember, that makes our mass cooperation possible. [This is what misleads him, the assumption that ‘mass cooperation’ need be human at all by this point.] So he goes on to consider what new fiction might arise to fill the void left by humanism. The first alternative is ‘technohumanism’ (transhumanism, basically), which is bent on emancipating humanity from the authority of nature much as humanism was bent on emancipating humanity from the authority of tradition. Where humanists are free to think anything in their quest to actualize their desires, technohumanists are free to be anything in their quest to actualize their desires.

The problem is that the freedom to be anything amounts to the freedom to reengineer desire. So where the objective meaning, following one’s god (socialization), gave way to subjective meaning, following one’s heart (socialization), it remains entirely unclear what the technohumanist hopes to follow or to actualize. As soon as we gain power over our cognitive being the question becomes, ‘Follow which heart?’

Or as Harari puts it,

“Techno-humanism faces an impossible dilemma here. It considers human will the most important thing in the universe, hence it pushes humankind to develop technologies that can control and redesign our will. After all, it’s tempting to gain control over the most important thing in the world. Yet once we have such control, techno-humanism will not know what to do with it, because the sacred human will would become just another designer product.” 366

Which is to say, something arbitrary. Where humanism aims ‘to loosen the grip of the past,’ transhumanism aims to loosen the grip of biology. We really see the limits of Harari’s interpretative approach here, I think, as well as why he falls short of a definitive account of the Semantic Apocalypse. The reason that ‘following your heart’ can substitute for ‘following the god’ is that they amount to the very same claim, ‘trust your socialization,’ which is to say, your pre-existing dispositions to behave in certain ways in certain contexts. The problem posed by the kind of enhancement extolled by transhumanists isn’t that shared fictions must be ‘sacred’ to be binding, but that something neglected must be shared. Synchronization requires trust, the ability to simultaneously neglect others (and thus dedicate behaviour to collective problem solving) and yet predict their behaviour nonetheless. Absent this shared background, trust is impossible, and therefore synchronization is impossible. Cohesive, collective action, in other words, turns on a vast amount of evolutionary and educational stage-setting, common cognitive systems stamped with common forms of training, all of it ancestrally impervious to direct manipulation. Insofar as transhumanism promises to place the material basis of individual desire within the compass of individual desire, it promises to throw our shared background to the winds of whimsy. Transhumanism is predicated on the ever-deepening distortion of our ancestral ecologies of meaning.

Harari reads transhumanism as a reductio of humanism, the point where the religion of individual empowerment unravels the very agency it purports to empower. Since he remains, at least residually, a humanist, he places ideology—what he calls the ‘intersubjective’ level of reality—at the foundation of his analysis. It is the mover and shaker here, what Harari believes will stamp objective reality and subjective reality both in its own image.

And the fact of the matter is, he really has no choice, given he has no other way of generalizing over the processes underwriting the growing Whirlwind that has us in its grasp. So when he turns to digitalism (or what he calls ‘Dataism’), it appears to him to be the last option standing:

“What might replace desires and experiences as the source of all meaning and authority? As of 2016, only one candidate is sitting in history’s reception room waiting for the job interview. This candidate is information.” 366

Meaning has to be found somewhere. Why? Because synchronization requires trust, which requires shared commitments to shared fictions, stories expressing those values we hold in common. As we have seen, science cannot determine ends, only means to those ends. Something has to fix our collective behaviour, and if science cannot, we will perforce turn to some kind of religion…

But what if we were to automate collective behaviour? There’s a second candidate that Harari overlooks, one which I think is far, far more obvious than digitalism (which remains, for all its notoriety, an intellectual position—and a confused one at that, insofar as it has no workable theory of meaning/cognition). What will replace humanism? Atavism… Fantasy. For all the care Harari places in his analyses, he overlooks how investing AI with ever-increasing social decision-making power simultaneously divests humans of that power, thus progressively relieving us of the need for shared values. The more we trust to AI, the less trust we require of one another. We need only have faith in the efficacy of our technical (and very objective) intermediaries; the system synchronizes us automatically in ways we need not bother knowing. Ideology ceases to be a condition of collective action. We need not have any stories regarding our automated social ecologies whatsoever, so long as we mind the diminishing explicit constraints the system requires of us.

Outside our dwindling observances, we are free to pursue whatever story we want. Screw our neighbours. And what stories will those be? Well, the kinds of stories we evolved to tell, which is to say, the kinds of stories our ancestors told to each other. Fantastic stories… such as those told by George R. R. Martin, Donald Trump, myself, or the Islamic state. Radical changes in hardware require radical changes in software, unless one has some kind of emulator in place. You have to be sensible to social change to ideologically adapt to it. “Islamic fundamentalists may repeat the mantra that ‘Islam is the answer,’” Harari writes, “but religions that lose touch with the technological realities of the day lose their ability even to understand the questions being asked” (269). But why should incomprehension or any kind of irrationality disqualify the appeal of Islam, if the basis of the appeal primarily lies in some optimization of our intentional cognitive capacities?

Humans are shallow information consumers by dint of evolution, and deep information consumers by dint of modern necessity. As that necessity recedes, it stands to reason our patterns of consumption will recede with it, that we will turn away from the malaise of perpetual crash space and find solace in ever more sophisticated simulations of worlds designed to appease our ancestral inclinations. As Harari himself notes, “Sapiens evolved in the African savannah tens of thousands of years ago, and their algorithms are just not built to handle twenty-first century data flows” (388). And here we come to the key to understanding the profundity, and perhaps even the inevitability of the Semantic Apocalypse: intentional cognition turns on cues which turn on ecological invariants that technology is even now rendering plastic. The issue here, in other words, isn’t so much a matter of ideological obsolescence as cognitive habitat destruction, the total rewiring of the neglected background upon which intentional cognition depends.

The thing people considering the future impact of technology need to pause and consider is that this isn’t any mere cultural upheaval or social revolution, this is an unprecedented transformation in the history of life on this planet, the point when the evolutionary platform of behaviour, morphology, becomes the product of behaviour. Suddenly a system that leveraged cognitive capacity via natural selection will be leveraging that capacity via neural selection—behaviourally. A change so fundamental pretty clearly spells the end of all ancestral ecologies, including the cognitive. Humanism is ‘disintegrating from within’ because intentional cognition itself is beginning to founder. The tsunami of information thundering above the shores of humanism is all deep information, information regarding what we evolved to ignore—and therefore trust. Small wonder, then, that it scuttles intentional problem-solving, generates discursive crash spaces that only philosophers once tripped into.

The more the mechanisms behind learning impediments are laid bare, the less the teacher can attribute performance to character, the more they are forced to adopt a clinical attitude. What happens when every impediment to learning is laid bare? Unprecedented causal information is flooding our institutions, removing more and more behaviour from the domain of character. Why? Because character judgments always presume individuals could have done otherwise, and presuming individuals could have done otherwise presumes that we neglect the actual sources of behaviour. Harari brushes this thought on a handful of occasions, writing, most notably:

“In the eighteenth century Homo sapiens was like a mysterious black box, whose inner workings were beyond our grasp. Hence when scholars asked why a man drew a knife and stabbed another to death, an acceptable answer said: ‘Because he chose to…’” 282

But he fails to see the systematic nature of the neglect involved, and therefore the explanatory power it affords. Our ignorance of ourselves, in other words, determines not simply the applicability, but the solvency of intentional cognition as well. Intentional cognition allowed our ancestors to navigate opaque or ‘black box’ social ecologies. The role causal information plays in triggering intuitions of exemption is tuned to the efficacy of this system overall. By and large our ancestors exempted those individuals in those circumstances that best served their tribe as a whole. However haphazardly, moral intuitions involving causality served some kind of ancestral optimization. So when actionable causal information regarding our behaviour becomes available, we have no choice but to exempt those behaviours, no matter what kind of large scale distortions result. Why? Because it is the only moral thing to do.

Welcome to crash space. We know this is crash space as opposed to, say, scientifically informed enlightenment (the way it generally feels) simply by asking what happens when actionable causal information regarding our every behaviour becomes available. Will moral judgment become entirely inapplicable? For me, the free will debate has always been a paradigmatic philosophical crash space, a place where some capacity always seems to apply, yet consistently fails to deliver solutions because it does not. We evolved to communicate behaviour absent information regarding the biological sources of behaviour: is it any wonder that our cause-neglecting workarounds cannot square with the causes they work around? The growing institutional challenges arising out of the medicalization of character turn on the same cognitive short-circuit. How can someone who has no choice be held responsible?

Even as we drain the ignorance intentional cognition requires from our cognitive ecologies, we are flooding them with AI, what promises to be a deluge of algorithms trained to cue intentional cognition, impersonate persons, in effect. The evidence is unequivocal: our intentional cognitive capacities are easily cued out of school—in a sense, this is the cornerstone of their power, the ability to assume so much on the basis of so little information. But in ecologies designed to exploit intentional intuitions, this power and versatility becomes a tremendous liability. Even now litigators and lawmakers find themselves beset with the question of how intentional cognition should solve for environments flooded with artifacts designed to cue human intentional cognition to better extract various commercial utilities. The problems of the philosophers dwell in ivory towers no more.

First we cloud the water, then we lay the bait—we are doing this to ourselves, after all. We are taking our first stumbling steps into what is becoming a global social crash space. Intentional cognition is heuristic cognition. Since heuristic cognition turns on shallow information cues, we have good reason to assume that our basic means of understanding ourselves and our projects will be incompatible with deep information accounts. The more we learn about cognition, the more apparent this becomes, the more our intentional modes of problem-solving will break down. I’m not sure there’s anything much to be done at this point save getting the word out, empowering some critical mass of people with a notion of what’s going on around them. This is what Harari does to a remarkable extent with Homo Deus, something for which we may all have cause to thank him.

Science is steadily revealing the very sources intentional cognition evolved to neglect. Technology is exploiting these revelations, busily engineering emulators to pander to our desires, allowing us to shelter more and more skin from the risk and toil of natural and social reality. Designer experience is designer meaning. Thus the likely irony: the end of meaning will appear to be its greatest blooming, the consumer curled in the womb of institutional matrons, dreaming endless fantasies, living lives of spellbound delight, exploring worlds designed to indulge ancestral inclinations.

To make us weep and laugh for meaning, never knowing whether we are together or alone.

The Death of Wilson: How the Academic Left Created Donald Trump

by rsbakker


 

People need to understand that things aren’t going to snap back into magical shape once Trump becomes archive footage. The Economist recently ran a piece on past episodes of far-right demagoguery, and though they stress the impact that politicians like Goldwater have had subsequent to their electoral losses, they imply that Trump is part of a cyclical process, essentially more of the same. Perhaps this might have been the case were this anything but the internet age. For all we know, things could skid madly out of control.

Society has been fundamentally rewired. This is a simple fact. Remember Home Improvement, how Tim would screw something up, then wander into the backyard to lay his notions and problems on his neighbour Wilson, who would only ever appear as a cap over the fence line? Tim was hands on, but interpersonally incompetent, while Wilson was bookish and wise to the ways of the human heart—as well as completely obscured save for his eyes and various caps by the fence between them.

This is a fantastic metaphor for the communication of ideas before the internet and its celebrated ability to ‘bring us together.’ Before, when you had chauvinist impulses, you had to fly them by whoever was available. Pre-internet, extreme views were far more likely to be vetted by more mainstream attitudes. Simple geography combined with the limitations of analogue technology had the effect of tamping the prevalence of such views down. But now Tim wouldn’t think of hassling Wilson over the fence, not when he could do a simple Google and find whatever he needed to confirm his asinine behaviour. Our chauvinistic impulses no longer need to run any geographically constrained social gauntlet to find articulation and rationalization. No matter how mad your beliefs, evidence of their sanity is only ever a few keystrokes away.

This has to have some kind of aggregate, long-term effect–perhaps a dramatic one. The Trump phenomenon isn’t the manifestation of an old horrific contagion following the same old linear social vectors; it’s the outbreak of an old horrific contagion following new nonlinear social vectors. Trump hasn’t changed anything, save identifying and exploiting an ecological niche that was already there. No one knows what happens next. Least of all him.

What’s worse, with the collapse of geography comes the collapse of fences. Phrases like “cretinization of the masses” are simply one Google search away as well. Before, Wilson would have been snickering behind that fence, hanging with his friends and talking about his moron neighbour, who really is a nice guy, you know, but needs help to think clearly all the same. Now the fence is gone, and Tim can finally see Wilson for the condescending, self-righteous bigot he has always been.

Did I just say ‘bigot’? Surely… But this is what Trump supporters genuinely think. They think ‘liberal cultural elites’ are bigoted against them. As implausible as his arguments are, Murray is definitely tracking a real social phenomenon in Coming Apart. A good chunk of white America feels roundly put upon, attacked economically and culturally. No bonus this Christmas. No Christmas tree at school. Why should a minimum wage retail worker think they somehow immorally benefit by dint of blue eyes and pale skin? Why should they listen to some bohemian asshole who’s both morally and intellectually self-righteous? Why shouldn’t they feel aggrieved on all sides, economically and culturally disenfranchised?

Who celebrates them? Aside from Donald Trump.


 

You have been identified as an outgroup competitor.

Last week, Social Psychological and Personality Science published a large study conducted by William Chopik, a psychologist out of Michigan State University, showing the degree to which political views determine social affiliations: it turns out that conservatives generally don’t know any Clinton supporters and liberals generally don’t know any Trump supporters. Americans seem to be spontaneously segregating along political lines.

Now I’m Canadian, which, although it certainly undermines the credibility of my observations on the Trump phenomenon in some respects, actually does have its advantages. The whole thing is curiously academic, for Canadians, watching our cousins to the south play hysterical tug-o-war with their children’s future. What’s more, even though I’m about as academically institutionalized as a human can be, I’m not an academic, and I have steadfastly resisted the tendency of the highly educated to surround themselves with people who are every bit as institutionalized—or at least smitten—by academic culture.

I belong to no tribe, at least not clearly. Because of this, I have Canadian friends who are, indeed, Trump supporters. And I’ve been whaling on them, asking questions, posing arguments, and they have been whaling back. Precisely because we are Canadian, the whole thing is theatre for us, allowing, I like to think, for a brand of honesty that rancour and defensiveness would muzzle otherwise.

When I get together with my academic friends, however, something very curious happens whenever I begin reporting these attitudes: I get interrupted. “But-but, that’s just idiotic/wrong/racist/sexist!” And that’s when I begin whaling on them, not because I don’t agree with their estimation, but because, unlike my academic confreres, I don’t hold Trump supporters responsible. I blame the academics, instead. Aren’t they the ‘critical thinkers’? What else did they think the ‘cretins’ would do? Magically seize upon their enlightened logic? Embrace the wisdom of those who openly call them fools?

Fact is, you’re the ones who jumped off the folk culture ship.

The Trump phenomenon falls into the wheelhouse of what has been an old concern of mine. For more than a decade now, I’ve been arguing that the social habitat of intellectual culture is collapsing, and that the persistence of the old institutional organisms is becoming more and more socially pernicious. Literature professors, visual artists, critical theorists, literary writers, cultural critics, intellectual historians and so on all continue acting and arguing as though this were the 20th century… as if they were actually solving something, instead of making matters worse.

See, before, when a good slice of media flushed through bottlenecks that it mostly controlled, the academic left could afford to indulge in the same kind of ingroup delusions that afflict all humans. The reason I’m always interrupted in the course of reporting the attitudes of my Trump-supporting friends is simply that, from an ingroup perspective, they do not matter.

More and more research is converging upon the notion that the origins of human cooperation lie in human enmity. Think Band of Brothers only in an evolutionary context. In the endless ‘wars before civilization’ one might expect those groups possessing members willing to sacrifice themselves for the good of their fellows would prevail in territorial conflicts against groups possessing members inclined to break and run. Morality has been cut from the hip of murder.

This thesis is supported by the radical differences in our ability to ‘think critically’ when interacting with ingroup confederates as opposed to outgroup competitors. We are all but incapable of listening, and therefore responding rationally, to those we perceive as threats. This is largely why I think literature, minimally understood as fiction that challenges assumptions, is all but dead. Ask yourself: Why is it so easy to predict that so very few Trump supporters have read Underworld? Because literary fiction caters to the likeminded, and now, thanks to the precision of the relationship between buyer and seller, it is only read by the likeminded.

But of course, whenever you make these kinds of arguments to academic liberals you are promptly identified as an outgroup competitor, and you are assumed to have some ideological or psychological defect preventing genuine critical self-appraisal. For all their rhetoric regarding ‘critical thinking,’ academic liberals are every bit as thin-skinned as Trump supporters. They too feel put upon, besieged. I gave up making this case because I realized that academic liberals would only be able to hear it coming from the lips of one of their own, and even then, only after something significant enough happened to rattle their faith in their flattering institutional assumptions. They know that institutions are self-regarding, they admit they are inevitably tarred by the same brush, but they think knowing this somehow makes them ‘self-critical’ and so less prone to ingroup dysrationalia. Like every other human on the planet, they agree with themselves in ways that flatter themselves. And they direct their communication accordingly.

I knew it was only a matter of time before something happened. Wilson was dead. My efforts to eke out a new model, to surmount cultural balkanization, motivated me to engage in ‘blog wars’ with two very different extremists on the web (both of whom would be kind enough to oblige my predictions). This experience vividly demonstrated to me how dramatically the academic left was losing the ‘culture wars.’ Conservative politicians, meanwhile, were becoming more aggressively regressive in their rhetoric, more willing to publicly espouse chauvinisms that I had assumed safely buried.

The academic left was losing the war for the hearts and minds of white America. But so long as enrollment remained steady and book sales remained strong, they remained convinced that nothing fundamental was wrong with their model of cultural engagement, even as technology assured an ever-closer match between them and those already inclined to approve of them. Only now, with Trump, are they beginning to realize the degree to which the technological transformation of their habitat has rendered them culturally ineffective. As George Saunders writes in “Who Are All These Trump Supporters?” in The New Yorker:

Intellectually and emotionally weakened by years of steadily degraded public discourse, we are now two separate ideological countries, LeftLand and RightLand, speaking different languages, the lines between us down. Not only do our two subcountries reason differently; they draw upon non-intersecting data sets and access entirely different mythological systems. You and I approach a castle. One of us has watched only “Monty Python and the Holy Grail,” the other only “Game of Thrones.” What is the meaning, to the collective “we,” of yon castle? We have no common basis from which to discuss it. You, the other knight, strike me as bafflingly ignorant, a little unmoored. In the old days, a liberal and a conservative (a “dove” and a “hawk,” say) got their data from one of three nightly news programs, a local paper, and a handful of national magazines, and were thus starting with the same basic facts (even if those facts were questionable, limited, or erroneous). Now each of us constructs a custom informational universe, wittingly (we choose to go to the sources that uphold our existing beliefs and thus flatter us) or unwittingly (our app algorithms do the driving for us). The data we get this way, pre-imprinted with spin and mythos, are intensely one-dimensional.

The first, most significant thing to realize about this passage is that it’s written by George Saunders for The New Yorker, a premier ingroup cultural authority on a premier ingroup cultural podium. On the view given here, Saunders pretty much epitomizes the dysfunction of literary culture, an academic at Syracuse University, the winner of countless literary awards (which is to say, better at impressing the likeminded than most), and, I think, clearly a genius of some description.

To provide some rudimentary context, Saunders attends a number of Trump rallies, making observations and engaging Trump supporters and protesters alike (but mostly the former) asking gentle questions, and receiving, for the most part, gentle answers. What he describes observation-wise are instances of ingroup psychology at work, individuals, complete strangers in many cases, making forceful demonstrations of ingroup solidarity and resolve. He chronicles something countless humans have witnessed over countless years, and he fears for the same reasons all those generations have feared. If he is puzzled, he is unnerved more.

He isolates two culprits in the above passage, the ‘intellectual and emotional weakening brought about by degraded public discourse,’ and more significantly, the way the contemporary media landscape has allowed Americans to ideologically insulate themselves against the possibility of doubt and negotiation. He blames, essentially, the death of Wilson.

As a paradigmatic ‘critical thinker,’ he’s careful to throw his own ‘subject position’ into the mix, to frame the problem in a manner that distributes responsibility equally. It’s almost painful to read, at times, watching him walk the tightrope of hypocrisy, buffeted by gust after gust of ingroup outrage and piety, trying to exemplify the openness he mistakes for his creed, but sounding only lyrically paternalistic in the end–at least to ears not so likeminded. One can imagine the ideal New Yorker reader, pursing their lips in empathic concern, shaking their heads with wise sorrow, thinking…

But this is the question, isn’t it? What do all these aspirational gestures to openness and admissions of vague complicity mean when the thought is, inevitably, fools? Is this not the soul of bad faith? To offer up portraits of tender humanity in extremis as proof of insight and impartiality, then to end, as Saunders ends his account, suggesting that Trump has been “exploiting our recent dullness and aversion to calling stupidity stupidity, lest we seem too precious.”

Academics… averse to calling stupidity stupid? Trump taking advantage of this aversion? Lordy.

This article, as beautiful as it is, is nothing if not a small monument to being precious, to making faux self-critical gestures in the name of securing very real ingroup imperatives. We are the sensitive ones, Saunders is claiming. We are the light that lets others see. And these people are the night of American democracy.

He blames the death of Wilson and the excessive openness of his ingroup, the error of being too open, too critically minded…

Why not just say they’re jealous because he and his friends are better looking?

If Saunders were at all self-critical, anything but precious, he would be asking questions that hurt, that cut to the bone of his aggrandizing assumptions, questions that become obvious upon asking them. Why not, for instance, ask Trump supporters what they thought of CivilWarLand in Bad Decline? Well, because the chances of any of them reading any of his work aside from “CommComm” (and only then because it won the World Fantasy Award in 2006) were virtually nil.

So then why not ask why none of these people has read anything written by him or any of his friends or their friends? Well, he’s already given us a reason for that: the death of Wilson.

Okay, so Wilson is dead, effectively rendering toothless your attempts to reach and challenge, with your fiction, those who most need to be challenged. And so you… what? Shrug your shoulders? Continue merely entertaining those whom you find the least abrasive?

If I’m right, then what we’re witnessing is so much bigger than Trump. We are tender. We are beautiful. We are vicious. And we are capable of believing anything to secure what we perceive as our claim. What matters here is that we’ve just plugged billions of stone-age brains chiselled by hundreds of millions of years of geography into a world without any. We have tripped across our technology and now we find ourselves in crash space, a domain where the transformation of our problems has rendered our traditional solutions obsolete.

It doesn’t matter if you actually are on their side or not, whatever that might mean. What matters is that you have been identified as an outgroup competitor, and that none of the authority you think your expertise warrants will be conceded to you. All the bottlenecks that once secured your universal claims are melting away, and you need to find some other way to discharge your progressive, prosocial aspirations. Think of all the sensitive young talent sifting through your pedagogical fingers. What do you teach them? How to be wise? How to contribute to their community? Or how to play the game? How to secure the approval of those just like you—and so, how to systematically alienate them from their greater culture?

So. Much. Waste. So much beauty, wisdom, all of it aimed at nowhere… tossed, among other places, into the heap of crumpled Kleenexes called The New Yorker.

Who would have thunk it? The best way to pluck the wise from the heart of our culture was to simply afford them the means to associate almost exclusively with one another, then trust to human nature, our penchant for evolving dialects and values in isolation. The edumacated no longer have the luxury of speaking among themselves for the edification of those servile enough to listen of their own accord. The ancient imperative to actively engage, to have the courage to reach out to the unlikeminded, to write for someone else, has been thrust back upon the artist. In the days of Wilson, we could trust to argument, simply because extreme thoughts had to run a gauntlet of moderate souls. Not so anymore.

If not art, then argument. If not argument, then art. Invade folk culture. Glory in delighting those who make your life possible–and take pride in making them think.

Sometimes they’re the idiot and sometimes we’re the idiot–that seems to be the way this thing works. To witness so many people so tangled in instinctive chauvinisms and cartoon narratives is to witness a catastrophic failure of culture and education. This is what Trump is exploiting, not some insipid reluctance to call stupid stupid.

I was fairly bowled over a few weeks back when my neighbour told me he was getting his cousin in Florida to send him a Trump hat. I immediately asked him if he was crazy.

“Name one Donald Trump who has done right by history!” I demanded, attempting to play Wilson, albeit minus the decorum and the fence.

Shrug. Wild eyes and a genuine smile. “Then I hope he burns it down.”

“How could you mean that?”

“I dunno, brother. Can’t be any worse than this fucking shit.”

Nothing I could say could make him feel any different. He’s got the internet.*

 

*[Note to readers: This post is receiving a great deal of Facebook traffic, and relatively little critical comment, which tells me individuals are saving their comments for whatever ingroup they happen to belong to, thus illustrating the very dynamic critiqued in the piece. Sound off! Dare to dissent in ideologically mixed company, or demonstrate the degree to which you need others to agree before raising your voice.]

The Zombie Enlightenment

by rsbakker


Understanding what comes next depends on understanding what’s going on now, which is to say, cognizing modernity. The premise, recall, is that, due to metacognitive myopia, traditional intentional vocabularies lock us into perpetual conundrums. This means understanding modernity requires some kind of post-intentional explanatory framework—we need some way to understand it in naturalistic terms. Since cognizing modernity requires cognizing the Enlightenment, this puts us on the hook for an alternative, post-intentional explanation of the processes at work—a zombie Enlightenment story.

I say ‘zombie,’ of course, as much to keep the horror of the perspective in view as to underscore the naturalistic character of the explanations. What follows is a dry-run of sorts, an attempt to sketch what has brought about this extraordinary era of accelerating transformation. Keep in mind the ludicrous speculative altitudes involved, but also remember that all such attempts to theorize macrosocial phenomena suffer this liability. I don’t think it’s so important that the case be made as that some alternative be proposed at this point. For one, the mere existence of such an account, the bare fact of its plausibility, requires that the intentionalist account for the superiority of their approach, and this, as we shall see below, can have a transformative effect on cognitive ecologies.

In zombie terms, the Enlightenment, as we think we know it, had nothing to do with the ‘power of reason’ to ‘emancipate,’ to free us from the tyranny of Kant’s ‘tutelary natures.’ This is the Myth. Likewise, Nietzsche’s Gegenaufklärung had nothing to do with somehow emancipating us from the tyrannical consequences of this emancipation. The so-called Counter-Enlightenment, or ‘postmodernism’ as it has come to be called, was a completion, or a consummation, if you wish. The antagonism is merely a perspectival artifact. Postmodernism, if anything, represents the processes characteristic of the zombie Enlightenment colonizing and ultimately overcoming various specialized fields of cultural endeavour.

To understand this one needs to understand something crucial about human nature, namely, the way understanding, all understanding, is blind understanding. The eye cannot be seen. Olfaction has no smell, just as touch has no texture. To enable knowledge, in other words, is to stand outside the circuit of what is known. A great many thinkers have transformed this observation into something both extraordinary and occult, positing all manner of inexplicable things by way of explanation, everything from transparencies to transcendentals to trace structures. But the primary reason is almost painfully mundane: the seeing eye cannot be seen simply because it is mechanically indisposed.

Human beings suffer ‘cognitive indisposition,’ or, as I like to call it, medial neglect: a ‘brain blindness’ so profound as to escape them altogether, to convince them, at every stage of their ignorance, that they could see pretty much everything they needed to see.

Now according to the Myth, the hundred million odd souls populating Europe in the 18th century shuffled about in unconscious acquiescence to authority, each generation blindly repeating the chauvinisms of the generation prior. The Enlightenment institutionalized inquiry, the asking of questions, and the asking of questions, far from merely setting up ‘choice situations’ between assertions, makes cognitive incapacity explicit. The Enlightenment, in other words, institutionalized the erosion of traditional authority, thus ‘freeing’ individuals to pursue other possible answers. The great dividend of the Enlightenment was nothing less than autonomy, the personal, political, and material empowerment of the individual via knowledge. They were blind, but now they could see–or at least so they thought.

Postmodernism, on the other hand, arose out of the recognition that inquiry has no end, that the apparent rational verities of the Enlightenment were every bit as vulnerable to delegitimization (‘deconstruction’) as the verities of the tradition that it swept away. Enlightenment critique was universally applicable, every bit as toxic to successor as to traditional claims. Enlightenment reason, therefore, could not itself be the answer, a conviction that the increasingly profound technical rationalization of Western society only seemed to confirm. The cognitive autonomy promised by Kant and his contemporaries had proven too radical, missing the masses altogether, and stranding intellectuals in the humanities, at least, with relativistic guesses. The Enlightenment deconstruction of religious narrative—the ‘death of God’—was at once the deconstruction of all absolute narratives, all foundations. Autonomy had collapsed into anomie.

This is the Myth of the Enlightenment, at least in cartoon thumbnail.

But if we set aside our traditional fetish for ‘reason’ and think of post-Medieval European society as a kind of information processing system, a zombie society, the story actually looks quite different. Far from the death of authority and the concomitant birth of a frightening, ‘postmodern autonomy,’ the ‘death of God’ becomes the death of supervision. Supervised learning, of course, refers to one of the dominant learning paradigms in artificial neural networks, one where training converges on known targets, as opposed to unsupervised learning, where training converges on unknown targets. So long as supervised cognitive ecologies monopolized European society, European thinkers were bound to run afoul of the ‘only-game-in-town effect,’ the tendency to assume claims true for the simple want of alternatives. There were gains in cognitive efficiency, certainly, but they arose adventitiously, and had to brave selection in generally unforgiving social ecologies. Pockets of unsupervised learning appear in every supervised society, in fact, but in the European case, the economic and military largesse provided by these isolated pockets assured they would be reproduced across the continent. The process was gradual, of course. What we call the ‘Enlightenment’ doesn’t so much designate the process as the point when the only-game-in-town effect could no longer be sustained among the learned classes. In all corners of society, supervised optima found themselves competing more and more with unsupervised optima—and losing. What Kant and his contemporaries called ‘Enlightenment’ simply made explicit an ecology that European society had been incubating for centuries, one that rendered cognitive processes responsive to feedback via empirical and communicative selection.
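
For readers who don’t traffic in machine learning, the borrowed distinction is easy enough to make concrete. What follows is a minimal sketch of my own (assuming Python and scikit-learn, neither of which the argument here depends on): the supervised learner is steered toward known targets, the labels, while the unsupervised learner has to converge on whatever structure it can find with no targets at all.

```python
# Minimal illustration of the supervised/unsupervised contrast borrowed above.
# Supervised learning converges on known targets (labels); unsupervised learning
# must organize the same data without being given any targets.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised: training is steered toward the known targets in y_train.
clf = LogisticRegression(max_iter=5000)
clf.fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: no targets supplied; the algorithm settles on whatever
# grouping of the data it happens to find.
km = KMeans(n_clusters=10, n_init=10, random_state=0)
km.fit(X_train)
print("cluster sizes:", sorted(int((km.labels_ == k).sum()) for k in range(10)))
```

The point of the analogy is only this: the first learner answers to an external standard, the second answers to nothing but the data, which is roughly the sense in which ‘the death of God’ becomes the death of supervision here.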

On an information processing view, in other words, the European Enlightenment did not so much free up individuals as cognitive capacity. Once again, we need to appreciate the zombie nature of this view, how it elides ethical dimensions. On this view, traditional chauvinisms represent maladaptive optima, old fixes that now generate more problems than they solve. Groups were not so much oppressed, on this account, as underutilized. What we are prone to call ‘moral progress’ in folk political terms amounts to the optimization of collective neurocomputational resources. These problematic ethical and political consequences, of course, have no bearing on the accuracy of the view. Any cultural criticism that makes ideological orthodoxy a condition of theoretical veracity is nothing more than apologia in the worst sense, self-serving rationalization. In fact, since naturalistic theories are notorious for the ways they problematize our moral preconceptions, you might even say this kind of problematization is precisely what we should expect. Pursuing hard questions can only be tendentious if you cannot countenance hard answers.

The transition from a supervised to an unsupervised learning ecology was at once a transition from a slow selecting to a rapid selecting ecology. One of the great strengths of unsupervised learning, it turns out, is blind source separation, something your brain wonderfully illustrates for you every time you experience the famed ‘cocktail party effect.’ Artificial unsupervised learning algorithms, of course, allow for the causal sourcing of signals in a wide variety of scientific contexts. Causal sourcing amounts to identifying causes, which is to say, mechanical cognition, which in turn amounts to behavioural efficacy, the ability to remake environments. So far as behavioural efficacy cues selection, then, we suddenly find ourselves with a social ecology (‘science’) dedicated to the accumulation of ever more efficacies—ever more power over ourselves and our environments.
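
Blind source separation can be sketched just as briefly. Again, this is only an illustration of mine, assuming scikit-learn’s FastICA and two synthetic signals standing in for the voices at the party; the algorithm is handed only the mixtures, never the sources, and still manages to pull them apart.

```python
# Minimal sketch of blind source separation (the 'cocktail party' problem):
# recover two mixed signals without ever being told what the sources were.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                       # source 1: a sinusoid
s2 = np.sign(np.sin(3 * t))              # source 2: a square wave
S = np.c_[s1, s2] + 0.05 * rng.standard_normal((2000, 2))

A = np.array([[1.0, 0.5], [0.5, 2.0]])   # mixing matrix, unknown to the listener
X = S @ A.T                              # the observed mixtures ('the party')

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)         # unsupervised: no targets supplied
print(recovered.shape)                   # (2000, 2): the estimated sources
```

Nothing in the argument hangs on this particular algorithm; it is simply a cheap way to show unsupervised ‘causal sourcing’ actually working.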

Power begets power; efficiency, efficiency. Human ecologies were not only transformed, they were transformed in ways that facilitated transformation. Each new optimization selected and incorporated generated ecological changes, social or otherwise, changes bearing on the efficiency of previous optimizations. And so the shadow of maladaptation, or obsolescence, fell across all existing adaptations, be they behavioural or technological.

The inevitability of maladaptation, of course, merely expresses the contingency of ecology, the fact that all ecologies change over time. In ancestral (slow selecting) ecologies, the information required to cognize this process was scarce to nonexistent: the only game in town effect—the assumption of sufficiency in the absence of alternatives—was all but inevitable. Given the way cognitive invariance cues cognitive stability, the fact that we can trust our inheritance, the spectre of accelerating obsolescence could only represent a threat.

“Expect the unexpected,” a refrain that only modernity could abide, wonderfully recapitulates, I think, the inevitability of postmodernism. Cognitive instability became the only cognitive stability, the only humanistic ‘principle’ remaining. And thus the great (perhaps even perverse) irony of philosophical modernity: the search for stability in difference, and the development, across the humanities, of social behaviours (aesthetic or theoretical) bent on rendering existing forms obsolete.

Rather than wait for obsolescence to arise out of ecological transformation, many began forcing the issue, isolating instances of the only game in town effect in various domains of aesthetic and theoretical behaviour, and adducing alternatives in an attempt to communicate their obsolescence. Supervised or ‘traditional’ ecologies readily broke down. Unsupervised learning ecologies quickly became synonymous with cognitive stability—and more attractive for it. The scientific fetish for innovation found itself replicated in humanistic guise. Despite the artificial nature of this process, the lack of any alternative account of semantic instability gave rise to a new series of only game in town effects. What had begun as an unsupervised exploration of solution spaces quickly lapsed into another supervised ecology. Avant-garde and post-structuralist zombies adapted to exploit microsocial ecologies they themselves had fashioned.

The so-called ‘critique of Enlightenment reason,’ whether implicit in aesthetic behaviour or explicit in theoretical behaviour, demonstrates the profundity of medial neglect, the blindness of zombie components to the greater machinery compelling them. The Gegenaufklärung merely followed through on the actual processes of ‘ratcheting ecological innovation’ responsible, undermining, as it did, the myths that had been attached to those processes in lieu of actual understanding. In communicating the performative dimension of ‘reason’ and the irrationality of Enlightenment rationality, postmodernism cleared a certain space for post-intentional thinking, but little more. Otherwise it is best viewed as an inadvertent consummation of a logic it can only facilitate and never ‘deconstruct.’

Our fetish for knowledge and innovation remain. We have been trained to embrace an entirely unknown eventuality, and that training has been supervised.

The Discursive Meanie

by rsbakker

So I went to see Catherine Malabou speak on the relation between deep history, consciousness and neuroscience last night. As she did in her Critical Inquiry piece, she argued that some new conceptuality was required to bridge the natural historical and the human, a conceptuality that neuroscience could provide. When I introduced myself to her afterward, she recognized my name, said that she had read my post, “Malabou, Continentalism, and New Age Philosophy.” When I asked her what she thought, she blushed and told me that she thought it was mean.

I tried to smooth things over, but for most people, I think, expressing aggression in interpersonal exchanges is like throwing boulders tied to their waist. Hard words rewrite communicative contexts, and it takes the rest of the brain several moments to catch up. Once she tossed her boulder it was only a matter of time before the rope yanked her away. Discussion over.

I appreciate that I’m something of an essayistic asshole, and that academics, adapted to genteel communicative contexts as they are, generally have little experience with, let alone stomach for, the more bruising environs of the web. But then the near universal academic tendency to take the path of least communicative resistance, to foster discursive ingroups, is precisely the tendency Three Pound Brain is dedicated to exposing. The problem, of course, is that cuing people to identify you as a threat pretty much guarantees they will be unable to engage you rationally, as was the case here. Malabou had dismissed me, and so my arguments simply followed.

How does one rattle ingroup assumptions as an outgroup competitor, short of disguising oneself as an ingroup sympathizer, that is? Interesting conundrum, that. I suppose if I had more notoriety, they would feel compelled to engage me…

Is it time to rethink my tactics?

More Disney than Disney World: Semiotics as Theoretical Make-believe (II)

by rsbakker

III: The Gilded Stage

We are one species among 8.7 million, organisms embedded in environments that will select us the way they have our ancestors for 3.8 billion years running. Though we are (as a matter of empirical fact) continuous with our environments, the information driving our environmental behaviour is highly selective. The selectivity of our environmental sensitivities means that we are encapsulated, both in terms of the information available to our brain, and in terms of the information available for consciousness. Encapsulation simply follows from the finite, bounded nature of cognition. Human cognition is the product of ancestral human environments, a collection of good enough fixes for whatever problems those environments regularly posed. Given the biological cost of cognition, we should expect that our brains have evolved to derive as much information as possible from whatever signals are available, to continually jump to reproductively advantageous conclusions. We should expect to be insensitive to the vast majority of information in our environments, to neglect everything save information that had managed to get our ancestors born.

As it turns out, shrewd guesswork carried the cognitive day. The correlate of encapsulated information access, in other words, is heuristic cognitive processing, a tendency to always see more than there really is.

So consider the streetscape from above once again:

[Photo: Disney’s faux New York streetscape, viewed head-on]

This looks like a streetscape only because the information provided generally cues the existence of hidden dimensions, which in this case simply do not exist. Since the cuing is always automatic and implicit, you just are looking down a street. Change your angle of access and the illusion of hidden dimensions—which is to say, reality—abruptly evaporates. The impossible New York skyline is revealed as counterfeit.

[Photo: the same streetscape viewed from the side, revealing the flat facades]

Let’s call a stage any environment that reliably cues the cognition of alternate environments. On this definition, a stage could be the apparatus of a trapdoor spider, say, or a nest parasitized by a cuckoo, or a painting, or an epic poem, or yes, Disney World—any environment that reliably triggers the cognition of some environment other than the environment actually confronting some organism.

As the inclusion of the spider and the cuckoo should suggest, a stage is a biological phenomenon, the result of some organism cognizing one environment as another environment. Stages, in other words, are not semantic. It is simply the case that beetles sensing environments absent spiders will blunder into trapdoor spiders. It’s simply the case that some birds, sensing chicks, will feed those chicks, even if one of them happens to be a cuckoo. It is simply the case that various organisms exploit the cognitive insensitivities of various other organisms. One need not ascribe anything so arcane as ‘false beliefs’ to birds and beetles to make sense of their exploitation. All they need do is function in a way typically cued by one family of (often happy) environments in a different (often disastrous) environment.

Stages are rife throughout the natural world simply because biological cognition is so expensive. All cognition can be exploited because all cognition is bounded, dependent on taking innumerable factors for granted. Probabilistic guesses have to be made always and everywhere; such are the exigencies of survival and reproduction. Competing species need only happen upon ways to trigger those guesses in environments reproductively advantageous to them, and selection will pace out a new niche, a position in what might be called manipulation space.

The difficulty with qualifying a stage as a biological phenomenon, however, is that I included intentional artifacts such as narratives, paintings, and amusement parks as examples of stages above. The problem with this is that no one knows how to reconcile the biological with the intentional, how to fit meaning into the machinery of life.

And yet, as easy as it is to anthropomorphize the cuckoo’s ‘treachery’ or the trapdoor spider’s ‘cunning’—to infuse our biological examples with meaning—it seems equally easy to ‘zombify’ narrative or painting or Disney World. Hearing the Iliad, for instance, is a prodigious example of staging, insofar as it involves the serial cognition of alternate environments via auditory cues embedded in an actual, but largely neglected, environment. One can easily look at the famed cave paintings of Chauvet, say, as a manipulation of visual cues that automatically triggers the cognition of absent things, in this case, horses:

[Image: the horse panel from the Chauvet cave]

But if narrative and painting are stages so far as ‘cognizing alternate environments’ goes, the differences between things like the Iliad or Chauvet and things like trapdoor spiders and cuckoos are nothing less than astonishing. For one, the narrative and pictorial cuing of alternative environments is only partial; the ‘alternate environment’ is entertained as opposed to experienced. For another, the staging involved in the former is communicative, whereas the staging involved in the latter is not. Narratives and paintings mean things; they possess ‘symbolic significance,’ or ‘representational content,’ whereas the predatory and parasitic stages you find in the natural world do not. And since meaning resists biological explanation, this strongly suggests that communicative staging resists biological explanation.

But let’s press on, daring theorists that we are, and see how far our ‘zombie stage’ can take us. The fact is, the ‘manipulation space’ intrinsic to bounded cognition affords opportunities as well as threats. In the case of Chauvet, for instance, you can almost feel the wonder of those first artists discovering the relations between technique and visual effect, ways to trick the eye into seeing what was not there there. Various patterns of visual information cue cognitive machinery adapted to solve environments absent those environments. Flat surfaces become windows.

Let’s divvy things up differently, look at cognition and metacognition in terms of multiple channels of information availability versus cognitive capacity. On this account, staging need not be complete: as with Chauvet, the cognition of alternate environments can be partial, localized within the present environment. And as with Chauvet, this embedded staging can be instrumentalized, exploited for various kinds of effects. Just how the cave paintings at Chauvet were used will always be a matter of archaeological speculation, but this in itself tells us something important about the kind of stage we’re now talking about: namely, their specificity. We share the same basic cognitive mechanisms as the original creators and consumers of the Horses, for instance, but we share nothing of their individual histories. This means the stage we step onto encountering them is bound to differ, perhaps radically, from the stage they stepped onto encountering them in the Upper Paleolithic. Since no individuals share precisely the same history, this means that all embedded stages are unique in some respect.

The potential evolutionary value of embedded stages, the kind of ‘cognitive double-vision’ peculiar to humans, seems relatively clear. If you can draw a horse you can show a fellow hunter what to look for, what direction to approach it, where to strike with a spear, how to carve the joints for efficient transportation, and so on. Embedding, in other words, allows organisms to communicate cognitive relationships to actual environments by cuing the cognition of that environment absent that environment. Embedding also allows organisms to communicate cognitive relationships to nonexistent environments as well. If you can draw a cave bear, you can just as easily deceive as teach a potential competitor. And lastly, embedding allows organisms to game their own cognitive systems. By experimenting with patterns of visual information, they can trigger a wide variety of different responses, triggering wonder, lust, fear, amusement, and so on. The cave paintings at Chauvet include what is perhaps the oldest example of pictorial ‘porn’ (in this case, a vulva formed by a bull overlapping a lion) for a reason.

[Image: the Chauvet figure described above, a vulva formed by overlapping bull and lion]

Humans, you could say, are the staging animal, the animal capable of reorganizing and coordinating their cognitive comportments via the manipulation of available information into cues, those patterns prone to trigger various heuristic systems ‘out of school.’ Research into episodic memory reveals an intimate relation between the constructive (as opposed to veridical) nature of episodic memory and the ability to imagine future environments. Apparently the brain does not so much record events as it ransacks them, extracting information strategic to solving future environments. Nothing demonstrates the profound degree to which the brain is invested in strategic staging better than the default, or task-negative, network. Whenever we find ourselves disengaged from some ongoing task, our brains, far from slowing down, switch modes and begin processing alternate, typically social, environments. We ‘daydream,’ or ‘ruminate,’ or ‘fantasize,’ activities almost as metabolically expensive as performing focussed tasks. The resting brain is a staging brain—a story-telling brain. It has literally evolved to cue and manipulate its own cognitive systems, to ‘entertain’ alternate environments, laying down priors in the absence of genuine experience to better manage surprise.

Language looms large over all this, of course, as the staging device par excellence. Language allows us to ‘paint a picture,’ or cue various cognitive systems, at any time. Via language, multiple humans can coordinate their behaviours to provide a single solution; they can engage their environments at ever more strategic joints, intervene in ways that reliably generate advantageous outcomes. Via language, environmental comportments can be compared, tested as embedded stages, which is to say, on the biological cheap. And the list goes on. The upshot is that language, like cave paintings, puts human cognition at the disposal of human cognition

And—here’s the thing—while remaining utterly blind to the structure and dynamics of human cognition.

The reason for this is simple: the biological complexity required to cognize environments is simply too great to be cognized as environmental. We see the ash and pigment smeared across the stone, we experience (the illusion of) horses, and we have no access whatsoever to the machinery in between. Or to phrase it in zombie terms, humans access environmental information, ash and pigment, which cues cognitive comportments to different environmental information, horses, in the absence of any cognitive comportment to this process. In fact, all we see are horses, effortlessly and automatically; it actually requires effort to see the ash and pigment! The activated environment crowds the actual environment from the focus to the fringe. The machinery that makes all this possible doesn’t so much as dimple the margin. We neglect it. And accordingly, what inklings we have strike us as all there is.

The question of signification is as old as philosophy: how the hell do nonexistent horses leap from patterns of light or sound? Until recently, all attempts to answer this question relied on observations regarding environmental cues, the resulting experience, and the environment cued. The sign, the soul, and the signified anchored our every speculative analysis simply because, short of baffling instances of neuropathology, the machinery responsible never showed its hand.

Our cognitive comportment to signification, in other words, looked like:

[Photo: the head-on view of the faux New York streetscape again]

Which is to say, a stage.

Because we’re quite literally ‘hardwired’ into this position, we have no way of intuiting the radically impoverished (because specialized) nature of the information made available. We cannot trudge on the perpendicular to see what the stage looks like from different angles—we cannot alter our existing cognitive comportments. Thus, what might be called the semiotic stage strikes us as the environment, or anything but a stage. So profound is the illusion that the typical indicators of informatic insufficiency, the inability to leverage systematically effective behaviour, the inability to command consensus, are habitually overlooked by everyone save the ‘folk’ (ironically enough). Sign, soul, and signified could only take us so far. Despite millennia of philosophical and psychological speculation, despite all the myriad regimentations of syntax and semantics, language remains a mystery. Controversy reigns—which is to say, we as yet lack any decisive scientific account of language.

But then science has only begun the long trudge on the perpendicular. The project of accessing and interpreting the vast amounts of information neglected by the semiotic stage is just getting underway.

Since all the various competing semiotic theories are based on functions posited absent any substantial reference to the information neglected, the temptation is to assume that those functions operate autonomously, somehow ‘supervene’ upon the higher dimensional story coming out of cognitive neuroscience. This has a number of happy dialectical consequences beyond simply proofing domains against cognitive scientific encroachments. Theoretical constraints can even be mapped backward, with the assumption that neuroscience will vindicate semiotic functions, or that semiotic functions actually help clarify neuroscience. Far from accepting any cognitive scientific constraints, semioticians can assert that at least one of their multiple stabs in the dark pierces the mystery of language in the heart, and is thus implicitly presupposed in all communicative acts. Heady stuff.

Semiotics, in other words, would have you believe that either this

[Photo: the head-on view of the faux New York streetscape]

is New York City as we know it, and will be vindicated by the long cognitive neuroscientific trudge on the perpendicular, or that it’s a special kind of New York City, one possessing no perpendicular to trudge—not unlike, surprise-surprise, assumptions regarding the first-person or intentionality in general.

On this account, the functions posited are sometimes predictive, sometimes not, and even when they are predictive (as opposed to merely philosophical), they are clearly heuristic, low-dimensional ways of tracking extremely complicated systems. As such, there’s no reason to think them inexplicably—magically—‘autonomous,’ and good reason to suppose why it might seem that way. Sign, soul, and signified, the blinkered channels that have traditionally informed our understanding of language, appear inviolable precisely because they are blinkered—since we cognize via those channels, the limits of those channels cannot be cognized: the invisibility of the perpendicular becomes its impossibility.

These are precisely the kinds of errors we should expect speaking animals to make in the infancy of their linguistic self-understanding. You might even say that humans were doomed to run afoul of ‘theoretical hyperrealities’ like semiotics, discursive Disney Worlds…

Except that in Disney World, of course, the stages are advertised as stages, not inescapable or fundamental environments. Aside from policy-level stuff, I have no idea how Disney World or the Disney corporation systematically contributes to the subversion of social justice, and neither, I would submit, does any semiotician living. But I do think I know how to fit Disney into a far larger, and far more disturbing set of trends that have seized society more generally. To see this, we have to leave semiotics behind…