The Zuckerberg Illusion
So the special issue of Wired Magazine edited by Barack Obama has just come out, and I wanted to draw attention to Mark Zuckerberg’s response to the President’s challenge to “ensure that artificial intelligence helps rather than hurts us.” Somehow, someway, this issue has to move away from the ‘superintelligence’ debate and toward a collective conversation on the impact of AI on human cognitive ecology. Zuckerberg’s response betrays a tragic lack of understanding from the man who, arguably, has already transformed our social cognitive ecologies more radically than any other individual in the history of the human race. If anyone knows some way of delivering this message from steerage up to the bridge, forward the bloody thing, because the combination of this naivete with the growing ubiquity of AI is becoming, ahem, a little scary. The more baked-in the existing trends become, the harder the hard decisions will become.
Zuckerberg begins his response to Obama’s challenge sounding very much like a typical American industrialist: only the peculiarity of his product makes his claim remarkable.
“People have always used technology as a lever to improve lives and increase productivity. But at the beginning of every cycle of invention, there’s a temptation to focus on the risks that come with a new technology instead of the benefits it will bring.
Today we are seeing that happen with artificial intelligence.”
What he wants to do in this short piece is allay the fears that have arisen regarding AI. His strategy for doing so is to show how our anxieties are the same overblown anxieties that always occasion the introduction of some new technology. These too, he assures us, will pass in time. Ultimately, he writes:
“When people come up with doomsday scenarios about AI, it’s important to remember that these are hypothetical. There’s little basis outside science fiction to believe they will come true.”
Of course, one need only swap out ‘AI’ with ‘industrialization’ to appreciate that not all ‘doomsday scenarios’ are equal. By any measure, the Anthropocene already counts as one of the great extinction events to befall the planet, an accomplished ‘doomsday’ for numerous species, and an ongoing one for many others. The reason for this ongoing extinction has to do with the supercomplicated systems of interdependency comprising our environments. Everything is adapted to everything else. Like pouring sand into a gas tank, introducing unprecedented substances and behaviours (such as farming) into existing ecologies progressively perturbs these systems, until eventually they collapse, often taking down other systems depending on them.
Malthus was the first on record to predict the possibility of natural environmental collapse, in the 18th century, but the environmental movement only really got underway as the consequences of industrialization became evident in the 19th century. The term ‘pollution,’ which during the Middle Ages meant defilement, took on its present meaning of “unnatural substance in natural systems” at the turn of the 20th century.
Which raises the question: Why were our ancestors so long in seeing the peril presented by industrialization? Well, for one, the systems comprising ecologies are all, in some way or another, survivors of prior ecological collapses. Ecologies are themselves adaptive systems, exhibiting remarkable resilience in many cases—until they don’t. The supercomplicated networks of interdependence constituting environments only became obvious to our forebears when they began really breaking down. Once one understands the ecological dimension of natural environments, the potentially deleterious impact of ecologically unprecedented behaviours and materials becomes obvious. If the environmental accumulation of industrial by-products constitutes an accelerating trend, then far from a science fiction premise, the prospect of accelerating ecological degradation becomes a near certainty, and the management of ecological consequences an absolute necessity.
Which raises a different, less obvious question: Why would these networks of ecological interdependence only become visible to our ancestors after they began breaking down? Why should humans initially atomize their environments, and only develop complex, relational schemes after long, hard experience? The answer lies in the ecological nature of human cognition, the fact that we evolved to take as much ‘for granted’ as possible. The sheer complexity of the deep connectivity underwriting our surrounding environments renders them computationally intractable, and thus utterly invisible to us. (This is why the ecology question probably seemed like such an odd thing to ask: it quite literally goes without saying that we had to discover natural ecology). So cognition exploits the systematic correlations between what information is available and the systems requiring solution to derive ecologically effective behaviours. Our penchant for atomizing and essentializing our environments enables us to cognize ecologies despite remaining blind to them.
What does any of this have to do with Zuckerberg’s optimistic argument for plowing more resources into the development of AI? Well, I think it’s pretty clear he’s labouring under the very same illusion as the early industrialists: the illusion of acting in a grand, vacant arena, a place where unintended consequences magically dissipate instead of radiate.
The question, recall, is whether doomsday scenarios about AI warrant widespread alarm. It seems pretty clear, and I’m sure Zuckerberg would agree, that doomsday scenarios about industrialization do warrant widespread alarm. So what if what Zuckerberg and everyone else is calling ‘AI’ actually constitutes a form of cognitive industrialization? What will be the cognitive ecological impact of such an event?
We know that human cognition is thoroughly heuristic, so we know that human cognition is thoroughly ecological. The reason Sherry Turkle and Deirdre Barrett and others worry about the ease with which human social cognition can be hacked turns on the fact that human social cognition is ecological through and through, dependent on stable networks of interdependence. The fact is human sociocognition evolved to cope with other human intelligences, to solve on the basis of cues systematically correlated to other human brains, not to supercomputers mining vast data sets. Take our love of flattery. We evolved in ecologies where our love of flattery is balanced against the inevitability of criticism. Ancestrally, pursuing flattery amounted to overcoming—i.e., answering—criticism. We generally hate criticism, but given our cognitive ecology, we had no choice but ‘to take our medicine.’
And this is but one of countless examples.
The irony is that Zuckerberg is deeply invested in researching human cognitive ecology: computer scientists (like Hector Levesque) can rail against ‘bag of tricks’ approaches to cognition, but they will continue to be pursued because behaviour cuing behaviour is all that’s required (for humans or machines, I think). Now Zuckerberg, I’m sure, sees himself exclusively in the business of providing value for consumers, but he needs to understand how his dedication to enable and delight automatically doubles as a ruthless quest to demolish human cognitive ecology. Rewriting environments ‘to make the user experience more enjoyable’ is the foundation of all industrial enterprise, of all ecological destruction, and the AI onslaught is nothing if not industrial.
Deploying systems designed to cue human social cognition in the absence of humans is pretty clearly a form of deception. Soon, every corporate website will be a friend… soulful, sympathetic, utterly devoted to our satisfaction, and yet inhuman, designed to exploit, knowing us better than any human could hope to, ourselves included. And as these inhuman friends become cheaper and cheaper, we will be deluged by them, ‘junk intelligences,’ each of them so much wittier, so much wiser, than any mundane human can hope to appear.
“At a very basic level, I think AI is good and not something we should be afraid of,” Zuckerberg concludes. “We’re already seeing examples of how AI can unlock value and improve the world. If we can choose hope over fear—and if we advance the fundamental science behind AI—then this is only the beginning.”