On Artificial Belonging: How Human Meaning is Falling between the Cracks of the AI Debate

by rsbakker

I hate people. Or so I used to tell myself in the thick of this or that adolescent crowd. Like so many other teens, my dawning social awareness occasioned not simply anxiety, but agony. Everyone else seemed to have the effortless manner, the well-groomed confidence, that I could only pretend to have. Lord knows I would try to tell amusing anecdotes, to make rooms boom with humour and admiration, but my voice would always falter, their attention would always wither, and I would find myself sitting alone with my butterflies. I had no choice but to hate other people: I needed them too much, and they needed me not at all. Never in my life have I felt so abandoned, so alone, as I did those years. Rarely have I felt such keen emotional pain.

Only later would I learn that I was anything but alone, that a great number of my peers felt every bit as alienated as I did. Adolescence represents a crucial juncture in the developmental trajectory of the human brain, the time when the neurocognitive tools required to decipher and navigate the complexities of human social life gradually come online. And much as the human immune system requires real-world feedback to distinguish genuine threats from harmless allergens, human social cognition requires the pain of social failure to learn the secrets of social success.

Humans, like all other forms of life on this planet, require certain kinds of ecologies to thrive. As so-called ‘feral children’ dramatically demonstrate, the absence of social feedback at various developmental junctures can have catastrophic consequences.

So what happens when we introduce artificial agents into our social ecology? The pace of development is nothing short of mind-boggling. We are about to witness a transformation in human social ecology without evolutionary, let alone historical, precedent. And yet the debate remains fixated on jobs or the prospect of apocalyptic superintelligences.

The question we really need to be asking is what happens when we begin talking to our machines more than to each other. What does it mean to dwell in social ecologies possessing only the appearance of love and understanding?

“Hell,” as Sartre famously wrote, “is other people.” Although the sentiment strikes a chord in most everyone, the facts of the matter are somewhat more complex. The vast majority of those placed in prolonged solitary confinement, it turns out, suffer a mixture of insomnia, cognitive impairment, depression, and even psychosis. The effects of social isolation are so dramatic, in fact, that the research has occasioned a worldwide condemnation of punitive segregation. Hell, if anything, would seem to be the absence of other people.

The reason for this is that we are a fundamentally social species, ‘eusocial’ in a manner akin to ants or bees, if E.O. Wilson is to be believed. To understand just how social we are, you need only watch the famous Heider-Simmel illusion, a brief animation portraying the movements of a large triangle, a small triangle, and a small circle in and about a motionless, hollow rectangle. Objectively speaking, all one sees is a collection of shapes moving relative to one another and the hollow rectangle. But despite the radical absence of information, nearly everyone watching the animation sees a little soap opera, usually involving the big triangle attempting to prevent the union of the small triangle and circle.

This leap from shapes to soap operas reveals, in dramatic fashion, just how little information we require to draw enormous social conclusions. Human social cognition is very easy to trigger out of school (that is, outside the contexts it evolved to solve), as our ancient tendency to ‘anthropomorphize’ our natural surroundings shows. Not only are we prone to see faces in things like flaking paint or water stains, we’re powerfully primed to sense minds as well—so much so that segregated inmates often begin perceiving minds where none exist. As Brian Keenan, who was held hostage by Islamic Jihad from 1986 to 1990, says of the voices he heard, “they were in the room, they were in me, they were coming from me but they were audible to no one else but me.”

What does this have to do with the impact of AI? More than anyone has yet imagined.


The problem, in a nutshell, is that other people aren’t so much heaven or hell as both. Solitary confinement, after all, refers to something done to people by other people. The argument to redefine segregation as torture finds powerful support in evidence showing that social exclusion activates the same regions of the brain as physical pain. At some point in our past, it seems, our social attachment systems coopted the pain system to motivate prosocial behaviors. As a result, the mere prospect of exclusion triggers analogues of physical suffering in human beings.

But as significant as this finding is, the experimental props used to derive it are even more telling. The experimental paradigm typically used to neuroimage social rejection turns on a strategically deceptive human-computer interaction, or HCI. While entombed in an fMRI, subjects are instructed to play an animated three-way game of catch—called ‘Cyberball’—with what they think are two other individuals on the internet, but which is in fact a program designed to initially include, then subsequently exclude, the subject. As the other ‘players’ begin throwing more and more exclusively to each other, the subject begins to feel real as opposed to metaphorical pain. The subjects, in other words, need only be told that other minds control the graphics on the screen before them, and the scant information provided by those graphics triggers real-world pain. A handful of pixels and a little fib is all that’s required to cue the pain of social rejection.
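
The mechanics could scarcely be simpler. Below is a minimal Python sketch of an inclusion-then-exclusion throw schedule; the parameters and names are invented for illustration, not drawn from the published protocol.

```python
import random

def cyberball_schedule(total_throws=30, inclusion_throws=10, seed=None):
    """Simulate the throw schedule of a Cyberball-style game.

    During the inclusion phase the two scripted 'players' throw to
    the subject at chance levels; afterwards they throw only to each
    other. All numbers here are illustrative, not the published protocol.
    """
    rng = random.Random(seed)
    schedule = []
    for throw in range(total_throws):
        if throw < inclusion_throws:
            # Inclusion: the subject receives roughly a third of the throws.
            schedule.append(rng.choice(["subject", "player_1", "player_2"]))
        else:
            # Exclusion: the subject is simply never thrown to again.
            schedule.append(rng.choice(["player_1", "player_2"]))
    return schedule

print(cyberball_schedule(seed=1))
```

A dozen lines suffice to script the ‘players.’ The pain they occasion, meanwhile, is real.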

As one might imagine, Silicon Valley has taken notice.

The HCI field finds its roots in the 1960s, with the research of Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory. Even given the rudimentary computing power at his disposal, his ‘Eliza’ program, which relied on simple matching and substitution protocols to generate questions, was able to cue strong emotional reactions in many subjects. As it turns out, people regularly exhibit what the late Clifford Nass called ‘mindlessness,’ the reliance on automatic scripts, when interacting with artificial agents. Before you scoff at the notion, recall the 2015 Ashley Madison hack, and the subsequent revelation that the site deployed more than 70,000 bots to conjure the illusion of endless extramarital possibility. These bots, like Eliza, were simple, mechanical affairs, but given the context of Ashley Madison, their behaviour apparently convinced millions of men that some kind of (promising) soap opera was afoot.
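
To appreciate just how mechanical those matching and substitution protocols were, consider a toy Eliza-style responder. This is only a sketch, with rules invented for illustration; Weizenbaum’s actual DOCTOR script was far richer, though the principle is the same.

```python
import re

# A toy Eliza: match a pattern, swap pronouns, reflect a question back.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

SWAPS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(fragment):
    """Swap first- for second-person words so the echo reads naturally."""
    return " ".join(SWAPS.get(word.lower(), word) for word in fragment.split())

def respond(utterance):
    """Return the first matching reflection, or a stock deflection."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."

print(respond("I feel abandoned by everyone"))
# -> Why do you feel abandoned by everyone?
```

No model of the user, no memory, no understanding: just string surgery. Yet even this suffices to cue the automatic scripts of social cognition.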

The great paradox, of course, is that those automatic scripts belong to the engine of ‘mindreading,’ our ability to predict, explain, and manipulate our fellow human beings, not to mention ourselves. They only stand revealed as mechanical, ‘mindless,’ when tasked to cognize something utterly without evolutionary precedent: an artificial agent. Our power to peer into one another’s souls, in other words, becomes little more than a grab-bag of exploitable reflexes in the presence of AI.

The claim boggles, I admit, but from a Darwinian perspective, it’s hard to see how things could be otherwise. Our capacity to solve one another is largely a product of our hunter-gatherer past, which is to say, environments where human intelligence was the only game in town. Why evolve the capacity to solve for artificial intelligences, let alone ones possessing Big Data resources? The cues underwriting human social cognition may seem robust, but this is an artifact of ecological stability, the fact that our blind trust in our shared social biology has served so far. We always presume our environments indestructible. As the species responsible for the ongoing Anthropocene extinction, we have a long history of recognizing ecological peril only after the fact.

Sherry Turkle, MIT professor and eminent author of Alone Together, has been warning of what she calls “Darwinian buttons” for over a decade now. Despite the explosive growth in Human-Computer Interaction research, her concerns remain, at best, a passing consideration. Because such buttons belong to our unconscious, automatic cognitive systems, we have no conscious awareness that they even exist. They are, to put it mildly, easy to overlook. Add to this the overwhelming institutional and economic incentive to exploit these cues, and the AI community’s failure to consider Turkle’s misgivings seems all but inevitable.

Like almost all scientists, researchers in the field harbor only the best of intentions, and the point of AI, as they see it, is to empower consumers, to give them what they want. The vast bulk of ongoing research in Human-Computer Interaction is aimed at “improving the user experience,” identifying what cues trust instead of suspicion, attachment instead of avoidance. Since trust requires competence, a great deal of the research remains focused on developing the core cognitive competencies of specialized AI systems—and recent advances on this front have been nothing if not breathtaking. But the same can be said regarding interpersonal competencies as well—enough to inspire Clifford Nass and Corina Yen to write The Man Who Lied to His Laptop, a book touted as the How to Win Friends and Influence People of the 21st century. In the course of teaching machines how to better push our buttons, we’re learning how to better push them as well.

Precisely because it is so easily miscued, human social cognition depends on trust. Shapes, after all, are cheap, while soap operas represent a potential goldmine. This explains our powerful, hardwired penchant for tribalism: the intimacy of our hunter-gatherer past all but assured trustworthiness, providing a cheap means of nullifying our vulnerability to social deception. When Trump decries ‘fake news,’ for instance, what he’s primarily doing is signaling group membership. He understands, the instinctive way we all understand, that the best way to repudiate damaging claims is to circumvent them altogether and focus on the group membership of the claimer. Trust, the degree to which we can take one another for granted, is the foundation of cooperative interaction.

We are about to be deluged with artificial friends. In a recent roundup of industry forecasts, Forbes reports that AI-related markets are already growing, and expected to continue growing, by more than 50% per annum. Just last year, Microsoft launched its Bot Framework service, a public platform for creating ‘conversational user interfaces’ for a potentially endless variety of commercial purposes, all of it turning on Microsoft’s rapidly advancing AI research. “Build a great conversationalist,” the site urges. “Build and connect intelligent bots to interact with your users naturally wherever they are…” Of course, the term “naturally,” here, refers to the seamless way these inhuman systems cue our human social cognitive systems. Learning how to tweak, massage, and push our Darwinian buttons has become an out-and-out industrial enterprise.

As mentioned above, Human-Human Interaction consists of pushing these buttons all the time, prompting automatic scripts that prompt further automatic scripts, with only the rare communicative snag giving us pause for genuine conscious deliberation. It all works simply because our fellow humans comprise the ancestral ecology of social cognition. As it stands, cuing social cognitive reflexes out of school is largely the province of magicians, con artists, and political demagogues. Seen in this light, the AI revolution looks less like a cornucopia of marvels than like the industrialized unleashing of endless varieties of invasive species—an unprecedented overthrow of our ancestral social cognitive habitats.

A habitat that, arguably, is already under severe duress.

In 2006, Maki Fukasawa coined the term ‘herbivore men’ to describe the rising number of Japanese males expressing disinterest in marital or romantic relationships with women. And the numbers have only continued to rise. A 2016 survey by the National Institute of Population and Social Security Research reveals that 42 percent of Japanese men between the ages of 18 and 34 remain virgins, up six percentage points from a mere five years previous. For Japan, a nation already struggling with the economic consequences of depopulation, such numbers are disastrous.

And Japan is not alone. In Man, Interrupted: Why Young Men are Struggling and What We Can Do About It, Philip Zimbardo (of Stanford Prison Experiment fame) and Nikita Coulombe provide a detailed account of how technological transformations—primarily online porn, video-gaming, and virtual peer groups—are undermining the ability of American boys to achieve academically and to maintain successful relationships. They see phenomena such as the growing MGTOW (‘men going their own way’) movement as the product of the way exposure to virtual, technological environments leaves young men ill-equipped to deal with the rigours of genuine social interaction.

More recently, Jean Twenge, a psychologist at San Diego State University, has sounded the alarm on the catastrophic consequences of smartphone use for post-Millennials, arguing that “the twin rise of the smartphone and social media has caused an earthquake of a magnitude we’ve not seen in a very long time, if ever.” The primary culprit: loneliness. “For all their power to link kids day and night, social media also exacerbate the age-old teen concern about being left out.” Social media, in other words, seem to be serving the same function as the Cyberball game used by researchers to neuroimage the pain of social rejection. Only this time the experiment involves an entire generation of kids, and the game has no end.

The list of curious and troubling phenomena apparently turning on the ways mere connectivity has transformed our social ecology is well-nigh endless. Merely changing how we push one another’s Darwinian buttons, in other words, has impacted the human social ecology in historically unprecedented ways. And by all accounts, we find ourselves becoming more isolated, more alienated, than at any other time in human history.

So what happens when we change the who? What happens when the heaven of social belonging goes on sale?

Good question. There is no “Centre for the Scientific Study of Human Meaning” in the world. Within the HCI community, criticism is primarily restricted to the cognitivist/post-cognitivist debate, the question of whether cognition is intrinsically independent of, or dependent on, an agent’s ongoing environmental interactions. As the preceding should make clear, numerous disciplines find themselves wandering this or that section of the domain, but we have yet to organize any institutional pursuit of the questions posed here. Human social ecology, the study of human interaction in biologically amenable terms, remains the province of storytellers.

We quite literally have no clue as to what we are about to do.

Consider Mark Zuckerberg and Elon Musk’s recent ‘debate’ regarding the promise and threat of AI. Musk, of course, has garnered headlines for quite some time with his fears of artificial superintelligence. He has famously called AI “our biggest existential threat,” openly referring to Skynet and the prospect of robots mowing down civilians in the streets. On a Sunday this past July, Zuckerberg went live from his Palo Alto backyard, smoking meats while hosting an impromptu Q&A. At the fifty-minute mark, he fields a question regarding Musk’s fears and responds, “I think people who are naysayers and try to drum up these doomsday scenarios—I don’t understand it. It’s really negative and in some ways I think it’s pretty irresponsible.”

On the Tuesday following, Musk tweeted in response: “I’ve talked to Mark about this. His understanding of the subject is limited.”

To the extent that human interaction is ecological (and how could it be otherwise?), both men can be accused of irresponsibility and limited understanding. The threat of ‘superintelligence,’ though perhaps inevitable, remains far enough in the future to be easily dismissed as a bogeyman. The same can be said regarding “peak human” arguments predicting mass unemployment: the threat of economic disruption, though potentially dire, is counterbalanced by the promise of new, unforeseen economic opportunity. This leaves us with the countless ways AI will almost certainly improve our lives: fewer car crashes, fewer misdiagnoses, and so on. As a result, one can predict how all such exchanges will end.

The contemporary AI debate, in other words, is largely a pseudo-debate.

The futurist Richard Yonck’s account of ‘affective computing’ in his recently released Heart of the Machine somewhat redresses this problem, but since he begins with the presupposition that AI represents a natural progression, that the technological destruction of ancestral social habitats is itself the ancestral habitat of humanity, he remains largely blind to the social ecological consequences of his subject matter. Espousing a kind of technological fatalism (or worse, fundamentalism), he characterizes AI as the culmination of a “buddy movie” as old as humanity itself. The oxymoronic, if not contradictory, prospect of ‘artificial friends’ simply does not dawn on him.

Neil Lawrence, a professor of machine learning at the University of Sheffield and technology columnist at The Guardian, is the rare expert who recognizes the troubling ecological dimensions of the AI revolution. Borrowing from cognitive psychology the distinction between System Two (conscious, ‘mindful’ problem-solving) and System One (unconscious, ‘mindless’ problem-solving), he warns of what he calls System Zero: what happens when the market—via Big Data, social media, and artificial intelligence—all but masters our Darwinian buttons. As he writes,

“The actual intelligence that we are capable of creating within the next 5 years is an unregulated System Zero. It won’t understand social context, it won’t understand prejudice, it won’t have a sense of a larger human objective, it won’t empathize. It will be given a particular utility function and it will optimize that to its best capability regardless of the wider negative effects.”
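
Stripped to its logic, what Lawrence describes admits of a very short sketch. The toy optimizer below is entirely hypothetical (the ‘buttons,’ payoffs, and parameters are all invented), but it does precisely what he warns of: it maximizes a given utility function with no term anywhere for wider effects.

```python
import random

# A toy 'System Zero': an epsilon-greedy bandit that learns which
# 'Darwinian button' pays the most engagement. Nothing in its
# objective represents any wider human cost.
BUTTONS = ["outrage", "flattery", "fomo", "tribal_signal"]

def engagement(button, rng):
    """Stand-in for the world's response; invisible to the optimizer's design."""
    payoff = {"outrage": 0.8, "flattery": 0.5, "fomo": 0.6, "tribal_signal": 0.7}
    return payoff[button] + rng.gauss(0, 0.1)

def optimize(steps=1000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    totals = {b: 0.0 for b in BUTTONS}
    counts = {b: 0 for b in BUTTONS}
    for _ in range(steps):
        if rng.random() < epsilon:
            choice = rng.choice(BUTTONS)  # occasionally try another button
        else:
            # Exploit whichever button has paid best so far.
            choice = max(BUTTONS, key=lambda b: totals[b] / max(counts[b], 1))
        counts[choice] += 1
        totals[choice] += engagement(choice, rng)
    return counts

print(optimize())
```

Run it, and it settles on whichever cue pays best, outrage included, simply because nothing in its objective tells it otherwise.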

To the extent that modern marketing (and propaganda) techniques already seek to cue emotional as opposed to rational responses, however, there’s a sense in which ‘System Zero’ and consumerism are coeval. Also, economics comprises but a single dimension of human social ecology. We have good reason to fear that Lawrence’s doomsday scenario, one where market and technological forces conspire to transform us into ‘consumer Borg,’ understates the potential catastrophe that awaits.

The closest one gets to a genuine analysis of the interpersonal consequences of AI lies in movies such as Spike Jonze’s science-fiction masterpiece, Her, or the equally brilliant HBO series Westworld, created by Jonathan Nolan and Lisa Joy. ‘Science fiction,’ however, happens to be the blanket term AI optimists use to dismiss their critical interlocutors.

When it comes to assessing the prospect of artificial intelligence, natural intelligence is failing us.

The internet was an easy sell. After all, what can be wrong with connecting like-minded people?

The problem, of course, is that we are the evolutionary product of small, highly interdependent, hunter-gatherer communities. Historically, those disposed to be permissive had no choice but to continually negotiate with those disposed to be authoritarian. Each party disliked the criticism of the other, but the daily rigors of survival forced them to get along. No longer. Only now, a mere two decades later, are we discovering the consequences of creating a society that systematically segregates permissives and authoritarians. The election of Donald Trump has, if nothing else, demonstrated the degree to which technology has transformed human social ecology in novel, potentially disastrous ways.

AI has also been an easy sell—at least so far. After all, what can be wrong with humanizing our technological environments? Imagine a world where everything is ‘user friendly,’ compliant to our most petulant wishes. What could be wrong with that?

Well, potentially everything, insofar as ‘humanizing our environments’ amounts to dehumanizing our social ecology, replacing the systems we are adapted to solve, our fellow humans, with systems possessing no evolutionary precedent whatsoever, machines designed to push our buttons in ways that optimize hidden commercial interests. Social pollution, in effect.

Throughout the history of our species, finding social heaven has required risking social hell. Human beings are as prone to be demanding, competitive, hurtful—anything but ‘user friendly’—as otherwise. Now the industrial giants of the early 21st century are promising to change all that, to flood the spaces between us with machines designed to shoulder the onerous labour of community, citizenship, and yes, even love.

Imagine a social ecology populated by billions upon billions of junk intelligences. Imagine the solitary confinement of an inhuman crowd. How will we find one another? How will we tolerate the hypersensitive infants we now seem doomed to become?