Three Pound Brain

No bells, just whistling in the dark…

Month: August 2017

On Artificial Belonging: How Human Meaning is Falling between the Cracks of the AI Debate

by rsbakker

I hate people. Or so I used to tell myself in the thick of this or that adolescent crowd. Like so many other teens, my dawning social awareness occasioned not simply anxiety, but agony. Everyone else seemed to have the effortless manner, the well-groomed confidence, that I could only pretend to have. Lord knows I would try to tell amusing anecdotes, to make rooms boom with humour and admiration, but my voice would always falter, their attention would always wither, and I would find myself sitting alone with my butterflies. I had no choice but to hate other people: I needed them too much, and they needed me not at all. Never in my life have I felt so abandoned, so alone, as I did those years. Rarely have I felt such keen emotional pain.

Only later would I learn that I was anything but alone, that a great number of my peers felt every bit as alienated as I did. Adolescence represents a crucial juncture in the developmental trajectory of the human brain, the time when the neurocognitive tools required to decipher and navigate the complexities of human social life gradually come online. And much as the human immune system requires real-world feedback to discriminate between pathogens and allergens, human social cognition requires the pain of social failure to learn the secrets of social success.

Humans, like all other forms of life on this planet, require certain kinds of ecologies to thrive. As so-called ‘feral children’ dramatically demonstrate, the absence of social feedback at various developmental junctures can have catastrophic consequences.

So what happens when we introduce artificial agents into our social ecology? The pace of development is nothing short of boggling. We are about to witness a transformation in human social ecology without evolutionary let alone historical precedent. And yet the debate remains fixated on jobs or the prospects of apocalyptic superintelligences.

The question we really need to be asking is what happens when we begin talking to our machines more than to each other. What does it mean to dwell in social ecologies possessing only the appearance of love and understanding?

“Hell,” as Sartre famously wrote, “is other people.” Although the sentiment strikes a chord in almost everyone, the facts of the matter are somewhat more complex. The vast majority of those placed in prolonged solitary confinement, it turns out, suffer a mixture of insomnia, cognitive impairment, depression, and even psychosis. The effects of social isolation are so dramatic, in fact, that the research has occasioned worldwide condemnation of punitive segregation. Hell, if anything, would seem to be the absence of other people.

The reason for this is that we are a fundamentally social species, ‘eusocial’ in a manner akin to ants or bees, if E.O. Wilson is to be believed. To understand just how social we are, you need only watch the famous Heider-Simmel illusion, a brief animation portraying the movements of a large triangle, a small triangle, and a small circle in and about a motionless, hollow rectangle. Objectively speaking, all one sees is a collection of shapes moving relative to one another and to the hollow rectangle. But despite the radical absence of information, nearly everyone watching the animation sees a little soap opera, usually involving the big triangle attempting to prevent the union of the small triangle and circle.

This leap from shapes to soap operas reveals, in dramatic fashion, just how little information we require to draw enormous social conclusions. Human social cognition is very easy to trigger out of school, as our ancient tendency to ‘anthropomorphize’ our natural surroundings shows. Not only are we prone to see faces in things like flaking paint or water stains, we’re powerfully primed to sense minds as well—so much so that segregated inmates often begin perceiving them regardless. As Brian Keenan, who was held by Islamic Jihad from 1986 to 1990, says of the voices he heard, “they were in the room, they were in me, they were coming from me but they were audible to no one else but me.”

What does this have to do with the impact of AI? More than anyone has yet imagined.


Imagine a social ecology populated by billions upon billions of junk intelligences


The problem, in a nutshell, is that other people aren’t so much heaven or hell as both. Solitary confinement, after all, refers to something done to people by other people. The argument to redefine segregation as torture finds powerful support in evidence showing that social exclusion activates the same regions of the brain as physical pain. At some point in our past, it seems, our social attachment systems coopted the pain system to motivate prosocial behaviors. As a result, the mere prospect of exclusion triggers analogues of physical suffering in human beings.

But as significant as this finding is, the experimental props used to derive it are even more telling. The paradigm typically used to neuroimage social rejection turns on a strategically deceptive human-computer interaction, or HCI. While entombed in an fMRI, subjects are instructed to play an animated three-way game of catch—called ‘Cyberball’—with what they think are two other individuals on the internet, but which is in fact a program designed to initially include, then subsequently exclude, the subject. As the other ‘players’ begin throwing the ball more and more to each other, the subject begins to feel real as opposed to metaphorical pain. The subjects, in other words, need only be told that other minds control the graphics on the screen before them, and the scant information provided by those graphics triggers real-world pain. A handful of pixels and a little fib is all that’s required to cue the pain of social rejection.
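The schedule itself is trivial to program, which is rather the point: a few lines suffice to generate the pain of exclusion. The following toy simulation of an inclusion-then-exclusion throw schedule uses hypothetical function and parameter names; it is a sketch of the paradigm's mechanics, not the actual Cyberball software.

```python
import random

def cyberball_throws(n_trials: int, inclusion_trials: int, seed: int = 0) -> list:
    """Simulate which player receives each throw in a Cyberball-style schedule.

    For the first `inclusion_trials` throws the bot players include the
    subject at a fair rate; afterwards they throw only to each other.
    """
    rng = random.Random(seed)  # seeded for a reproducible schedule
    receivers = []
    for trial in range(n_trials):
        if trial < inclusion_trials:
            # Inclusion phase: the subject is a full participant.
            receivers.append(rng.choice(["subject", "player_a", "player_b"]))
        else:
            # Exclusion phase: the subject never receives the ball again.
            receivers.append(rng.choice(["player_a", "player_b"]))
    return receivers
```

That a program this simple reliably produces real pain in the scanner is the whole force of the finding.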

As one might imagine, Silicon Valley has taken notice.

The HCI field finds its roots in the 1960s, with the research of Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory. Even given the rudimentary computing power at his disposal, his ‘Eliza’ program, which relied on simple matching and substitution protocols to generate questions, was able to cue strong emotional reactions in many subjects. As it turns out, people regularly exhibit what the late Clifford Nass called ‘mindlessness,’ the reliance on automatic scripts, when interacting with artificial agents. Before you scoff at the notion, recall the 2015 Ashley Madison hack, and the subsequent revelation that the site deployed more than 70,000 bots to conjure the illusion of endless extramarital possibility. These bots, like Eliza, were simple, mechanical affairs, but given the context of Ashley Madison, their behaviour apparently convinced millions of men that some kind of (promising) soap opera was afoot.
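Weizenbaum's actual program worked from keyword-ranked decomposition and reassembly rules (most famously in its DOCTOR script); the sketch below is only a toy illustration of the general matching-and-substitution idea, with invented rules, not a reconstruction of Eliza itself.

```python
import re

# Pronoun reflections applied to the captured fragment ("my" -> "your", etc.)
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# (pattern, response template) pairs; the first matching rule wins.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first- for second-person words so the echo sounds responsive."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Turn a statement back into a question via match-and-substitute."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default deflection when nothing matches
```

Even a handful of rules like these can sustain the illusion of attention for a surprising number of conversational turns—which is precisely the ‘mindlessness’ Nass described.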

The great paradox, of course, is that those automatic scripts belong to the engine of ‘mindreading,’ our ability to predict, explain, and manipulate our fellow human beings, not to mention ourselves. They only stand revealed as mechanical, ‘mindless,’ when tasked to cognize something utterly without evolutionary precedent: an artificial agent. Our power to peer into one another’s souls, in other words, becomes little more than a grab-bag of exploitable reflexes in the presence of AI.

The claim boggles, I admit, but from a Darwinian perspective, it’s hard to see how things could be otherwise. Our capacity to solve one another is largely a product of our hunter-gatherer past, which is to say, of environments where human intelligence was the only game in town. Why evolve the capacity to solve for artificial intelligences, let alone ones possessing Big Data resources? The cues underwriting human social cognition may seem robust, but this is an artifact of ecological stability, the fact that our blind trust in our shared social biology has served us so far. We always presume our environments indestructible. As the species responsible for the ongoing Anthropocene extinction, we have a long history of recognizing ecological peril only after the fact.

Sherry Turkle, MIT professor and eminent author of Alone Together, has been warning of what she calls “Darwinian buttons” for over a decade now. Despite the explosive growth in Human-Computer Interaction research, her concerns remain, at best, a passing consideration. Because such buttons belong to our unconscious, automatic cognitive systems, we have no conscious awareness that they even exist. They are, to put it mildly, easy to overlook. Add to this the overwhelming institutional and economic incentive to exploit these cues, and the AI community’s failure to consider Turkle’s misgivings seems all but inevitable.

Like almost all scientists, researchers in the field harbor only the best of intentions, and the point of AI, as they see it, is to empower consumers, to give them what they want. The vast bulk of ongoing research in Human-Computer Interaction is aimed at “improving the user experience,” identifying what cues trust instead of suspicion, attachment instead of avoidance. Since trust requires competence, a great deal of the research remains focused on developing the core cognitive competencies of specialized AI systems—and recent advances on this front have been nothing if not breathtaking. But the same can be said regarding interpersonal competencies—enough to inspire Clifford Nass and Corina Yen to write The Man Who Lied to His Laptop, a book touted as the How to Win Friends and Influence People of the 21st century. In the course of teaching machines how to better push our buttons, we’re learning how to better push them ourselves.

Precisely because it is so easily miscued, human social cognition depends on trust. Shapes, after all, are cheap, while soap operas represent a potential goldmine. This explains our powerful, hardwired penchant for tribalism: the intimacy of our hunter-gatherer past all but assured trustworthiness, providing a cheap means of nullifying our vulnerability to social deception. When Trump decries ‘fake news,’ for instance, what he’s primarily doing is signaling group membership. He understands, in the instinctive way we all understand, that the best way to repudiate damaging claims is to circumvent them altogether and focus on the group membership of the claimer. Trust, the degree to which we can take one another for granted, is the foundation of cooperative interaction.

We are about to be deluged with artificial friends. In a recent roundup of industry forecasts, Forbes reports that AI-related markets are already growing, and are expected to continue growing, by more than 50% per annum. Just last year, Microsoft launched its Bot Framework service, a public platform for creating ‘conversational user interfaces’ for a potentially endless variety of commercial purposes, all of it turning on Microsoft’s rapidly advancing AI research. “Build a great conversationalist,” the site urges. “Build and connect intelligent bots to interact with your users naturally wherever they are…” Of course, the term “naturally,” here, refers to the seamless way these inhuman systems cue our human social cognitive systems. Learning how to tweak, massage, and push our Darwinian buttons has become an out-and-out industrial enterprise.

As mentioned above, Human-Human Interaction consists of pushing these buttons all the time, prompting automatic scripts that prompt further automatic scripts, with only the rare communicative snag giving us pause for genuine conscious deliberation. It all works simply because our fellow humans comprise the ancestral ecology of social cognition. As it stands, cuing social cognitive reflexes out of school is largely the province of magicians, con artists, and political demagogues. Seen in this light, the AI revolution looks less a cornucopia of marvels than the industrialized unleashing of endless varieties of invasive species—an unprecedented overthrow of our ancestral social cognitive habitats.

A habitat that, arguably, is already under severe duress.

In 2006, Maki Fukasawa coined the term ‘herbivore men’ to describe the rising number of Japanese males expressing disinterest in marital or romantic relationships with women. And the numbers have only continued to rise. A 2016 National Institute of Population and Social Security Research survey reveals that 42 percent of Japanese men between the ages of 18 and 34 remain virgins, up six percentage points from a mere five years previous. For Japan, a nation already struggling with the economic consequences of depopulation, such numbers are disastrous.

And Japan is not alone. In Man, Interrupted: Why Young Men Are Struggling and What We Can Do About It, Philip Zimbardo (of Stanford Prison Experiment fame) and Nikita Coulombe provide a detailed account of how technological transformations—primarily online porn, video-gaming, and virtual peer groups—are undermining the ability of American boys to achieve academically as well as to maintain successful relationships. They see phenomena such as the growing MGTOW (‘men going their own way’) movement as products of the way exposure to virtual, technological environments leaves young men ill-equipped to deal with the rigours of genuine social interaction.

More recently, Jean Twenge, a psychologist at San Diego State University, has sounded the alarm on the catastrophic consequences of smartphone use for post-Millennials, arguing that “the twin rise of the smartphone and social media has caused an earthquake of a magnitude we’ve not seen in a very long time, if ever.” The primary culprit: loneliness. “For all their power to link kids day and night, social media also exacerbate the age-old teen concern about being left out.” Social media, in other words, seem to be serving the same function as the Cyberball game used by researchers to neuroimage the pain of social rejection. Only this time the experiment involves an entire generation of kids, and the game has no end.

The list of curious and troubling phenomena apparently turning on the ways mere connectivity has transformed our social ecology is well-nigh endless. Merely changing how we push one another’s Darwinian buttons, in other words, has impacted the human social ecology in historically unprecedented ways. And by all accounts, we find ourselves becoming more isolated, more alienated, than at any other time in human history.

So what happens when we change the who? What happens when the heaven of social belonging goes on sale?

Good question. There is no “Centre for the Scientific Study of Human Meaning” anywhere in the world. Within the HCI community, criticism is primarily restricted to the cognitivist/post-cognitivist debate, the question of whether cognition is intrinsically independent of, or dependent on, an agent’s ongoing environmental interactions. As the preceding should make clear, numerous disciplines find themselves wandering this or that section of the domain, but we have yet to organize any institutional pursuit of the questions posed here. Human social ecology, the study of human interaction in biologically amenable terms, remains the province of storytellers.

We quite literally have no clue as to what we are about to do.

Consider Mark Zuckerberg and Elon Musk’s recent ‘debate’ regarding the promise and threat of AI. Musk, of course, has garnered headlines for quite some time with fears of artificial superintelligence. He has famously called AI “our biggest existential threat,” openly referring to Skynet and the prospect of robots mowing down civilians on the streets. On a Sunday this past July, Zuckerberg went live in his Palo Alto backyard while smoking meats to host an impromptu Q&A. At the fifty-minute mark, he takes a question regarding Musk’s fears and responds, “I think people who are naysayers and try to drum up these doomsday scenarios—I don’t understand it. It’s really negative and in some ways I think it’s pretty irresponsible.”

On the Tuesday following, Musk tweeted in response: “I’ve talked to Mark about this. His understanding of the subject is limited.”

To the extent that human interaction is ecological (and how could it be otherwise?), both can be accused of irresponsibility and limited understanding. The threat of ‘superintelligence,’ though perhaps inevitable, remains far enough in the future to easily dismiss as a bogeyman. The same can be said regarding “peak human” arguments predicting mass unemployment. The threat of economic disruption, though potentially dire, is counter-balanced by the promise of new, unforeseen economic opportunity. This leaves us with the countless number of ways AI will almost certainly improve our lives: fewer car crashes, fewer misdiagnoses, and so on. As a result, one can predict how all such exchanges will end.

The contemporary AI debate, in other words, is largely a pseudo-debate.

The futurist Richard Yonck’s account of ‘affective computing’ in his recently released Heart of the Machine somewhat redresses this problem, but since he begins with the presupposition that AI represents a natural progression, that the technological destruction of ancestral social habitats is itself the ancestral habitat of humanity, he remains largely blind to the social ecological consequences of his subject matter. Espousing a kind of technological fatalism (or worse, fundamentalism), he characterizes AI as the culmination of a “buddy movie” as old as humanity itself. The oxymoronic, if not contradictory, prospect of ‘artificial friends’ simply does not dawn on him.

Neil Lawrence, a professor of machine learning at the University of Sheffield and technology columnist at The Guardian, is the rare expert who recognizes the troubling ecological dimensions of the AI revolution. Borrowing the distinction between System Two, or conscious, ‘mindful’ problem-solving, and System One, or unconscious, ‘mindless’ problem-solving, from cognitive psychology, he warns of what he calls System Zero, what happens when the market—via Big Data, social media, and artificial intelligence—all but masters our Darwinian buttons. As he writes,

“The actual intelligence that we are capable of creating within the next 5 years is an unregulated System Zero. It won’t understand social context, it won’t understand prejudice, it won’t have a sense of a larger human objective, it won’t empathize. It will be given a particular utility function and it will optimize that to its best capability regardless of the wider negative effects.”

To the extent that modern marketing (and propaganda) techniques already seek to cue emotional as opposed to rational responses, however, there’s a sense in which ‘System Zero’ and consumerism are coeval. Also, economics comprises but a single dimension of human social ecology. We have good reason to fear that Lawrence’s doomsday scenario, one where market and technological forces conspire to transform us into ‘consumer Borg,’ understates the potential catastrophe that awaits.

The closest one gets to a genuine analysis of the interpersonal consequences of AI lies in movies such as Spike Jonze’s science-fiction masterpiece, Her, or in the equally brilliant HBO series Westworld, created by Jonathan Nolan and Lisa Joy. ‘Science fiction,’ however, happens to be the blanket term AI optimists use to dismiss their critical interlocutors.

When it comes to assessing the prospect of artificial intelligence, natural intelligence is failing us.

The internet was an easy sell. After all, what can be wrong with connecting likeminded people?

The problem, of course, is that we are the evolutionary product of small, highly interdependent, hunter-gatherer communities. Historically, those disposed to be permissive had no choice but to continually negotiate with those disposed to be authoritarian. Each party disliked the criticism of the other, but the daily rigors of survival forced them to get along. No longer. Only now, a mere two decades later, are we discovering the consequences of creating a society that systematically segregates permissives and authoritarians. The election of Donald Trump has, if nothing else, demonstrated the degree to which technology has transformed human social ecology in novel, potentially disastrous ways.

AI has also been an easy sell—at least so far. After all, what can be wrong with humanizing our technological environments? Imagine a world where everything is ‘user friendly,’ compliant to our most petulant wishes. What could be wrong with that?

Well, potentially everything, insofar as ‘humanizing our environments’ amounts to dehumanizing our social ecology, replacing the systems we are adapted to solve, our fellow humans, with systems possessing no evolutionary precedent whatsoever, machines designed to push our buttons in ways that optimize hidden commercial interests. Social pollution, in effect.

Throughout the history of our species, finding social heaven has required risking social hell. Human beings are as prone to be demanding, competitive, hurtful—anything but ‘user friendly’—as otherwise. Now the industrial giants of the early 21st century are promising to change all that, to flood the spaces between us with machines designed to shoulder the onerous labour of community, citizenship, and yes, even love.

Imagine a social ecology populated by billions upon billions of junk intelligences. Imagine the solitary confinement of an inhuman crowd. How will we find one another? How will we tolerate the hypersensitive infants we now seem doomed to become?

Unkempt Nation, Disheveled Soul

by rsbakker

So this has been a mad summer in pretty much every respect. The first week of May, my hard drive died, and I lost pretty much everything I had written over the previous six months. My wife was in Venezuela at the time, marching, so I had a hard time wrapping my head around the psychological enormity of the event. It’s not every day you turn on the news to watch events embroiling your loved ones.

Anyway, I’m still pulling the pieces together. I had occasion to revisit some of my first blog posts, and I thought I would post a few snippets from way back in 2010, when we could still pretend technology wasn’t driving the world insane. Rather than get angry all over again at the lack of reviews, or fret for the future of democratic society in the technological age, I thought I would let my younger, less well-groomed self do the ranting.

I’ll be back with things more substantial soon.


September 14, 2010 – So why are so many fictional heroes writers? Aside from good old human psychology, I blame it on the old ‘Write What You Know’ literary maxim.

Like so many literary maxims it sounds appealing at first blush. After all, how can you be honest–authentic–unless you write ‘what you know’? But like all maxims it has a flip side: telling practitioners what they should do is at once telling them what they should not do. Telling writers to only write what they know is telling them to studiously avoid all the things their lives lack–adventure, romance, spectacle–which is to say, the very things that regular people crave.

So this maxim has the happy side-effect of policing who gets to communicate to whom, and so of securing the institutional boundaries of the literary specialist. Not only is real culture left to its own naive devices, it becomes the unflagging foil, a kind of self-congratulatory resource, one that can be tapped over and over again to confirm the literary writer’s sense of superiority. Thus all the writerly heroes, stranded in seas of absurdity.

September 16, 2010 – The pigeonhole has no bottom, believe you me. I used to be so naive as to think I could climb out, but now I’m starting to think it swallows everyone in the end. I wonder about all the other cranks and crackpots out there, about all the other sparks that have been snuffed by relentless inattention. It’s no accident that eulogies are so filled with clichés.

After all, it’s neurophysiology that I’m up against more than any passing cultural bigotry. The brain pigeonholes everything it encounters to better lower its caloric load, to economize. We sort far more than we ponder. Novelty, when we encounter it, is either confused for something old and stupid or comes across as errant noise. Things were this way long before corporations and capital.

So I find myself wondering what I should do. Maybe I should just resign myself to my fate, numb the pain, mellow those revenge fantasies. Become a fatalist.

But then there’s nothing like bitterness to keep that fire scorching your belly. And there’s nothing I fear more than becoming old and complacent. Only the well-groomed don’t have chips on their shoulders.

September 18, 2010 – What really troubles me is the way this hypocrisy has been institutionalized. So long as you treat ‘culture’ as a what, which is to say, as an abstract construct, a formalism, then you can congratulate yourself for all the myriad ways in which your abstractions disrupt those abstractions. But as soon as you treat ‘culture’ as a who, which is to say, as a cartoon we use to generalize over millions of living, breathing people, the notion of ‘disruption’ becomes pretty ridiculous pretty quickly. All it takes is one simple question–“Who is disrupted?”–and the illusion of criticality is dispelled. One little question.

The conceit is so weak. And yet somehow we’ve managed to raise a veritable landfill of illusory subversion upon it. ‘Literature,’ we call it.

Says a lot about the power of vanity, if you think about it.

As well as why I’m probably doomed to fail.

September 20, 2010 – But our culture has become frightfully compartmentalized. The web, which was supposed to blow open the doors of culture–to ‘flatten everything’–seems to have had the opposite effect. Since we’re hardwired to reflexively seek out affirmation and confirmation, rendering everything equally available has meant that our paths of least resistance no longer take us across unfamiliar territory. We can get what we want and need without taking detours through things we didn’t realize we wanted or needed. We can make an expedient bastion out of our parochial tastes.

February 27, 2011 – These people, it seems to me, have to be engaged, have to be challenged, if only so that the masses don’t succumb to their own weakness for self-serving chauvinism. These people are appealing simply because they are so adept at generating ‘reasons’ for self-serving intuitions that we all share: that we and our ways are special, exempt, and that Others are a threat to us. That our high-school is, like, really the greatest high-school on the planet. Confirmation bias, my-side bias, the list goes on. And given that humans have evolved to be easily and almost irrevocably programmed, it seems to me that the most important place to wage this battle is in the classroom. To begin teaching doubt as the highest virtue, as opposed to the madness of belief.

The prevailing madness.

Funny, huh? It’s the lapse in belief that these guys typically see as symptomatic of modern societal decline. But really what they’re talking about is a lapse in agreement. Belief is as pervasive as ever, but as a principle rather than any specific consensual canon. It stands to reason that the lack of ‘moral and cognitive solidarity’ would make us uncomfortable, considering the kinds of scarcity and competition faced by our ancestors.

January 13, 2011 – The problem is that human nature is adapted to environments where access to information was geographically indexed, where its accumulation exacted a significant caloric toll. We don’t call private investigators ‘gumshoes’ for no reason. We are adapted to environments where the info-gathering workload continually forced us to ‘settle,’ which is to say, make do with information other than what we originally desired.

This is what makes the ‘global village’ such a deceptive misnomer. In the preindustrial village, where everyone depended upon one another, our cognitive selfishness made quite a bit of adaptive sense: in environments where scarcity and interdependency force cognitive compromise, you can see how cognitive selfishness–finding ways to justify oneself while impugning potential competitors–might pay real dividends in terms of in-group prestige. Where the circumstantial leash is tight, it pays to pull and pull, and perhaps reach those morsels that escape others.

In the industrial village, however, the leash is far longer. But even still, if you want to pursue your views, geographical constraints force you to engage individuals who do not share them. Who knows what Bob across the road believes? (My Bob was an evangelical Christian, and I count myself lucky for having endlessly argued with him.)

In the information village the leash is cut altogether. The likeminded can effortlessly congregate in innumerable echo chambers. Of course, they can effortlessly congregate with those they disagree with as well, but… The tendency, by and large, is not only to seek confirmation, but to confuse it with intelligence and truth–which is why right-wingers tend to watch more Fox than PBS.

Now, enter all these specialized programs, which are bent on moulding your information environment into something as pleasing as possible. Don’t like the N-word? Well, we can make sure you never need to encounter it again–ever.

The world is sycophantic, and it’s becoming more so all the time. This, I think, is a far better cartoon generalization than ‘flat,’ insofar as it references the user, the intermediary, as well as the information environment.

The contemporary (post-posterity) writer has to incorporate this radically different social context into their practice (if that practice is to be considered even remotely self-critical). If you want to produce literary effects, then you have to write for a sycophantic world, find ways not simply to subvert the ideological defences of readers, but to trick the inhuman, algorithmic gate-keepers as well.

This means being strategically sycophantic. To give people what they want, sure, but with something more as well.