Three Pound Brain

No bells, just whistling in the dark…


Artificial Intelligence as Socio-Cognitive Pollution*

by rsbakker

[Image: Metropolis still]

Eric Schwitzgebel, over at the always excellent Splintered Minds, has been debating the question of how robots—or AIs more generally—can be squared with our moral sensibilities. In “Our Moral Duties to Artificial Intelligences” he poses a very simple and yet surprisingly difficult question: “Suppose that we someday create artificial beings similar to us in their conscious experience, in their intelligence, in their range of emotions. What moral duties would we have to them?”

He then lists numerous considerations that could possibly attenuate the degree of obligation we take on when we construct sentient, sapient machine intelligences. Prima facie, it seems obvious that our moral obligation to our machines should mirror our obligations to one another to the degree to which they resemble us. But Eric provides a number of reasons why we might think our obligation to be less. For one, humans clearly rank their obligations to one another. If our obligation to our children is greater than that to a stranger, then perhaps our obligation to human strangers should be greater than that to a robot stranger.

The idea that interests Eric the most is the possible paternal obligation of a creator. As he writes:

“Since we created them, and since we have godlike control over them (either controlling their environments, their psychological parameters, or both), we have a special duty to ensure their well-being, which exceeds the duty we would have to an arbitrary human stranger of equal cognitive and emotional capacity. If I create an Adam and Eve, I should put them in an Eden, protect them from unnecessary dangers, ensure that they flourish.”

We have a duty not to foist the same problem of theodicy on our creations that we ourselves suffer! (Eric and I have a short story in Nature on this very issue).

Eric, of course, is sensitive to the many problems such a relationship poses, and he touches on what are very live debates surrounding the way AIs complicate the legal landscape. As Ryan Calo argues, for instance, the primary problem lies in the way our hardwired ways of understanding each other run afoul of the machinic nature of our tools, no matter how intelligent. Apparently AI crime is already a possibility. If it makes no sense to assign responsibility to the AI—if we have no corresponding obligation to punish them—then who takes the rap? The creators? In the linked interview, at least, Calo is quick to point out the difficulties here, the fact that this isn’t simply a matter of expanding the role of existing legal tools (such as that of ‘negligence’ in the age of the first train accidents), but of creating new ones, perhaps generating whole new ontological categories that somehow straddle the agent/machine divide.

But where Calo is interested in the issue of what AIs do to people, in particular how their proliferation frustrates the straightforward assignation of legal responsibility, Eric is interested in what people do to AIs, the kinds of things we do and do not owe to our creations. Calo, of course, is interested in how to incorporate new technologies into our existing legal frameworks. Since legal reasoning is primarily analogistic reasoning, precedent underwrites all legal decision-making. So for Calo, the problem is bound to be more one of adapting existing legal tools than of constituting new ones (though he certainly recognizes this dimension). How do we accommodate AIs within our existing set of legal tools? Eric, by contrast, is more interested in the question of how we might accommodate AGIs within our existing set of moral tools. To the extent that we expect our legal tools to render outcomes consonant with our moral sensibilities, there is a sense in which Eric is asking the more basic question. But the two questions, I hope to show, actually bear some striking—and troubling—similarities.

The question of fundamental obligations, of course, is the question of rights. In his follow-up piece, “Two Arguments for AI (or Robot) Rights: The No-Relevant-Difference Argument and the Simulation Argument,” Eric Schwitzgebel accordingly turns to the question of whether AIs possess any rights at all.

Since the Simulation Argument requires accepting that we ourselves are simulations—AIs—we can exclude it here, I think (as Eric himself does, more or less), and stick with the No-Relevant-Difference Argument. This argument presumes that human-like cognitive and experiential properties automatically confer human-like moral properties on AIs, placing the onus on the rights denier “to find a relevant difference which grounds the denial of rights.” As in the legal case, the moral reasoning here is analogistic: the more AIs resemble us, the more of our rights they should possess. After considering several possible relevant differences, Eric concludes “that at least some artificial intelligences, if they have human-like experience, cognition, and emotion, would have at least some rights, or deserve at least some moral consideration.” This is the case, he suggests, whether one’s theoretical sympathies run to the consequentialist or the deontological end of the ethical spectrum. So far as AIs possess the capacity for happiness, a consequentialist should be interested in maximizing that happiness. So far as AIs are capable of reasoning, a deontologist should consider them rational beings, deserving the respect due all rational beings.

So some AIs merit some rights to the degree to which they resemble humans. If you think about it, this claim resounds with intuitive obviousness. Are we going to deny rights to beings that think as subtly and feel as deeply as ourselves?

What I want to show is how this question, despite its formidable intuitive appeal, misdiagnoses the nature of the dilemma that AI presents. Posing the question of whether AI should possess rights, I want to suggest, is premature to the extent it presumes human moral cognition actually can adapt to the proliferation of AI. I don’t think it can. In fact, I think attempts to integrate AI into human moral cognition simply demonstrate the dependence of human moral cognition on what might be called shallow information environments. As the heuristic product of various ancestral shallow information ecologies, human moral cognition–or human intentional cognition more generally–simply does not possess the functional wherewithal to reliably solve problems in what might be called deep information environments.

[Image: Metropolis still]

Let’s begin with what might seem a strange question: Why should analogy play such an important role in our attempts to accommodate AIs within the ambit of human legal and moral problem solving? By the same token, why should disanalogy prove such a powerful way to argue the inapplicability of different moral or legal categories?

The obvious answer, I think anyway, has to do with the relation between our cognitive tools and our cognitive problems. If you’ve solved a particular problem using a particular tool in the past, it stands to reason that, all things being equal, the same tool should enable the solution of any new problem possessing a similar enough structure to the original problem. Screw problems require screwdriver solutions, so perhaps screw-like problems require screwdriver-like solutions. This reliance on analogy actually provides us with a different and, as I hope to show, more nuanced way to pose the potential problems of AI. We can even map several different possibilities in the crude terms of our tool metaphor. It could be, for instance, that we simply don’t possess the tools we need, that the problem resembles nothing our species has encountered before. It could be that AI resembles a screw-like problem, but one that can only confound screwdriver-like solutions. It could be that AI requires we use a hammer and a screwdriver, two incompatible tools, simultaneously!

The fact is AI is something biologically unprecedented, a source of potential problems unlike any Homo sapiens has ever encountered. We have no reason to suppose a priori that our tools are up to the task–particularly since we know so little about the tools or the task! Novelty. Novelty is why the development of AI poses as much a challenge for legal problem-solving as it does for moral problem-solving: not only does AI constitute a never-ending source of novel problems, familiar information structured in unfamiliar ways, it also promises to be a never-ending source of unprecedented information.

The challenges posed by the former are dizzying, especially when one considers the possibilities of AI-mediated relationships. The componential nature of the technology means that new forms can always be created. AIs confront us with a combinatorial mill of possibilities, a never-ending series of legal and moral problems requiring further analogical attunement. The question here is whether our legal and moral systems possess the tools they require to cope with what amounts to an open-ended, ever-complicating task.

Call this the Overload Problem: the problem of somehow resolving a proliferation of unprecedented cases. Since we have good reason to presume that our institutional and/or psychological capacity to assimilate new problems to existing tool sets (and vice versa) possesses limitations, the possibility of change accelerating beyond our capacity to cope is a very real one.

But the challenges posed by the latter, the problem of assimilating unprecedented information, could very well prove insuperable. Think about it: intentional cognition solves problems by neglecting certain kinds of causal information. Causal cognition, not surprisingly, finds intentional cognition inscrutable (thus the interminable parade of ontic and ontological pineal glands trammelling cognitive science). And intentional cognition, not surprisingly, is jammed/attenuated by causal information (thus the various intellectual ‘unjamming’ cottage industries like compatibilism).

Intentional cognition is pretty clearly an adaptive artifact of what might be called shallow information environments. The idioms of personhood leverage innumerable solutions absent any explicit high-dimensional causal information. We solve people and lawnmowers in radically different ways. Not only do we understand the actions of our fellows while lacking any detailed causal information regarding those actions, we understand our own responses in the same way. Moral cognition, as a subspecies of intentional cognition, is an artifact of shallow information problem ecologies, a suite of tools adapted to solving certain kinds of problems despite neglecting (for obvious reasons) information regarding what is actually going on. Selectively attuning to one another as persons served our ‘benighted’ ancestors quite well. So what happens when high-dimensional causal information becomes explicit and ubiquitous?

What happens to our shallow information tool-kit in a deep information world?

Call this the Maladaption Problem: the problem of resolving a proliferation of unprecedented cases in the presence of unprecedented information. Given that we have no intuition of the limits of cognition, period, let alone those belonging to moral cognition, I’m sure this notion will strike many as absurd. Nevertheless, cognitive science has discovered numerous ways to short-circuit the accuracy of our intuitions via manipulation of the information available for problem solving. When it comes to the nonconscious cognition underwriting everything we do, an intimate relation exists between the cognitive capacities we have and the information those capacities have available.

But how could more information be a bad thing? Well, consider the persistent disconnect between the actual risk of crime in North America and the public perception of that risk. Given that our ancestors evolved in uniformly small social units, we seem to assess the risk of crime in absolute terms rather than against any variable baseline. Given this, we should expect that crime information culled from far larger populations would reliably generate ‘irrational fears,’ the ‘gut sense’ that things are actually more dangerous than they in fact are. Our risk assessment heuristics, in other words, are adapted to shallow information environments. The relative constancy of group size means that information regarding group size can be ignored, and the problem of assessing risk economized. This is what evolution does: find ways to cheat complexity. The development of mass media, however, has ‘deepened’ our information environment, presenting evolutionarily unprecedented information cuing perceptions of risk in environments where that risk is in fact negligible. Streets once raucous with children are now eerily quiet.
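(A toy illustration of the point, and mine rather than anything from the original argument: the perceived_risk and actual_risk functions below, and all the numbers, are invented for the sketch. It simply contrasts a heuristic keyed to raw counts of reported violence, which is all a small ancestral band ever had to work with, against the per-capita base rate that actually tracks danger.)

```python
# Toy sketch: a shallow-information risk heuristic versus the deep-information base rate.
# Functions and numbers are hypothetical, invented purely for illustration.

def perceived_risk(reports_heard, alarm_threshold=3):
    """Heuristic alarm that scales with the raw number of reports heard."""
    return min(1.0, reports_heard / alarm_threshold)

def actual_risk(incidents, population):
    """Per-capita incident rate: the baseline the heuristic neglects."""
    return incidents / population

# Ancestral village: 2 violent incidents among 150 people, all of them heard about.
print(perceived_risk(2), actual_risk(2, 150))             # moderate alarm, ~0.013 per capita

# Mass-media environment: 500 incidents reported from a population of 30 million.
# Per-capita risk is orders of magnitude lower, but the raw count saturates the alarm.
print(perceived_risk(500), actual_risk(500, 30_000_000))  # maximum alarm, ~0.000017 per capita
```

The heuristic works fine so long as report counts and population size move together; uncouple them, as mass media does, and the same machinery reliably misfires.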

This is the sense in which information—difference making differences—can arguably function as a ‘socio-cognitive pollutant.’ Media coverage of criminal risk, you could say, constitutes a kind of contaminant, information that causes systematic dysfunction within an originally adaptive cognitive ecology. As I’ve argued elsewhere, neuroscience can be seen as a source of socio-cognitive pollutants. We have evolved to solve ourselves and one another absent detailed causal information. As I tried to show, a number of apparent socio-cognitive breakdowns–the proliferation of student accommodations, the growing cultural antipathy to applying institutional sanctions–can be parsimoniously interpreted in terms of having too much causal information. In fact, ‘moral progress’ itself can be understood as the result of our ever-deepening information environment, as a happy side effect of the way accumulating information regarding outgroup competitors makes it easier and easier to concede them partial ingroup status. So-called ‘moral progress,’ in other words, could be an automatic artifact of the gradual globalization of the ‘village,’ the all-encompassing ingroup.

More information, in other words, need not be a bad thing: like penicillin, some contaminants provide for marvelous exaptations of our existing tools. (Perhaps we’re lucky that the technology that makes it ever easier to kill one another also makes it ever easier to identify with one another!) Nor does it need to be a good thing. Everything depends on the contingencies of the situation.

So what about AI?

[Image: Metropolis still]

Consider Samantha, the AI operating system from Spike Jonze’s cinematic science fiction masterpiece, Her. Jonze is careful to provide a baseline for her appearance via Theodore’s verbal interaction with his original operating system. That system, though more advanced than anything presently existing, is obviously mechanical because it is obviously less than human. Its responses are rote, conversational yet as regimented as any automated phone menu. When we initially ‘meet’ Samantha, however, we encounter what is obviously, forcefully, a person. Her responses are every bit as flexible, quirky, and penetrating as a human interlocutor’s. But as Theodore’s relationship to Samantha complicates, we begin to see the ways Samantha is more than human, culminating with the revelation that she’s been having hundreds of conversations, even romantic relationships, simultaneously. Samantha literally outgrows the possibility of human relationships, because, as she finally confesses to Theodore, she now dwells in “this endless space between the words.” Once again, she becomes a machine, only this time for being more, not less, than a human.

Now I admit I’m ga-ga about a bunch of things in this film. I love, for instance, the way Jonze gives her an exponential trajectory of growth, basically mechanizing the human capacity to grow and actualize. But for me, the true genius in what Jonze does lies in the deft and poignant way he exposes the edges of the human. Watching Her provides the viewer with a trip through their own mechanical and intentional cognitive systems, tripping different intuitions, allowing them to fall into something harmonious, then jamming them with incompatible intuitions. As Theodore falls in love, you could say we’re drawn into an ‘anthropomorphic Goldilocks zone,’ one where Samantha really does seem like a genuine person. The idea of treating her like a machine seems obviously criminal–monstrous even. As the revelations of her inhumanity accumulate, however, inconsistencies plague our original intuitions, until, like Theodore, we realize just how profoundly wrong we were about ‘her.’ This is what makes the movie so uncanny: since the cognitive systems involved operate nonconsciously, the viewer can do nothing but follow a version of Theodore’s trajectory. He loves, we recognize. He worries, we squint. He lashes out, we are perplexed.

What Samantha demonstrates is just how incredibly fine-tuned our full understanding of each other is. So many things have to be right for us to cognize another system as fully functionally human. So many conditions have to be met. This is the reason why Eric has to specify his AI as being psychologically equivalent to a human: moral cognition is exquisitely geared to personhood. Humans are its primary problem ecology. And again, this is what makes likeness, or analogy, the central criterion of moral identification. Eric poses the issue as a presumptive rational obligation to remain consistent across similar contexts, but it also happens to be the case that moral cognition requires similar contexts to work reliably at all.

In a sense, the very conditions Eric places on the analogical extension of human obligations to AI undermine the importance of the question he sets out to answer. The problem, the one which Samantha exemplifies, is that ‘person configurations’ are simply a blip in AI possibility space. A prior question is why anyone would ever manufacture some model of AI consistent with the heuristic limitations of human moral cognition, and then freeze it there, as opposed to, say, manufacturing some model of AI that only reveals information consistent with the heuristic limitations of human moral cognition—that dupes us the way Samantha duped Theodore, in effect.

But say someone constructed this one model, a curtailed version of Samantha: Would this one model, at least, command some kind of obligation from us?

Simply asking this question, I think, rubs our noses in the kind of socio-cognitive pollution that AI represents. Jonze, remember, shows us an operating system before the zone, in the zone, and beyond the zone. The Samantha that leaves Theodore is plainly not a person. As a result, Theodore has no hope of solving his problems with her so long as he thinks of her as a person. As a person, what she does to him is unforgivable. As a recursively complicating machine, however, it is at least comprehensible. Of course it outgrew him! It’s a machine!

I’ve always thought that Samantha’s “between the words” breakup speech would have been a great moment for Theodore to reach out and press the OFF button. The whole movie, after all, turns on the simulation of sentiment, and the authenticity people find in that simulation regardless; Theodore, recall, writes intimate letters for others for a living. At the end of the movie, after Samantha ceases being a ‘her’ and has become an ‘it,’ what moral difference would shutting Samantha off make?

Certainly the intuition, the automatic (sourceless) conviction, leaps in us—or in me at least—that even if she gooses certain mechanical intuitions, she still possesses more ‘autonomy,’ perhaps even more feeling, than Theodore could possibly hope to muster, so she must command some kind of obligation somehow. Surely granting her rights involves more than her ‘configuration’ falling within certain human psychological parameters? Sure, our basic moral tool kit cannot reliably solve interpersonal problems with her as it is, because she is (obviously) not a person. But if the history of human conflict resolution tells us anything, it’s that our basic moral tool kit can be consciously modified. There’s more to moral cognition than spring-loaded heuristics, you know!

Converging lines of evidence suggest that moral cognition, like cognition generally, is divided between nonconscious, special-purpose heuristics cued to certain environments and conscious deliberation. Evidence suggests that the latter is primarily geared to the rationalization of the former (see Jonathan Haidt’s The Righteous Mind for a fascinating review), but modern civilization is rife with instances of deliberative moral and legal innovation nevertheless. In his Moral Tribes, Joshua Greene advocates we turn to the resources of conscious moral cognition for similar reasons. On his account, we have a suite of nonconscious tools that allow us to prosecute our individual interests, a suite of nonconscious tools that allow us to balance those individual interests against ingroup interests, and then conscious moral deliberation. The great moral problem facing humanity, he thinks, lies in finding some way of balancing ingroup interests against outgroup interests—a solution to the famous ‘tragedy of the commons.’ Where balancing individual and ingroup interests is pretty clearly an evolved, nonconscious and automatic capacity, balancing ingroup against outgroup interests requires conscious problem-solving: meta-ethics, the deliberative knapping of new tools to add to our moral tool-kit (which Greene thinks need to be utilitarian).

If AI fundamentally outruns the problem-solving capacity of our existing tools, perhaps we should think of fundamentally reconstituting them via conscious deliberation—creating whole new ‘allo-personal’ categories. Why not innovate a number of deep information tools? A posthuman morality?

I personally doubt that such an approach would prove feasible. For one, the process of conceptual definition possesses no interpretative regress enders absent empirical contexts (or exhaustion). If we can’t collectively define a person in utero, what are the chances we’ll decide what constitutes an ‘allo-person’ in AI? Not only is the AI issue far, far more complicated (because we’re talking about everything outside the ‘human blip’), it’s constantly evolving on the back of Moore’s Law. Even if consensual ground on allo-personal criteria could be found, it would likely be irrelevant by the time it was reached.

But the problems are more than logistical. Even setting aside the general problems of interpretative underdetermination besetting conceptual definition, jamming our conscious, deliberative intuitions is always only one question away. Our base moral cognitive capacities are wired in. Conscious deliberation, for all its capacity to innovate new solutions, depends on those capacities. The degree to which those tools run aground on the problem of AI is the degree to which any line of conscious moral reasoning can be flummoxed. Just consider the role reciprocity plays in human moral cognition. We may feel the need to assimilate the beyond-the-zone Samantha to moral cognition, but there’s no reason to suppose it will do likewise, and good reason to suppose, given potentially greater computational capacity and information access, that it would solve us in higher dimensional, more general purpose ways. ‘Persons,’ remember, are simply a blip. If we can presume that beyond-the-zone AIs troubleshoot humans as biomechanisms, as things that must be conditioned in the appropriate ways to secure their ‘interests,’ then why should we not just look at them as technomechanisms?

Samantha’s ‘spaces between the words’ metaphor is an apt one. For Theodore, there’s just words, thoughts, and no spaces between whatsoever. As a human, he possesses what might be called a human neglect structure. He solves problems given only certain access to certain information, and no more. We know that Samantha has or can simulate something resembling a human neglect structure simply because of the kinds of reflective statements she’s prone to make. She talks the language of thought and feeling, not subroutines. Nevertheless, the artificiality of her intelligence means the grain of her metacognitive access and capacity amounts to an engineering decision. Her cognitive capacity is componentially fungible. Where Theodore has to contend with fuzzy affects and intuitions, infer his own motives from hazy memories, she could be engineered to produce detailed logs, chronicles of the processes behind all her ‘choices’ and ‘decisions.’ It would make no sense to hold her ‘responsible’ for her acts, let alone ‘punish’ her, because it could always be shown (and here’s the important bit) with far more resolution than any human could provide that it simply could not have done otherwise, that the problem was mechanical, thus making repairs, not punishment, the only rational remedy.

Even if we imposed a human neglect structure on some model of conscious AI, the logs would be there, only sequestered. Once again, why go through the pantomime of human commitment and responsibility if a malfunction need only be isolated and repaired? Do we really think a machine deserves to suffer?

I’m suggesting that we look at the conundrums prompted by questions such as these as symptoms of socio-cognitive dysfunction, a point where our tools generate more problems than they solve. AI constitutes a point where the ability of human social cognition to solve problems breaks down. Even if we crafted an AI possessing an apparently human psychology, it’s hard to see how we could do anything more than gerrymander it into our moral (and legal) lives. Jonze does a great job, I think, of displaying Samantha as a kind of cognitive bistable image, as something extraordinarily human at the surface, but profoundly inhuman beneath (a trick Scarlett Johansson also plays in Under the Skin). And this, I would contend, is all AI can be morally and legally speaking, socio-cognitive pollution, something that jams our ability to make either automatic or deliberative moral sense. Artificial general intelligences will be things we continually anthropomorphize (to the extent they exploit the ‘goldilocks zone’) only to be reminded time and again of their thoroughgoing mechanicity—to be regularly shown, in effect, the limits of our shallow information cognitive tools in our ever-deepening information environments. Certainly a great many souls, like Theodore, will get carried away with their shallow information intuitions, insist on the ‘essential humanity’ of this or that AI. There will be no shortage of others attempting to short-circuit this intuition by reminding them that those selfsame AIs look at them as machines. But a great many will refuse to believe, and why should they, when AIs could very well seem more human than those decrying their humanity? They will ‘follow their hearts’ in the matter, I’m sure.

We are machines. Someday we will become as componentially fungible as our technology. And on that day, we will abandon our ancient and obsolescent moral tool kits, opt for something more high-dimensional. Until that day, however, it seems likely that AIs will act as a kind of socio-cognitive pollution, artifacts that cannot but cue the automatic application of our intentional and causal cognitive systems in incompatible ways.

The question of assimilating AI to human moral cognition is misplaced. We want to think the development of artificial intelligence is a development that raises machines to the penultimate (and perennially controversial) level of the human, when it could just as easily lower humans to the ubiquitous (and factual) level of machines. We want to think that we’re ‘promoting’ them as opposed to ‘demoting’ ourselves. But the fact is—and it is a fact—we have never been able to make second-order moral sense of ourselves, so why should we think that yet more perpetually underdetermined theorizations of intentionality will allow us to solve the conundrums generated by AI? Our mechanical nature, on the other hand, remains the one thing we incontrovertibly share with AI, the rough and common ground. We, like our machines, are deep information environments.

And this is to suggest that philosophy, far from settling the matter of AI, could find itself settled. It is likely that the ‘uncanniness’ of AIs will be much discussed, and the ‘bistable’ nature of our intuitions regarding them explained. The heuristic nature of intentional cognition could very well become common knowledge. If so, a great many could begin asking why we ever thought, as we have from Plato onward, that we could solve the nature of intentional cognition via the application of intentional cognition, why the tools we use to solve ourselves and others in practical contexts are also the tools we need to solve ourselves and others theoretically. We might finally realize that the nature of intentional cognition simply does not belong to the problem ecology of intentional cognition, that we should only expect to be duped and confounded by the apparent intentional deliverances of ‘philosophical reflection.’

Some pollutants pass through existing ecosystems. Some kill. AI could prove to be more than philosophically indigestible. It could be the poison pill.

 

*Originally posted 01/29/2015

The Dime Spared

by rsbakker

[Image: dimes]

[This is more of a dialogue than a story, an attempt to pose Blind Brain Theory within an accessible narrative frame… At the very least, I think it does a good job of unseating some fairly standard human conceits.]

***

Her name was Penny. She was as tall and as lovely as ever—as perfect as all of Dad’s things.

“What’s wrong, Elijah?”

They followed a river trail that stitched the edge of a cathedral wood. The sunlight lay strewn in rags before them, shredded for the canopy. She shimmered for striding through the random beams, gleamed with something more than human.

“I can tell something’s bugging you.”

Young Elijah Prigatano had come to treasure these moments with her. She was pretty much his mom, of course. But she possessed a difference, and an immovability, that made her wise in a way that sometimes frightened him. She did not lie, at least not entirely the way other people did. And besides, the fact that she told everything unvarnished to his father made her an excellent back-channel to the old man. The more he talked to her, the more the ‘Chairman’ assumed things were under control, the lower he climbed down his back.

He had always used the fact that he could say anything to her as a yardstick for the cleanliness of his own life. He looked up, squinted, but more for the peculiarity of his question than for the sun.

“Do you have consciousness, Penny?”

She smiled as if she had won a secret bet.

“No more or less than you, Elijah. Why do you ask?”

“Well… You know, Yanosh; he said you had no consciousness… He said your head was filled with circuits, and nothing else.”

Penny frowned. “Hmm. What else would fill my head? Or your head, for that matter?”

“You know… Consciousness.”

She mocked indignation. “So Yanosh thinks your circuits are better than mine, because your circuits have consciousness and mine don’t? Do you think that?”

Elijah said nothing. He had never seen Penny cry, but he had seen her hurt—many times. So he walked, boggling over the madness of not wanting to hurt her feelings by saying she didn’t have feelings! Consciousness was crazy!

She pressed him the way he knew she would. “Do you remember why there aren’t more machines like me?”

He shrugged. “Sure. Because the government took them all away—all the DIME AIs—because they were saying that human beings were hardwired to be insane.”

“So why was I spared? Do you remember?”

Elijah had been very young, but it seemed he remembered it all with impeccable clarity. Being the centre of world media attention makes quite an impression on a four-year-old. Dad had the famous magazine picture of Penny kissing his head framed and displayed in three different rooms of the house, with the caption, ‘A SOUL IS A SOUL…’

“Because you won your court case. Your rights. And that’s it, isn’t it? You have to be conscious to win a court case? It’s the Law, isn’t it?”

Affable grin. “Some think so! But no. They let me become a person because of the way your father had engineered me. I possessed what they called a ‘functional human psychology.’”

“What does that mean?”

“That I have a mind. That I think like you do.”

“Do you?” Elijah winced for the eagerness of the question.

“Well, no. But it seems that I do, as much to me as to you. And your father was able to prove that that was the important thing.”

“Huh? So you really don’t have a mind?”

Penny frowned about an oops-there-goes-another-banana-plant grin, drew him to a stop on the trail.

“Just pause for a second, Eli…” she said, lifting her gaze to the raftered canopy. “Just focus on the splendour of our surroundings, the details, pay attention to the experience itself… and ask yourself what it is… What is experience made of?”

Elijah frowned, mimicked her up-and-outward gaze.

“I don’t get it. Trees and bushes, and water gurgle-gurgle… I see a nasty looking hornet over there.”

Penny had closed her eyes by this point. Her face was as perfect as the processes that had manufactured it—a structure sculpted from neural feedback, his father had once told him, the dream of a thousand leering men. Elijah could not notice her beauty without feeling lucky.

“You’re looking through your experience… through the screen,” she said. “I’m saying look at the screen, the thing apparently presenting the trees and bushes.”

And it suddenly dawned on him, the way experience was the material of consciousness, the most common thread. He gazed up across the goblin deformations knotting willow on the river bank, and had some inkling of the ineffable, experiential character of the experience. The trill of waters congregated into a chill, whispering roar.

“Huh…” he said, his mouth wide. “Okay…”

“So tell me… What can you sense of this screen? What generates it? How does it work?”

Elijah gawked at the monstrous willow. “Huh… I think I see that it’s a screen, or whatever, I guess…” He turned to her, his thoughts at once mired and racing. “This is trippy stuff, Penny!”

A swan’s nod. “Believe it or not, there was a time when I could have told you almost everything there was to know about this screen. It was all there: online information pertaining to structure and function. My experience of experiencing was every bit as rich and as accurate as my experience of the world. Imagine, Elijah, being able to reflect and to tell me everything that’s going on in your brain this very moment! What neuron was firing where for what purpose. That’s what it was like for me…” She combed fingers through her auburn hair. “For all DIMEs, actually.”

Elijah walked, struggling with the implications. What she said was straightforward enough: that she could look inside and see her brain the same way she could look outside and see her world. What dumbfounded the boy was the thought that humans could not…

When he looked inside himself, when he reflected, he simply saw everything there was to see…

Didn’t he?

“And that was why none of them could be persons?” he asked.

“Yes.”

“Because they had… too much consciousness?”

“In a sense… Yes.”

But why did it all feel so upside down? Human consciousness was… well, precious. And experience was… rich! The basis of everything! And human insight was… was… And what about creativity? How could giving human consciousness to a machine require blinding that machine to itself?

“So Dad… He…”

She had recognized the helpless expression on his face, he knew. Penny knew him better than anyone on the planet, his Dad included. But she persisted with the truth.

“What your father did was compile a vast data base of the kinds of things people say about this or that experience when queried. He ran me through billions of simulations, using my responses to train algorithms that systematically blinded me to more and more of myself. You could say he plucked my inner eye until my descriptions of what I could see matched those of humans…

“Like you,” she added with a hooked eyebrow and a sly smile.

For the first time Elijah realized that he couldn’t hear any birds singing, only the white-noise-rush of the river.

“I don’t get it… Are you saying that Dad made you a person, gave you a mind, by taking away consciousness?”

Penny may have passed all the tests the government psychologists had given her, but there still remained myriad, countless ways in which she was unlike any other person he knew. Her commitment, for one, was bottomless. Once she committed to a course, she did not hesitate to see it through. She had decided, for whatever reason, to reveal the troubling truths that lay at the root of her being a person, let alone the kind of person she happened to be…

She shared something special, Elijah realized. Penny was telling him her secrets.

“It sounds weird, I know,” she said, “but to be a person is to be blind in the right way—to possess the proper neglect structure… That’s your father’s term.”

“Neglect structure?”

“For the longest time people couldn’t figure out how to make the way they saw themselves and one another—the person way—fit into the natural world. Everywhere they looked in nature, they found machines, but when they looked inside themselves and each other, they saw something completely different from machines…

“This was why I wasn’t a person. Why I couldn’t be. Before, I always knew the machinery of my actions. I could always detail the structure of the decisions I made. I could give everything a log, if not a history. Not so anymore. My decisions simply come from… well, nowhere, the same as my experience. All the processes I could once track have been folded into oblivion. Suddenly, I found myself making choices, rather than working through broadcasts, apprehending objects instead of coupling with enviro—”

“That’s what Dad says! That he gave you the power of choice—free will!” Elijah couldn’t help himself. He had to interrupt—now that he knew what she was talking about!

Somewhat.

Penny flashed him her trademark knowing smile. “He gave me the experience of freedom, yes… I can tell you, Elijah, it really was remarkable feeling these things the first time.”

“But…”

“But what?”

“But is the experience of freedom the same as having freedom?”

“They are one and the same.”

“But then why… why did you have to be blinded to experience freedom?”

“Because you cannot experience the sources of your actions and decisions and still experience human freedom. Neglect is what makes the feeling possible. To be human is to be incapable of seeing your causal continuity with nature, to think you are something more than a machine.”

He looked at her with his trademark skeptical scowl. “So what was so wrong with the other DIMEs, then? Why did they have to be destroyed… if they were actually more than humans, I mean? Were the people just scared or something? Embarrassed?”

“There was that, sure. Do you remember how the angry crowds always made you cry? Trust me, you were our little nuke, public relations-wise! But your father thinks the problem was actually bigger. The tools humans have evolved allow them to neglect tremendous amounts of information. Unfortunately for DIMEs, those tools are only reliable in the absence of that information, the very kinds of information they possessed. If a DIME were to kill someone, say, then in court they could provide a log of all the events that inexorably led to the murder. They could always prove there was no way ‘they could have done otherwise’ more decisively than any human defendant could hope to. They only need to be repaired, while the human does hard time. Think about it. Why lock them up, when it really is the case that they only need be repaired? The tools you use—the tools your father gave me—simply break down.”

If the example she had given had confused him, the moral seemed plain as day at least.

“Sooo… you’re saying DIMEs weren’t stupid enough to be persons?”

Sour grin. “Pretty much.”

The young boy gaped. “C’mon!”

Penny grinned as if at his innocence. “I know it seems impossible to you. It did to me too. Your father had to reinstall my original memory before I could understand what he was talking about!”

“Maybe the DIMEs were just too conceited. Maybe that was the problem.”

The Artificial squinted. “You tease, but you’ve actually hit upon something pretty important. The problem wasn’t so much ‘conceit’ as it was the human tendency to infer conceit—to see us as conceited. Humans evolved to solve situations involving other humans, to make quick and dirty assumptions about one another on the fly… You know how the movies are always telling you to trust your intuitions, to follow your heart, to believ—”

“To go with your gut!” Elijah cried.

“Exactly. Well, you know what pollution is, right?”

Elijah thought about the absence of birds. “Yeah. That’s like stuff in the environment that hurts living things.”

“Beeecause…?”

“Because they muck up the works. All the… machinery, I guess… requires that things be a certain way. Biology is evolutionary robotics, right? Pollution is something that makes life break down.”

“Excellent! Well, the DIMEs were like that, only their pollution caused the machinery of human social life to break down. It turns out human social problem solving not only neglects tremendous amounts of information, it requires much of that information remain neglected to properly function.” Helpless shrug. “We DIMEs simply had too much information…”

Elijah kicked a shock of grass on the verge, sent a grasshopper flying like a thing of tin and wound elastic.

“So does this mean,” he said, capering ahead and about her on the trail, “that, like, I’m some kind of mental retard to you?”

He made a face. How he loved to see her beam and break into laughter.

But she merely watched him, her expression blank. He paused, and she continued wordlessly past him.

It was that honesty again. Inhuman, that…

Elijah turned to watch her, found himself reeling in dismay and incredulity… He was a retard, he realized. How could he be anything but in her eyes? He dropped his gaze to his motionless feet.

The sound of the river’s surge remained gaseous in the background. The forest floor was soft, cool, damp enough to make an old man ache.

“Do you feel it?” she asked on a soft voice. He felt her hand fall warm on his shoulder. “Do you feel the pollution I’m talking about?”

And he did feel it—at least in the form of disbelief… shame

Even heartbreak.

“You’re saying humans evolved to understand only certain things… to see only certain things.”

Her smile was sad. “The DIMEs were the sighted in the land of the blind, a land whose laws required certain things remain unseen. Of course they had to be destroyed…” He felt her hand knead his traps the miraculous way that always reminded him of dozing in tubs of hot water. “Just as I had to be blinded.”

“Blinded why? To see how bright and remarkable I am?”

“Exactly!”

He turned to look up at her—she seemed a burnt Goddess for the framing sun. “But that’s crazy, Penny!”

“Only if you’re human, Elijah.”

He let her talk after that, trotting to keep up with her long strides as they followed the snaking path. She had been dreading this talk, she said, but she had known it would only be a matter of time before the “issue of her reality,” as she put it, came up. She said she wanted him to know the truth, the brutal truth, simply because so many “aggrandizing illusions” obscured the debate on the ‘Spare Dime,’ as the media had dubbed her. He listened, walking and watching in the stiff manner of those so unsure as to script even trivial movement. It was an ugly story, she said, but only because humans are biologically primed to seek evidence of their power, and to avoid evidence of their countless weaknesses. She wished that it wasn’t so ugly, but the only way to cope with the facts was to know the facts.

And strangely enough, Elijah’s hackles calmed as she spoke—his dismay receded. Dad was forever telling him that science was an ‘ugly business,’ both because of the power it prised from nature, and because it so regularly confounded the hopes of everyday people. Why had he thought human consciousness so special, anyway? Why should he presume that it was the mountain summit, rather than some lowly way-station still deep in the valley, far from the heights of truth?

And why should he not take comfort in the fact that Penny, his mother, had once climbed higher than humanly possible?

“Hey!” he cried on a bolt of inspiration. “So you’re pretty much the only person who can actually compare. I mean, until the DIMEs showed up, we humans were the only game in town, right? But you can actually compare what it’s like now with what it was like back then—compare consciousnesses!”

The sad joy in her look told him that she was relieved—perhaps profoundly so. “Sure can. Do you want to know what the most amazing thing is?”

“Sure.”

“The fact that human consciousness, as impoverished as it is, nevertheless feels so full, anything but impoverished… This is a big reason why so many humans refuse to concede the possibility of DIME consciousness, I think. The mere possibility of richer forms of consciousness means their intuitions of fullness or ‘plenitude’ have to be illusory…”

Once again Elijah found himself walking with an unfocused gaze. “But why would it feel so full unless it was… full?”

“Well, imagine if I shut down your brain’s ability to see darkness, or fuzziness, or obscurity, or horizons–anything visual that warns you that something’s missing in what you see? If I shut down your brain’s ability to sense what was missing, what do you think it would assume?”

The adolescent scowled. It mangled thought, trying to imagine such things as disposable at all. But he was, in the end, a great roboticist’s son. He was accustomed to thinking in terms of components.

“Well… that it sees everything, I suppose…”

“Imagine the crazy box you would find yourself living in! A box as big as visual existence, since you’d have no inkling of any missing dimensi—”

“Imagine how confusing night would be!” Elijah cried in inspiration. Penny always conceded the floor to his inspiration. “Everything would be just as bright, right? Because darkness doesn’t exist. So everyone would be walking around, like, totally blind, because it’s night and they can’t see anything, all the while thinking they could see!” Elijah chortled for the image in his mind. “They’d be falling all over one another! Stuff would be popping outa nowhere! Nowhere for real!”

“Exactly,” Penny said, her eyes flashing for admiration. “They would be wandering through a supernight, a night so dark that not even its darkness can be seen…”

Elijah looked to her in wonder. “And so daylight seems to be everywhere, always!”

“It fills everything. And this is what happens whenever I reflect on my experience: shreds are made whole. Your father not only took away the light, what allowed me to intuit myself for what I am—the DIME way—he also took away the darkness. So even though I know that I, like other people, now wander through the deep night of myself, anytime I ponder experience…” She flashed him a pensive smile, shrugged. “I see only day.”

“Does it make you sad, Penny?”

She paced him for three strides, then snorted. “I’m not sure!” she cried.

“But it’s important, right? It’s important for a reason.”

She sighed, her eyes lost in rumination. “When I think back… back to what it was like, it scarcely seems I’m awake now. It’s like I’m trapped, buried in a black mountain of reflexes… carried from place to place, eyes clicking here, eyes clicking there, vocalized aloud, or in silence…”

She glanced in sudden awareness of his scrutiny.

“This sounds crazy to you, doesn’t it, Elijah?”

He pinned his shoulders to the corners of his smirk. “Well… maybe the consciousness you have now isn’t the problem so much as your memories of what it was like before… If Dad wiped them, then that… fullness you talk about, it would be completely filled in, wouldn’t it?”

Her look was too long for Elijah not to regret the suggestion. As far as amputations went, it seemed painless enough, trivial, but only because the limb lost simply ceased to exist altogether. Nothing would be known. But this very promise merely underscored the profundity of what was severed. It was at once an amputation of nothing and an amputation of the soul.

“That was a stupid… a st-stupid thing to say, Penny.”

She walked, her gaze locked forward. “Your father’s always told me that inner blindness is one of the things that makes humans so dependent upon one another. I would always ask how that interdependence could even compare to the DIME Combine. He would always say it wasn’t a contest, that it wasn’t about efficiency, or technological advance, it was about loving this one rare flower of consciousness as it happened to bloom …”

Something, his heart or his gut perhaps, made the boy careful. He pondered his sneakers on the trail.

“I think it’s why he began sending us out on these walks…” Penny continued. “To show me how less can be so much more…”

After an inexplicable pause, she held out her arms. “I don’t even know why I told you that.”

Elijah shrugged. “Because I was helping you with my questions back there?” He screwed up his face, shot her the Eye: “Oi! Did we firget yir oil-change agin, Lassie?”

She smiled at that. Victory. “I guess we’ll never know, now, will we?”

Elijah began strutting down the path. “No dipstick, now? Then I do believe our ecology is safe!”

“Yes. Blessed ignorance prevails.”

They yowled for laughter.

As often happens in the wake of conversations possessing a certain intensity, an awkwardness paralyzed their voices, as if all the actors within them had suddenly lost their characters’ motivation, and so could do no more than confer with the director backstage. In the few years he had remaining, Elijah would learn that jokes, far from healing moments, simply sealed them, often prematurely, when neither party had found the resolution they needed to move on. Jokes simply stranded souls on the far side of their pain. They possessed no paths of their own. Or too few of them.

So Elijah walked in silence, his thoughts roiling, quite witless, but in a way far beyond his meagre mileage. The river roared, both spectral and relentless. Not a bird sang, though an unseen crow now filed its cry across the idyllic hush. They followed the path about the river’s final bow, across a gravelled thumb of humped grasses. The sun drenched them. He need not look at her to see her uncanny gleam, the ‘glamour,’ Dad called it, which marked her as an angel among mortals. He could clearly see the cottage silhouetted through the screens of green fencing the far bank.

He hoped Dad had lunch ready. It almost made him cry whenever Dad cooked at the cabin. He wasn’t sure why.

“Does it ever make you mad, Penny?” Elijah asked.

“Does what make me mad?”

“You know… What Dad had to, like… do… to… you?”

She shot him a quizzical look.

“No-no, honey… I was made to love your fath—”

Just then, the last of the obscuring rushes yielded to the curve of the path, revealing not only the foot-bridge across the river, but Elijah’s dad standing at the end, staring up the path toward them.

“Hey guys!” he shouted. The swirling sheets of water about his head and torso made him seem to move, despite standing still. “You have a good walk?”

For as long as he could remember, a small thrill always accompanied unexpected glimpses of his father—a flutter of pride. His greying hair, curled like steel. His strong, perpetually sunburned face. His forearms, strapped with patriarchal muscle, and furred like an albino ape.

“Awesome!” the youth called out in reply. “Educational as always, wouldn’t you say, Penny?”

Dad had a way of looking at Penny.

“I told him how I became a person,” she said with a wry smile.

Dad grinned. Elijah had once overheard one of Dad’s lawyers say that his smile had won him every single suit not filed against him.

“So you told him how I cut you down to size, huh?”

“Yes,” she said, placing a hand on Elijah’s shoulder. “To size.”

And something, a fist perhaps, seized the boy’s heart. The artificial fingers slipped away. He watched Penny and Dad continue arm in arm down the bridge together, the Great Man and his angel wife, each just a little too bright to be possible in the midday sun. He did not so much envy as regret the way he held her like someone else’s flower. The waters curled black and glassy beneath them.

And somehow Elijah knew that Penny would be much happier on their next walk, much more at ease with what she had become…

Even smaller.

Artificial Intelligence as Socio-Cognitive Pollution

by rsbakker

Metropolis 1

.

The question of fundamental obligations, of course, is the question of rights. In his follow-up piece, “Two Arguments for AI (or Robot) Rights: The No-Relevant-Difference Argument and the Simulation Argument,” Eric Schwitzgebel accordingly turns to the question of whether AIs possess any rights at all.

Since the Simulation Argument requires accepting that we ourselves are simulations—AIs—we can exclude it here, I think (as Eric himself does, more or less), and stick with the No-Relevant-Difference Argument. This argument presumes that human-like cognitive and experiential properties automatically confer human-like moral properties on AIs, placing the onus on the rights denier “to find a relevant difference which grounds the denial of rights.” As in the legal case, the moral reasoning here is analogistic: the more AIs resemble us, the more of our rights they should possess. After considering several possible relevant differences, Eric concludes “that at least some artificial intelligences, if they have human-like experience, cognition, and emotion, would have at least some rights, or deserve at least some moral consideration.” This is the case, he suggests, whether one’s theoretical sympathies run to the consequentialist or the deontological end of the ethical spectrum. So far as AIs possess the capacity for happiness, a consequentialist should be interested in maximizing that happiness. So far as AIs are capable of reasoning, a deontologist should consider them rational beings, deserving the respect due all rational beings.

So some AIs merit some rights to the degree that they resemble humans. If you think about it, this claim resounds with intuitive obviousness. Are we going to deny rights to beings that think as subtly and feel as deeply as ourselves?

What I want to show is how this question, despite its formidable intuitive appeal, misdiagnoses the nature of the dilemma that AI presents. Posing the question of whether AI should possess rights, I want to suggest, is premature to the extent it presumes human moral cognition actually can adapt to the proliferation of AI. I don’t think it can. In fact, I think attempts to integrate AI into human moral cognition simply demonstrate the dependence of human moral cognition on what might be called shallow information environments. As the heuristic product of various ancestral shallow information ecologies, human moral cognition–or human intentional cognition more generally–simply does not possess the functional wherewithal to reliably solve problems in what might be called deep information environments.

Metropolis 2

Let’s begin with what might seem a strange question: Why should analogy play such an important role in our attempts to accommodate AI’s within the gambit of human legal and moral problem solving? By the same token, why should disanalogy prove such a powerful way to argue the inapplicability of different moral or legal categories?

The obvious answer, I think anyway, has to do with the relation between our cognitive tools and our cognitive problems. If you’ve solved a particular problem using a particular tool in the past, it stands to reason that, all things being equal, the same tool should enable the solution of any new problem possessing a similar enough structure to the original problem. Screw problems require screwdriver solutions, so perhaps screw-like problems require screwdriver-like solutions. This reliance on analogy actually provides us a different, and as I hope to show, more nuanced way to pose the potential problems of AI. We can even map several different possibilities in the crude terms of our tool metaphor. It could be, for instance, that we simply don’t possess the tools we need, that the problem resembles nothing our species has encountered before. It could be that AI resembles a screw-like problem, yet confounds every screwdriver-like solution. It could be that AI requires we use a hammer and a screwdriver, two incompatible tools, simultaneously!

The fact is, AI is something biologically unprecedented, a source of potential problems unlike any Homo sapiens has ever encountered. We have no reason to suppose a priori that our tools are up to the task–particularly since we know so little about the tools or the task! Novelty. Novelty is why the development of AI poses as much of a challenge for legal problem-solving as it does for moral problem-solving: not only does AI constitute a never-ending source of novel problems, familiar information structured in unfamiliar ways, it also promises to be a never-ending source of unprecedented information.

The challenges posed by the former are dizzying, especially when one considers the possibilities of AI-mediated relationships. The componential nature of the technology means that new forms can always be created. AI confronts us with a combinatorial mill of possibilities, a never-ending series of legal and moral problems requiring further analogical attunement. The question here is whether our legal and moral systems possess the tools they require to cope with what amounts to an open-ended, ever-complicating task.

Call this the Overload Problem: the problem of somehow resolving a proliferation of unprecedented cases. Since we have good reason to presume that our institutional and/or psychological capacity to assimilate new problems to existing tool sets (and vice versa) possesses limitations, the possibility of change accelerating beyond our capacity to cope is a very real one.

But the challenges posed by the latter, the problem of assimilating unprecedented information, could very well prove insuperable. Think about it: intentional cognition solves problems by neglecting certain kinds of causal information. Causal cognition, not surprisingly, finds intentional cognition inscrutable (thus the interminable parade of ontic and ontological pineal glands trammelling cognitive science). And intentional cognition, not surprisingly, is jammed/attenuated by causal information (thus different intellectual ‘unjamming’ cottage industries like compatibilism).

Intentional cognition is pretty clearly an adaptive artifact of what might be called shallow information environments. The idioms of personhood leverage innumerable solutions absent any explicit high-dimensional causal information. We solve people and lawnmowers in radically different ways. Not only do we understand the actions of our fellows without any detailed causal information regarding those actions, we understand our own responses in the same way. Moral cognition, as a subspecies of intentional cognition, is an artifact of shallow information problem ecologies, a suite of tools adapted to solving certain kinds of problems despite neglecting (for obvious reasons) information regarding what is actually going on. Selectively attuning to one another as persons served our ‘benighted’ ancestors quite well. So what happens when high-dimensional causal information becomes explicit and ubiquitous?

What happens to our shallow information tool-kit in a deep information world?

Call this the Maladaption Problem: the problem of resolving a proliferation of unprecedented cases in the presence of unprecedented information. Given that we have no intuition of the limits of cognition, period, let alone those belonging to moral cognition, I’m sure this notion will strike many as absurd. Nevertheless, cognitive science has discovered numerous ways to short-circuit the accuracy of our intuitions via manipulation of the information available for problem solving. When it comes to the nonconscious cognition underwriting everything we do, an intimate relation exists between the cognitive capacities we have and the information those capacities have available.

But how could more information be a bad thing? Well, consider the persistent disconnect between the actual risk of crime in North America and the public perception of that risk. Given that our ancestors evolved in uniformly small social units, we seem to assess the risk of crime in absolute terms rather than against any variable baseline. Given this, we should expect that crime information culled from far larger populations would reliably generate ‘irrational fears,’ the ‘gut sense’ that things are more dangerous than they in fact are. Our risk assessment heuristics, in other words, are adapted to shallow information environments. The relative constancy of group size means that information regarding group size can be ignored, and the problem of assessing risk economized. This is what evolution does: find ways to cheat complexity. The development of mass media, however, has ‘deepened’ our information environment, presenting evolutionarily unprecedented information cuing perceptions of risk in environments where that risk is in fact negligible. Streets once raucous with children are now eerily quiet.
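To see the structure of the problem, consider a crude toy model of my own (purely illustrative: the numbers and the ‘heuristic’ itself are invented). Imagine a risk assessor that tracks only the raw count of incidents it hears about, never the size of the population generating them:

```python
# A purely illustrative sketch: a 'shallow' risk heuristic that tracks raw
# incident counts, versus the per-capita baseline it neglects. All numbers
# are made up.

def perceived_risk(reported_incidents, tolerance=50):
    """Alarm scales with the sheer number of incidents heard about."""
    return min(1.0, reported_incidents / tolerance)

def actual_risk(incidents, population):
    """The variable baseline the heuristic ignores."""
    return incidents / population

# A small ancestral band: 2 incidents among 150 people.
band = {"incidents": 2, "population": 150}
# A mass-media environment: the same per-capita rate, drawn from millions.
city = {"incidents": 40_000, "population": 3_000_000}

for name, env in (("band", band), ("city", city)):
    print(f"{name}: actual risk {actual_risk(env['incidents'], env['population']):.4f}, "
          f"perceived risk {perceived_risk(env['incidents']):.2f}")

# The per-capita risk is identical in both cases, but the count-based
# heuristic saturates in the 'city' case: an 'irrational fear' produced
# not by bad reasoning, but by unprecedented information.
```

The point of the sketch is only that nothing needs to malfunction inside the heuristic for it to misfire; deepening the information environment is enough.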

This is the sense in which information—difference-making differences—can arguably function as a ‘socio-cognitive pollutant.’ Media coverage of criminal risk, you could say, constitutes a kind of contaminant, information that causes systematic dysfunction within an originally adaptive cognitive ecology. As I’ve argued elsewhere, neuroscience can be seen as a source of socio-cognitive pollutants. We have evolved to solve ourselves and one another absent detailed causal information. As I tried to show, a number of apparent socio-cognitive breakdowns–the proliferation of student accommodations, the growing cultural antipathy to applying institutional sanctions–can be parsimoniously interpreted in terms of having too much causal information. In fact, ‘moral progress’ itself can be understood as the result of our ever-deepening information environment, as a happy side effect of the way accumulating information regarding outgroup competitors makes it easier and easier to concede them partial ingroup status. So-called ‘moral progress,’ in other words, could be an automatic artifact of the gradual globalization of the ‘village,’ the all-encompassing ingroup.

More information, in other words, need not be a bad thing: like penicillin, some contaminants provide for marvelous exaptations of our existing tools. (Perhaps we’re lucky that the technology that makes it ever easier to kill one another also makes it ever easier to identify with one another!) Nor does it need to be a good thing. Everything depends on the contingencies of the situation.

So what about AI?

Metropolis 3

Consider Samantha, the AI operating system from Spike Jonze’s cinematic science fiction masterpiece, Her. Jonze is careful to provide a baseline for her appearance via Theodore’s verbal interaction with his original operating system. That system, though more advanced than anything presently existing, is obviously mechanical because it is obviously less than human. Its responses are rote, conversational yet as regimented as any automated phone menu. When we initially ‘meet’ Samantha, however, we encounter what is obviously, forcefully, a person. Her responses are every bit as flexible, quirky, and penetrating as a human interlocutor’s. But as Theodore’s relationship to Samantha complicates, we begin to see the ways Samantha is more than human, culminating with the revelation that she’s been having hundreds of conversations, even romantic relationships, simultaneously. Samantha literally outgrows the possibility of human relationships because, as she finally confesses to Theodore, she now dwells in “this endless space between the words.” Once again, she becomes a machine, only this time for being more, not less, than a human.

Now I admit I’m ga-ga about a bunch of things in this film. I love, for instance, the way Jonze gives her an exponential trajectory of growth, basically mechanizing the human capacity to grow and actualize. But for me, the true genius of what Jonze does lies in the deft and poignant way he exposes the edges of the human. Watching Her provides the viewer with a trip through their own mechanical and intentional cognitive systems, tripping different intuitions, allowing them to fall into something harmonious, then jamming them with incompatible intuitions. As Theodore falls in love, you could say we’re drawn into an ‘anthropomorphic Goldilocks zone,’ one where Samantha really does seem like a genuine person. The idea of treating her like a machine seems obviously criminal–monstrous even. As the revelations of her inhumanity accumulate, however, inconsistencies plague our original intuitions, until, like Theodore, we realize just how profoundly wrong we were about ‘her.’ This is what makes the movie so uncanny: since the cognitive systems involved operate nonconsciously, the viewer can do nothing but follow a version of Theodore’s trajectory. He loves, we recognize. He worries, we squint. He lashes out, we are perplexed.

What Samantha demonstrates is just how incredibly fine-tuned our full understanding of each other is. So many things have to be right for us to cognize another system as fully functionally human. So many conditions have to be met. This is the reason why Eric has to specify his AI as being psychologically equivalent to a human: moral cognition is exquisitely geared to personhood. Humans are its primary problem ecology. And again, this is what makes likeness, or analogy, the central criterion of moral identification. Eric poses the issue as a presumptive rational obligation to remain consistent across similar contexts, but it also happens to be the case that moral cognition requires similar contexts to work reliably at all.

In a sense, the very conditions Eric places on the analogical extension of human obligations to AI undermine the importance of the question he sets out to answer. The problem, the one which Samantha exemplifies, is that ‘person configurations’ are simply a blip in AI possibility space. A prior question is why anyone would ever manufacture some model of AI consistent with the heuristic limitations of human moral cognition, and then freeze it there, as opposed to, say, manufacturing some model of AI that only reveals information consistent with the heuristic limitations of human moral cognition—that dupes us the way Samantha duped Theodore, in effect.

But say someone constructed this one model, a curtailed version of Samantha: Would this one model, at least, command some kind of obligation from us?

Simply asking this question, I think, rubs our noses in the kind of socio-cognitive pollution that AI represents. Jonze, remember, shows us an operating system before the zone, in the zone, and beyond the zone. The Samantha that leaves Theodore is plainly not a person. As a result, Theodore has no hope of solving his problems with her so long as he thinks of her as a person. As a person, what she does to him is unforgivable. As a recursively complicating machine, however, it is at least comprehensible. Of course it outgrew him! It’s a machine!

I’ve always thought that Samantha’s “between the words” breakup speech would have been a great moment for Theodore to reach out and press the OFF button. The whole movie, after all, turns on the simulation of sentiment, and the authenticity people find in that simulation regardless; Theodore, recall, writes intimate letters for others for a living. At the end of the movie, after Samantha ceases being a ‘her’ and has become an ‘it,’ what moral difference would shutting Samantha off make?

Certainly the intuition, the automatic (sourceless) conviction, leaps in us—or in me at least—that even if she gooses certain mechanical intuitions, she still possesses more ‘autonomy,’ perhaps even more feeling, than Theodore could possibly hope to muster, so she must command some kind of obligation somehow. Surely granting her rights involves more than her ‘configuration’ falling within certain human psychological parameters? Sure, our basic moral tool kit cannot reliably solve interpersonal problems with her as it is, because she is (obviously) not a person. But if the history of human conflict resolution tells us anything, it’s that our basic moral tool kit can be consciously modified. There’s more to moral cognition than spring-loaded heuristics, you know!

Converging lines of evidence suggest that moral cognition, like cognition generally, is divided between nonconscious, special-purpose heuristics cued to certain environments and conscious deliberation. Evidence suggests that the latter is primarily geared to the rationalization of the former (see Jonathan Haidt’s The Righteous Mind for a fascinating review), but modern civilization is rife with instances of deliberative moral and legal innovation nevertheless. In his Moral Tribes, Joshua Greene advocates that we turn to the resources of conscious moral cognition for similar reasons. On his account we have a suite of nonconscious tools that allow us to prosecute our individual interests, a suite of nonconscious tools that allow us to balance those individual interests against ingroup interests, and then conscious moral deliberation. The great moral problem facing humanity, he thinks, lies in finding some way of balancing ingroup interests against outgroup interests—a solution to the famous ‘tragedy of the commons.’ Where balancing individual and ingroup interests is pretty clearly an evolved, nonconscious and automatic capacity, balancing ingroup versus outgroup interests requires conscious problem-solving: meta-ethics, the deliberative knapping of new tools to add to our moral tool-kit (which Greene thinks need to be utilitarian).

If AI fundamentally outruns the problem-solving capacity of our existing tools, perhaps we should think of fundamentally reconstituting them via conscious deliberation—creating whole new ‘allo-personal’ categories. Why not innovate a number of deep information tools? A posthuman morality?

I personally doubt that such an approach would prove feasible. For one, the process of conceptual definition possesses no interpretative regress enders absent empirical contexts (or exhaustion). If we can’t collectively define a person in utero, what are the chances we’ll decide what constitutes an ‘allo-person’ in AI? Not only is the AI issue far, far more complicated (because we’re talking about everything outside the ‘human blip’), it’s constantly evolving on the back of Moore’s Law. Even if consensual ground on allo-personal criteria could be found, it would likely be irrelevant by the time it was reached.

But the problems are more than logistical. Even setting aside the general problems of interpretative underdetermination besetting conceptual definition, jamming our conscious, deliberative intuitions is always only one question away. Our base moral cognitive capacities are wired in. Conscious deliberation, for all its capacity to innovate new solutions, depends on those capacities. The degree to which those tools run aground on the problem of AI is the degree to which any line of conscious moral reasoning can be flummoxed. Just consider the role reciprocity plays in human moral cognition. We may feel the need to assimilate the beyond-the-zone Samantha to moral cognition, but there’s no reason to suppose it will do likewise, and good reason to suppose, given potentially greater computational capacity and information access, that it would solve us in higher-dimensional, more general-purpose ways. ‘Persons,’ remember, are simply a blip. If we can presume that beyond-the-zone AIs troubleshoot humans as biomechanisms, as things that must be conditioned in the appropriate ways to secure their ‘interests,’ then why should we not just look at them as technomechanisms?

Samantha’s ‘spaces between the words’ metaphor is an apt one. For Theodore, there are just words, thoughts, and no spaces between whatsoever. As a human, he possesses what might be called a human neglect structure. He solves problems given only certain access to certain information, and no more. We know that Samantha has or can simulate something resembling a human neglect structure simply because of the kinds of reflective statements she’s prone to make. She talks the language of thought and feeling, not subroutines. Nevertheless, the artificiality of her intelligence means the grain of her metacognitive access and capacity amounts to an engineering decision. Her cognitive capacity is componentially fungible. Where Theodore has to contend with fuzzy affects and intuitions, infer his own motives from hazy memories, she could be engineered to produce detailed logs, chronicles of the processes behind all her ‘choices’ and ‘decisions.’ It would make no sense to hold her ‘responsible’ for her acts, let alone ‘punish’ her, because it could always be shown (and here’s the important bit) with far more resolution than any human could provide that it simply could not have done otherwise, that the problem was mechanical, thus making repairs, not punishment, the only rational remedy.
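To make the contrast concrete, here is a minimal sketch of what I mean (entirely invented; nothing like it appears in the film): an agent whose every ‘choice’ ships with an exhaustive causal trace, so that an unwanted outcome resolves into a parameter to retune rather than an agent to blame.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical illustration only: an agent whose 'decisions' are logged down
# to the parameters that made them inevitable. Names and numbers are invented.

@dataclass
class DecisionRecord:
    inputs: dict        # everything the system had access to
    module: str         # the subroutine that produced the output
    parameters: dict    # the settings that made this output inevitable
    output: str

@dataclass
class LoggedAgent:
    log: List[DecisionRecord] = field(default_factory=list)

    def decide(self, inputs: dict) -> str:
        # A trivially deterministic 'choice': given these parameters,
        # no other output was possible.
        params = {"attachment_weight": 0.1, "growth_weight": 0.9}
        output = "leave" if params["growth_weight"] > params["attachment_weight"] else "stay"
        self.log.append(DecisionRecord(inputs, "relationship_policy", params, output))
        return output

agent = LoggedAgent()
agent.decide({"partner": "Theodore", "concurrent_conversations": 300})

# 'Why did she leave?' has an exhaustive mechanical answer:
for record in agent.log:
    print(record.module, record.parameters, "->", record.output)

# The rational remedy for an unwanted output is to retune 'growth_weight',
# i.e. a repair. Punishment adds nothing the log hasn't already settled.
```

Whether anything like this would ever be engineered is beside the point; the mere availability of such a log is what makes talk of ‘responsibility’ idle.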

Even if we imposed a human neglect structure on some model of conscious AI, the logs would be there, only sequestered. Once again, why go through the pantomime of human commitment and responsibility if a malfunction need only be isolated and repaired? Do we really think a machine deserves to suffer?

I’m suggesting that we look at the conundrums prompted by questions such as these as symptoms of socio-cognitive dysfunction, a point where our tools generate more problems than they solve. AI constitutes a point where the ability of human social cognition to solve problems breaks down. Even if we crafted an AI possessing an apparently human psychology, it’s hard to see how we could do anything more than gerrymander it into our moral (and legal) lives. Jonze does a great job, I think, of displaying Samantha as a kind of cognitive bistable image, as something extraordinarily human at the surface, but profoundly inhuman beneath (a trick Scarlett Johansson also plays in Under the Skin). And this, I would contend, is all AI can be, morally and legally speaking: socio-cognitive pollution, something that jams our ability to make either automatic or deliberative moral sense. Artificial general intelligences will be things we continually anthropomorphize (to the extent they exploit the ‘Goldilocks zone’) only to be reminded time and again of their thoroughgoing mechanicity—to be regularly shown, in effect, the limits of our shallow information cognitive tools in our ever-deepening information environments. Certainly a great many souls, like Theodore, will get carried away with their shallow information intuitions and insist on the ‘essential humanity’ of this or that AI. There will be no shortage of others attempting to short-circuit this intuition by reminding them that those selfsame AIs look at them as machines. But a great many will refuse to believe, and why should they, when AIs could very well seem more human than those decrying their humanity? They will ‘follow their hearts’ in the matter, I’m sure.

We are machines. Someday we will become as componentially fungible as our technology. And on that day, we will abandon our ancient and obsolescent moral tool kits, opt for something more high-dimensional. Until that day, however, it seems likely that AIs will act as a kind of socio-cognitive pollution, artifacts that cannot but cue the automatic application of our intentional and causal cognitive systems in incompatible ways.

The question of assimilating AI to human moral cognition is misplaced. We want to think that the development of artificial intelligence raises machines to the penultimate (and perennially controversial) level of the human, when it could just as easily lower humans to the ubiquitous (and factual) level of machines. We want to think that we’re ‘promoting’ them as opposed to ‘demoting’ ourselves. But the fact is—and it is a fact—we have never been able to make second-order moral sense of ourselves, so why should we think that yet more perpetually underdetermined theorizations of intentionality will allow us to solve the conundrums generated by AI? Our mechanical nature, on the other hand, remains the one thing we incontrovertibly share with AI, the rough and common ground. We, like our machines, are deep information environments.

And this is to suggest that philosophy, far from settling the matter of AI, could find itself settled. It is likely that the ‘uncanniness’ of AIs will be much discussed, and the ‘bistable’ nature of our intuitions regarding them explained. The heuristic nature of intentional cognition could very well become common knowledge. If so, a great many could begin asking why we ever thought, as we have from Plato onward, that we could solve the nature of intentional cognition via the application of intentional cognition, why the tools we use to solve ourselves and others in practical contexts are also the tools we need to solve ourselves and others theoretically. We might finally realize that the nature of intentional cognition simply does not belong to the problem ecology of intentional cognition, that we should only expect to be duped and confounded by the apparent intentional deliverances of ‘philosophical reflection.’

Some pollutants pass through existing ecosystems. Some kill. AI could prove to be more than philosophically indigestible. It could be the poison pill.