The Zuckerberg Illusion
by rsbakker
So the special issue of Wired Magazine edited by Barack Obama has just come out, and I wanted to draw attention to Mark Zuckerberg’s response to the President’s challenge to “ensure that artificial intelligence helps rather than hurts us.” Somehow, someway, this issue has to move away from the ‘superintelligence’ debate and toward a collective conversation on the impact of AI on human cognitive ecology. Zuckerberg’s response betrays a tragic lack of understanding from the man who, arguably, has already transformed our social cognitive ecologies more radically than any other individual in the history of the human race. If anyone knows some way of delivering this message from steerage up to the bridge, forward the bloody thing, because the combination of this naivete with the growing ubiquity of AI is becoming, ahem, a little scary. The more baked-in the existing trends become, the harder the hard decisions will become.
Zuckerberg begins his response to Obama’s challenge sounding very much like a typical American industrialist: only the peculiarity of his product makes his claim remarkable.
“People have always used technology as a lever to improve lives and increase productivity. But at the beginning of every cycle of invention, there’s a temptation to focus on the risks that come with a new technology instead of the benefits it will bring.
Today we are seeing that happen with artificial intelligence.”
What he wants to do in this short piece is allay the fears that have arisen regarding AI. His strategy for doing so is to show how our anxieties are the same overblown anxieties that always occasion the introduction of some new technology. These too, he assures us, will pass in time. Ultimately, he writes:
“When people come up with doomsday scenarios about AI, it’s important to remember that these are hypothetical. There’s little basis outside science fiction to believe they will come true.”
Of course, one need only swap out ‘AI’ with ‘industrialization’ to appreciate that not all ‘doomsday scenarios’ are equal. By any comparison, the Anthropocene already counts as one of the great extinction events to befall the planet, an accomplished ‘doomsday’ for numerous different species, and an ongoing one for many others. The reason for this ongoing extinction has to do with the supercomplicated systems of interdependency comprising our environments. Everything is adapted to everything else. Like pouring sand into a gas tank, introducing unprecedented substances and behaviours (such as farming) into existing ecologies progressively perturbs these systems, until eventually they collapse, often taking down other systems depending on them.
Malthus was the first on record to predict the possibility of natural environmental collapse, in the 18th century, but the environmental movement only really got underway as the consequences of industrialization became evident in the 19th. The term pollution, which during the Middle Ages meant defilement, took on its present meaning of “unnatural substance in natural systems” at the turn of the 20th century.
Which begs the question: Why were our ancestors so long in seeing the peril presented by industrialization? Well, for one, the systems comprising ecologies are all, in some way or another, survivors of prior ecological collapses. Ecologies are themselves adaptive systems, exhibiting remarkable resilience in many cases—until they don’t. The supercomplicated networks of interdependence constituting environments only became obvious to our forebears when they began really breaking down. Once one understands the ecological dimension of natural environments, the potentially deleterious impact of ecologically unprecedented behaviours and materials becomes obvious. If the environmental accumulation of industrial by-products constitutes an accelerating trend, then far from a science fiction premise, the prospect of accelerating ecological degradation of environments becomes a near certainty, and the management of ecological consequences an absolute necessity.
Which begs a different, less obvious question: Why would these networks of ecological interdependence only become visible to our ancestors after they began breaking down? Why should humans initially atomize their environments, and only develop complex, relational schemes after long, hard experience? The answer lies in the ecological nature of human cognition, the fact that we evolved to take as much ‘for granted’ as possible. The sheer complexity of the deep connectivity underwriting our surrounding environments renders them computationally intractable, and thus utterly invisible to us. (This is why the question probably seemed like such an odd one to ask: it quite literally goes without saying that we had to discover natural ecology.) So cognition exploits the systematic correlations between what information is available and the systems requiring solution to derive ecologically effective behaviours. Our penchant for atomizing and essentializing our environments enables us to cognize ecology despite remaining blind to it.
What does any of this have to do with Zuckerberg’s optimistic argument for plowing more resources into the development of AI? Well, because I think it’s pretty clear he’s labouring under the very same illusion as the early industrialists, the illusion of acting in a grand, vacant arena, a place where unintended consequences magically dissipate instead of radiate.
The question, recall, is whether doomsday scenarios about AI warrant widespread alarm. It seems pretty clear, and I’m sure Zuckerberg would agree, that doomsday scenarios about industrialization do warrant widespread alarm. So what if what Zuckerberg and everyone else is calling ‘AI’ actually constitutes a form of cognitive industrialization? What will be the cognitive ecological impact of such an event?
We know that human cognition is thoroughly heuristic, so we know that human cognition is thoroughly ecological. The reason Sherry Turkle and Deirdre Barrett and others worry about the ease with which human social cognition can be hacked turns on the fact that human social cognition is ecological through and through, dependent on stable networks of interdependence. The fact is human sociocognition evolved to cope with other human intelligences, to solve on the basis of cues systematically correlated to other human brains, not to supercomputers mining vast data sets. Take our love of flattery. We evolved in ecologies where our love of flattery is balanced against the inevitability of criticism. Ancestrally, pursuing flattery amounts to overcoming—i.e., answering—criticism. We generally hate criticism, but given our cognitive ecology, we had no choice but ‘to take our medicine.’
And this is but one of countless examples.
The irony is that Zuckerberg is deeply invested in researching human cognitive ecology: computer scientists (like Hector Levesque) can rail against ‘bag of tricks’ approaches to cognition, but they will continue to be pursued because behaviour cuing behaviour is all that’s required (for humans or machines, I think). Now Zuckerberg, I’m sure, sees himself exclusively in the business of providing value for consumers, but he needs to understand how his dedication to enable and delight automatically doubles as a ruthless quest to demolish human cognitive ecology. Rewriting environments ‘to make the user experience more enjoyable’ is the foundation of all industrial enterprise, all ecological destruction, and the AI onslaught is nothing if not industrial.
Deploying systems designed to cue human social cognition in the absence of humans is pretty clearly a form of deception. Soon, every corporate website will be a friend… soulful, sympathetic, utterly devoted to our satisfaction, as well as inhuman, designed to exploit, and knowing us better than any human could hope to, including ourselves. And as these inhuman friends become cheaper and cheaper, we will be deluged by them, ‘junk intelligences,’ each of them so much wittier, so much wiser, than any mundane human can hope to appear.
“At a very basic level, I think AI is good and not something we should be afraid of,” Zuckerberg concludes. “We’re already seeing examples of how AI can unlock value and improve the world. If we can choose hope over fear—and if we advance the fundamental science behind AI—then this is only the beginning.”
Indeed.
Reblogged this on The Ratliff Notepad.
I more or less resigned myself to the brave new world when I read this: http://www.bbc.com/news/world-asia-china-34592186
Weave it in with all the fancy data-miners, NL interpreters and now proto AIs that will manage wayward citizens. How do we deal with something like that? Christ, it’s tough to stay cheery in this world. Seeing as elections are around the corner though, I will use this last opportunity to say: Thanks Obama.
Wow, that sure reminds me of Satoshi Itoh’s “Harmony”. Japan, 2060. Three girls talking about how it was like before the word “privacy” became obscene.
“…There was no need or means to display personal information at other times.”
“Why not?”
“Because privacy was so important back then.”
“Privacy?” Cian giggled. “Miach, you dog!”
“They didn’t have [augmented reality] like we do, y’know. There were physical limitations to how much information you could get out there.”
“That’s true,” I said, adding so that Cian could understand, “You would’ve had to walk around with a big sign around your neck if you wanted to do what we do today.”
Great book :)!
I think the nature of our business relationship with China is such that we will also eventually have social credit ratings. I can see a time in the not too distant future when having a social credit rating will be a mark of modernity. I can see a time when privacy will seem outmoded. In a funny way, this can seem like a reversion to the old days, when we lived on the open grassland in small tribes or kinship groups and everybody knew everybody else’s business. Lack of privacy can generate powerful incentives to conform to group norms, and a world being driven to conform to a universally applicable set of cultural norms might be a more peaceful world than the one we have now, a sort of Hellstrom’s Hive without the biological engineering. Or you can think of it as soft totalitarianism. I don’t know what we meant by political liberty, but it was always overrated.
Is the social credit score the Mark of the Beast?
Insert smiley face here.
This conforming to group norms is what worries me. We’ve seen what internet job lynch mobs do to people for saying a thing that could be vaguely construed as offensive, irrespective of their competence at their jobs. Now to imagine a state or a corporation having that amount of insight and power is depressing. I think babies are stupid. I think the public transport is too crowded. I think people with more than two kids maybe need to stop and chill a little with their ‘be fruitful and multiply’ scheme. I don’t want to get sanctioned for that. So what happens? Minority groups band together and start lashing out at the ‘normal’ collective. It’s such a shitty lose-lose situation.
John,
The guy who denies climate change feels the same way. So who does get sanctioned?
As much as these people, or hell, probably even me, are detrimental to the survival of advanced civilization, I still don’t want him to be censored or influenced in any way. If you don’t let a man be stupid, is he really a man? Yeah, we still have the societal tools of shaming, shunning and ignoring, but what I see coming is too crude and too direct. I am living in the past, I know. Some things are hard to let go.
Trump won just right now. Wow. I didn’t see that coming, so what the hell do I know. I am smoking my last cigarette now. Crazy, crazy world.
A more appropriate question here:
Has anybody tried one of these yet?
http://www.wsj.com/articles/companies-rally-to-build-chatbots-for-messaging-services-1478192346
Lol – you’re hitting an interesting crux with these lately.
While I don’t dispute your considerations, I’m a lot more worried about what people are going to make of big data before the algorithms inevitably become insanely more efficacious.
Look no further than Netflix, arguably the first major entertainment company to source its data for content cues. This is why no one seems to like Adam Sandler’s straight to Netflix movies, yet they are the most watched original Netflix content…
“This is why no one seems to like Adam Sandler’s straight to Netflix movies, yet they are the most watched original Netflix content…”
Truly the most nightmarish of situations for our species.
The insane efficacy is closer than many think! But I agree with your point: I’m simply serving a slender slice of a far larger cake, here.
Cheesecake ;).
https://digest.bps.org.uk/2016/09/16/ten-famous-psychology-findings-that-its-been-difficult-to-replicate/
To beg the question is to assume the truth of a proposition which remains to be established. Here’s an easy rule of thumb for spotting misuse of the phrase: A situation or set of circumstances cannot engage in question begging. Only people can do that.
Anytime you’re tempted to write the phrase “which begs the question” you are about to abuse the English language.
Other than the repeated misuse of “begs the question” I enjoyed your post.
Language evolves. The meaning of “begging the question” expanded long ago. Attempts by you, and other pedants, to fix it in stone will always fail.
The situations referred to seem saturated with people and their attitudes. If Scott had just been talking about a volcano erupting or an earth quake, I’d agree with you – those things couldn’t beg a question.
Nope. Still a misuse.
I realize the battle is all but lost at this point. The “colloquial use” which muddies the waters and erects a barrier to understanding the original meaning is now the dominant usage, but it’s still a distraction for people who understand the meaning of the phrase as it pertains to logic and argumentation.
The phrase literally can’t be aimed at more than one person at a time, by its imaginary rules? Oh well, if that’s how the RPG works, then it is such.
Agreed, which is why I opted to use the phrase colloquially. 😉
Your writing would be stronger if you avoided that particular colloquialism.
I realize that I’ve latched onto a point of style over substance. I’m discussing the content of your post with the Friends of the C-Realm here:
https://www.facebook.com/groups/c.realm/permalink/10154727078436777/
Reblogged this on synthetic zero.
I’m thinking Zuckerberg sees AI as a horse and he’s the one with the whip – ie, a control issue and one he’s thoroughly on top of. Not sure whether he’d take on the pollution angle, might be too esoteric (ie, unexpected). Instead, how would sociocognitive pollution undermine him being in control?
He’s just saying politically correct shit in the best interest of the company he’s representing.
Bring on the Strong AI. Bring on the technological singularity. Humanity’s bid for immortal offspring (and probably overlords, but whatever).
When we can’t even cope with existential angst ourselves, and our children would be exquisitely attuned to the world – like birthing a child straight into an acid bath.
So our 55 year old Prez asked 32 year old Mark Zuckerberg his opinion on AI and he gives him one of the most stock answers you can give.
BTW Zuckerberg originated Facebook as the Harvard social network that allowed the Harvard boys to rate the physical appearance of the coeds. Are you listening liberal Hillary supporter Sheryl Sandberg?
Not quite as bad as President Jimmy Carter back in the 70’s asking his 12 year old daughter about nuclear weapons or Emperor Nero asking his nephew about instant Roman fire sticks. But it may be up there.
Ain’t that the truth. Is the article online anywhere?–never even dawned on me to look, old magazine toting dinosaur that I am!
I found this: https://www.wired.com/2016/10/obama-six-tech-challenges/
Seems to go right to ageism?
https://mathbabe.org/2016/11/07/fake-news-false-information-and-stupid-polls/
Amazing stuff. There has been a flurry of research on the topic, though, and it appears that the polarization and ingroup sorting is true, but that the ‘media bubble’ actually isn’t. I’ve nuanced my own argument in light of this… I hope to have something up soon.
http://www.uclalawreview.org/utopia-a-technologically-determined-world-of-frictionless-transactions-optimized-production-and-maximal-happiness/
Great post Scott. One note from my perspective astride my hobby horse: I’d contest your image of humans initially ‘atomising’ their worlds, only to ‘develop complex, relational schemes after long, hard experience’. It seems possible if not probable that our cultural systems embraced complex relational schemes for a long time before civilisation decomposed those cognitive ecologies with atomised models, and before industrialisation atomised even more radically while simultaneously rediscovering ecological thought. We need to be cautious about projecting evidence from contemporary foragers into the deep past, as it’s entirely possible that some of the sophisticated cultural systems found among hunter-gatherers evolved more recently – and it’s just that these systems don’t correlate strongly or at all with material remains, meaning they can’t be tracked through the archaeological record. But I think the general move, of appreciating the sophisticated ecological nature of pre-civilised thought, and seeing certain modern ‘discoveries’ such as ecology as a form of re-emergence at a different level, has a lot of mileage in it (a ‘return of rather than a return to the Palaeolithic’).
This would go alongside the interesting overlaps between animistic belief systems and the ontological / social issues raised by AI. Of course the differences between the environments we’re now creating and those that we left when we began to domesticate nature and project other-than-human agency into the lofty heavens may be more significant than the overlaps. But it seems worth taking on board the many ways in which modernity is recapitulating pre-civilised modes – albeit in very new ways. I guess the main difference is that pre-civilised appreciation of ecology is largely experiential, sensual, immediately relational – perhaps governed by abstractions such as ‘spirits’, but still largely based on sociality. Modern ecological thought is (has to be) more abstract, to try to embrace the non-local and more complex relations.
Implicitly, of course, all human cognition is ‘complex and relational.’ Explicitly, atomization and essentialization are impossible to avoid, however modern or preliterate–simply because it’s the cost of human cognition. We fetishize our environments every bit as much as our ancestors did, the difference being we have this vast reservoir of scientific (high-dimensional, relational) information available to spoil our intuitions. What I’m saying is that preliterate societies had no such reservoir available.
I guess I’ve never seen the warrant behind the notion that modern technical environments are ‘recapitulating’ premodern ones (you encounter this claim in the privacy debate all the time, for instance). Our shared humanity means shared cognitive systems, so something is always being ‘recapitulated,’ whether used in ancestral ways or radically repurposed. To say that some technical environment recapitulates some ancestral one, on the other hand, requires that we ignore just about everything about that environment–apart from this or that aspect. Think ‘global village.’
That’s kind of the moral of the post: if it feels like it’s recapping something, then it’s deceiving you.
Assuming your interpretation of AI risk is correct, and we don’t all just end up as nuclear toast, batteries in The Matrix or assimilated as computronium (or paperclips, whatever), then Anabaptists are an interesting case– they don’t just completely reject technology… they have a very long and arduous process by which technology is vetted and allowed into their societies as long as it doesn’t undermine their core social organization (ie– the church). They are actually best positioned to be the cockroaches that inherit the earth after the Semantic Apocalypse, because they will still have meaning at the core of their social existence. As akratic society around them goes batshit due to your hypothesized semantic crash space, they’ll be in a position to continue to thrive, assuming environmental devastation doesn’t wipe out the possibility of their agrarian existence.
Nuclear shelters? No! Invest in horse buggies now.
Great point. Great premise for a story, actually.
Except for seeing the broader picture, you’d never be of the people. You’d always be on the outer. Perhaps like a vampire amidst the crowd, drawn to the hot blood of their faith.
https://syntheticzero.net/2016/11/08/is-grip-the-new-action-oriented-representation/
And the Trump illusion? It’s an illusion, right?
Watch Escape from LA. Welcome to the human race.
These guys are always trying to “unlock value”. What a ridiculous phrase. Blinded by pecuniary interest, techno-capitalists can’t see that their “value” is just expropriation, with ecological exhaustion built in long term. Great post.