AI and the Coming Cognitive Ecological Collapse: A Reply to David Krakauer
by rsbakker
Thanks to Dirk and his tireless linking generosity, I caught “Will AI Harm Us?” in Nautilus by David Krakauer, the President of the Santa Fe Institute, on the potential dangers posed by AI on this side of the Singularity. According to Krakauer, the problem lies in the fact that AIs are competitive as opposed to complementary cognitive artifacts of the kind we have enjoyed until now. Complementary cognitive artifacts, devices ranging from mnemonics to astrolabes to mathematical notations, allow us to pull up the cognitive ladder behind us in some way—to somehow do without the tool. “In almost every use of an ancient cognitive artifact,” he writes, “after repeated practice and training, the artifact itself could be set aside and its mental simulacrum deployed in its place.”
Competitive cognitive artifacts, however, things like calculators, GPS units, and pretty much anything AI-ish, don’t let us kick away the ladder. Lose the artifact, and we lose the ability. As Krakauer writes:
In the case of competitive artifacts, when we are deprived of their use, we are no better than when we started. They are not coaches and teachers—they are serfs. We have created an artificial serf economy where incremental and competitive artificial intelligence both amplifies our productivity and threatens to diminish organic and complementary artificial intelligence…
So where complementary cognitive artifacts teach us how to fish, competitive cognitive artifacts simply deliver the fish, rendering us dependent. Krakauer’s complaint against AI, in other words, is the same as Plato’s complaint against writing, and I think it fares just as well argumentatively. As Socrates famously claims in The Phaedrus,
For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.
The problem with writing is that it is competitive precisely in Krakauer’s sense: it’s a ladder we cannot kick away. What Plato could not foresee, of course, was the way writing would fundamentally transform human cognitive ecology. He was a relic of the preliterate age, just as Krakauer (like us) is a relic of the pre-AI age. The problem for Krakauer, then, is that the distinction between complementary and competitive cognitive artifacts—the difference between things like mnemonics and things like writing—possesses no reliable evaluative force. All tools involve trade-offs. Since Krakauer has no way of knowing how AI will transform our cognitive ecology, he has no way of evaluating the kinds of trade-offs it will force upon us.
This is the problem with all ‘excess dependency arguments’ against technology, I think: they have no convincing way of assessing the kind of cognitive ecology that will result, aside from the fact that it involves dependencies. No one likes dependencies, ergo…
But I like to think I’ve figured the naturalistic riddle of cognition out,* and as a result I think I can make a pretty compelling case for why we should nevertheless accept that AI poses a very grave threat this side of the Singularity. The problem, in a nutshell, is that we are shallow information consumers, evolved to generate as much gene-promoting behaviour out of as little environmental information as possible. Human cognition relies on simple cues to draw very complex conclusions simply because it could always rely on adaptive correlations between those cues and the systems requiring solution: it could always depend on what might be called cognitive ecological stability.
Since our growing cognitive dependency on our technology always involves trade-offs, it should remain an important concern (as it clearly seems to be, given the endless stream of works devoted to the downside of this or that technology in this or that context). The dependency we really need to worry about, however, is our cognitive biological dependency on ancestral environmental correlations, simply because we have good reason to believe those cognitive ecologies will very soon cease to exist. Human cognition is thoroughly heuristic, which is to say, thoroughly dependent on cues reliably correlated to whatever environmental system requires solution. AI constitutes a particular threat because no form of human cognition is more heuristic, more cue dependent, than social cognition. Humans are very easily duped into anthropomorphizing given the barest cues, let alone processes possessing AI. It pays to remember the simplicity of the bots Ashley Madison used to gull male subscribers into thinking they were getting female nibbles.
And herein lies the rub: the environmental proliferation of AI means the fundamental transformation of our ancestral sociocognitive ecologies, from one where the cues we encounter are reliably correlated to systems we can in fact solve—namely, each other—into one where the cues we encounter are correlated to systems that cannot be fathomed, and the only soul solved is the consumer’s.
* Bakker, R. Scott. “On Alien Philosophy,” Journal of Consciousness Studies, forthcoming.
The most recent Conscious Entities has this link:
http://nautil.us/issue/40/learning/is-artificial-intelligence-permanently-inscrutable
which suggests that human beings already can’t understand how the neural networks to which we are entrusting important decisions make those decisions, and asks whether we should continue to entrust those decisions to those networks. My sense is that, corporations being what they are, we no longer have the option of demanding any sort of accountability from neural networks or the corporations that use them. To the extent that governments use them as well, they will accelerate the loss of accountability between governments and citizens. As has been pointed out elsewhere in this blog, neural networks are no better than the data on which they are trained. The racial and gender bias built into the training data will be built into the networks and given a spurious veneer of objectivity. This suggests the new technologies will work to preserve the old injustices.
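The point that a network can only echo its training data can be made concrete with a toy sketch. Everything below is hypothetical and synthetic: the “historical hiring” data, the group labels, and the frequency-table stand-in for a trained model are illustrative assumptions, not any real system. The sketch just shows that a predictor fit to biased decisions reproduces the bias as an ostensibly objective score:

```python
import random

random.seed(0)

# Hypothetical synthetic "historical hiring" data: candidates in groups
# A and B are equally likely to be qualified, but past human decisions
# hired qualified B candidates far less often than qualified A candidates.
def make_record():
    group = random.choice(["A", "B"])
    qualified = random.random() < 0.5
    hire_rate = {"A": 0.9, "B": 0.3}[group] if qualified else 0.05
    hired = random.random() < hire_rate
    return group, qualified, hired

data = [make_record() for _ in range(10000)]

# A minimal stand-in for a trained model: predict hire probability from
# observed frequencies, conditioned on group and qualification.
# It can only echo whatever pattern the data contains.
def fit(data):
    counts = {}
    for group, qualified, hired in data:
        key = (group, qualified)
        n, h = counts.get(key, (0, 0))
        counts[key] = (n + 1, h + int(hired))
    return {key: h / n for key, (n, h) in counts.items()}

model = fit(data)

# Equally qualified candidates receive very different predicted scores:
# the historical bias survives, now wearing the veneer of a "prediction".
print(round(model[("A", True)], 2))   # near 0.9
print(round(model[("B", True)], 2))   # near 0.3
```

The same mechanism operates, less visibly, in any model fit to decisions people actually made.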
As has been pointed out elsewhere in this blog, neural networks are no better than the data on which they are trained. The racial and gender bias built into the training data will be built into the networks and given a spurious veneer of objectivity.
The veneer provided by the reliance people have on machines in order to survive. “I can’t think of it as being wrong because that would shake the mountain under my feet, make it feel unsafe!”. Meanwhile we raise our silicon children badly.
Wonderful piece – and the topic of my next post, I’m pretty sure. I haven’t checked out the discussion on CE yet, but I hope someone’s mentioned that this inscrutability is a petri dish example of the inscrutability of intentional cognition in general. These systems are ecologically opportunistic (this is why I take the field effects bit to be such a strong indicator that consciousness involves EMFs): they make the most out of what they’ve got, leaping to conclusions on the basis of cues correlated to solvable systems. Minsky’s ‘suitcase words’ are such because they themselves are the product of the same cue-correlative dependency. ‘Interpretation’ is the term we use for understandings requiring the isolation of environmental cues reliably correlated to our targets. The worry that arises again and again isn’t simply that these systems are inscrutable, but that they are ecological, requiring contexts often possessing quirky features given quirks in the environments – data sets – used to train them. Inscrutability is a problem because it entails blindness to misapplications! This whole problematic, in fact, is a kind of analogue to the problem of intentional cognition, where the heuristic nature of metacognition stymies the interpretation of heuristic cognitive capacities.
I think it’s the adaptive that’s the issue – people can figure out Madison bots with time (as people are adaptive). We haven’t run into words which change themselves. Words that adapt to our adaptation to their initial configuration. Self-writing words. But hey, it’s like I’m quibbling that it’s a nuke, not TNT.
Seems a reasonable hypothesis, anyway – I wonder where people’s understanding of it will run out and they will reject the entire post for not understanding part of it.
(Yeah, I’m having a bit of a dig there, but it’s actually a very valid concern)
Anyway,
https://en.wikipedia.org/wiki/Morgan_(2016_film)
Let me tell you an anecdote that may or may not be relevant to what you are saying, but will hopefully prove amusing nonetheless.
A couple of months ago I was using the website OKCupid, because that’s how we do things nowadays. Although I may at one point or another have been duped by a bot, much more interesting was when I was messaged by a new account that communicated in a way that cued me to believe it was a bot. After further examination I came to the conclusion that I had been mistaken and there was an actual soul on the other side of the screen, but think about that… we live in an era where people can legitimately fail the Turing test for a non-trivial amount of time.
I wonder if the greatest danger in our new environment isn’t that we will be utterly exploitable by bots (that’s a given!), but rather that in the ensuing cognitive arms race we may become wholly skeptical of the humanity of those we interact with. A kind of solipsistic Blade Runner nightmare, where the Voight-Kampff test gets so difficult even some humans fail.
It’s only natural. As ‘bots get more clever we’ll have to ramp our ‘bot detectors up, generating more false positives. It may be that at some point we’ll have to assume that anyone with whom we are interacting other than in the flesh is a machine. On the other hand, it might be the case that humans will have to become more ‘bot-like to function in a ‘bot-dominated world. If so, the human-‘bot distinction might come to be moot.
Great story! I think it’s inevitable that at some point, humans will decide that humans are too superficial and self-obsessed to deal with. Look at what’s happening in Japan now with the ‘herbivore men’…
Since WW2, the number of wrenches thrown into “normal” human relationships is enormous. Reliable birth control. Antibiotics and vaccines that are effective against a wide range of STDs. Portable communications and social networking. In vitro fertilization. Re-writing of gender roles and expectations. Kinsey. Cheap and reliable paternity testing. Winner-take-all economics. Nuclear detente.
No wonder both genders feel like they are treated unfairly. The behavioral repertoire we inherited from our ancestors is completely maladapted to the current situation and will be more and more inadequate as technology continues to chug along.
Sometimes I wonder if we’re gonna end up like that one episode of Rick and Morty with the aliens who are sexually dimorphic to the point of looking like different species (spoiler: the women are enlightened scientists while the men are hyperviolent apelike brutes).
https://soundcloud.com/samharrisorg/40-information-complexity-stupidity-a-conversation-with-david-krakauer
[…] Bakker: […]
i get yer sense of why this is all getting worse but what i don’t get is why you think this hasn’t been happening for a very long time already with the reaches of technologies that poison our environs, do calculations beyond our capacities to check, cause superbug (or superweed, etc) mutations, or when we just try and be “informed” voters as if we could have any real sense of the complexity of politics/economics/etc? seems we’ve been in over our heads and out of touch since at least the industrial revolution, no?
I entirely agree. But no one really seems to understand the cognitive dimensions of ecology, so this is where I focus. I also think this is the greater ecological threat as well.
ah i see, a matter of emphasis then, thanks. time will tell i suppose which collapse will do us in first, but i like my odds that simple mechanics are more than enough..
https://www.propublica.org/article/california-drought-colorado-river-water-crisis-explained
http://www.ttbook.org/book/app-intelligence
http://rationallyspeakingpodcast.org/show/rs-167-samuel-arbesman-on-why-technology-is-becoming-too-com.html
Has anyone here read the book? I’d be interested in impressions. For me this misses the target: the complexities have always outrun us, which pretty clearly suggests that complexity itself isn’t the problem. My view puts that complexity in a cognitive ecological context.
haven’t read the book but my sense is that they are just now coming to terms with the fact that we are assembling and unleashing forces that we can’t even understand, let alone control. there is a great deal of delusional hubris at work in these complexity “scientists” and their mathematical neo-platonisms, and I think it has taken the invention of mathematical/computational black-boxes to get this on their radar.
[…] raises the likelihood, perhaps even the inevitability, that human social cognition will effectively breakdown altogether. The problem lies in the radically heuristic nature of the cognitive modes we use to understand […]
[…] https://rsbakker.wordpress.com/2016/09/11/ai-and-the-coming-cognitive-ecological-collapse-a-reply-to… via […]