Human Enhancement as Paradigmatic Crash Space
A quick apology to all for my blog delinquency of late. The copy-edit of The Great Ordeal represents my last real crack at the book, so I’ve been avoiding the web like the plague.
Eric Schwitzgebel has posted on my “Crash Space” story over at The Splintered Mind, explaining why he’s been unable to get the story out of his head. Since I agree with pretty much everything he has to say, I thought it worthwhile augmenting his considerations with a critical point of view. In one sense, “Crash Space” narrativizes what is sometimes called a ‘wisdom of nature argument,’ the notion that enhancing natural systems can only undo the ancient, evolutionary fine-tuning involved in, say, giving us the sociocognitive capacities we happen to possess. As Allen Buchanan (for my money, one of the most lucid combatants in the ‘enhancement wars’) would argue, such arguments rely on what he calls the “extreme connectedness thesis,” which is to say, the mistaken assumption that biological systems are so interdependent that knocking out one irrevocably degrades the capacities of others. As he points out (in Better than Human and elsewhere), nature is replete with modularity (functional self-sufficiencies), redundancy (backup systems), and ‘canalisation’ (roughly, biological robustness), which, he thinks, does not so much moot wisdom of nature concerns as block their generalization: enhancements need to be considered on a case-by-case basis.
Although the Crash Space argument fits the wisdom of nature profile, it actually turns on the radically heuristic nature of human sociocognition, something far more specific than any ‘extreme connectedness assumption.’ Heuristic cognition is cognition that neglects information, which is to say, cognition that relies heavily on background invariances, things that can be taken for granted, to generate solutions. Once again, think of the recent Ashley Madison controversy, the way it was so easy to dupe so many men into thinking that real women were looking at their profiles. All the bots needed to do was hit the right cues, heuristic triggers that, ancestrally at least, reliably meant we were engaging fellow humans.
Human sociocognitive capacities, which leverage cognitive miracles out of an astonishingly small number of cues (think of Sherry Turkle’s work on ‘Darwinian buttons,’ or Deirdre Barrett’s on ‘supernormal stimuli’), are so powerful simply because they turn so heavily on background invariances. Buchanan’s counterargument fails against the Crash Space model, I think, for buying into the very same ‘one size fits all’ assumption he uses to critique bioconservatives like Francis Fukuyama. The more a cognitive system turns on cues, the more it turns on background invariances, and the more vulnerable to technological transformation it becomes. The question isn’t ‘how evolution works in general,’ as Buchanan would have it, but how evolution worked in the particular case of human social cognition. The short story, I like to think, gives a vivid hypothetical in vivo look at the consequences of enhancing human cognitive capacities.
And as the Ashley Madison example suggests, the problem likely far outruns the question of human enhancement.