Human Enhancement as Paradigmatic Crash Space
by rsbakker
A quick apology to all for my blog delinquency of late. The copy-edit of The Great Ordeal represents my last real crack at the book, so I’ve been avoiding the web like the plague.
Eric Schwitzgebel has posted on my “Crash Space” story over at The Splintered Mind, explaining why he’s been unable to get the story out of his head. Since I agree with pretty much everything he has to say, I thought it worthwhile augmenting his considerations with a critical point of view. In one sense, “Crash Space” narrativizes what is sometimes called a ‘wisdom of nature argument,’ the notion that enhancing natural systems can only undo the ancient evolutionary fine-tuning involved in, say, giving us the sociocognitive capacities we happen to possess. As Allen Buchanan (for my money, one of the most lucid combatants in the ‘enhancement wars’) would argue, such arguments rely on what he calls the “extreme connectedness thesis,” which is to say, the mistaken assumption that biological systems are so interdependent that knocking out one irrevocably degrades the capacities of others. As he points out (in Better than Human and elsewhere), nature is replete with modularity (functional self-sufficiencies), redundancy (backup systems), and ‘canalization’ (roughly, biological robustness), which, he thinks, does not so much moot wisdom of nature concerns as block their generalization: enhancements need to be considered on a case-by-case basis.
Although the Crash Space argument fits the wisdom of nature profile, it actually turns on the radically heuristic nature of human sociocognition, something far more specific than any ‘extreme connectedness assumption.’ Heuristic cognition is cognition that neglects information, which is to say, cognition that relies heavily on background invariances (things that can be taken for granted) to generate solutions. Once again, think of the recent Ashley Madison controversy, the way it was so easy to dupe so many men into thinking that real women were looking at their profiles. All the bots needed to do was hit the right cues, heuristic triggers that, ancestrally at least, reliably meant we were engaging fellow humans.
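To make the cue logic concrete, here is a minimal, hypothetical sketch (the cue names and threshold are invented for illustration, not drawn from Ashley Madison’s actual bots): a heuristic ‘human detector’ that neglects nearly everything available and keys on a handful of ancestrally reliable cues, which is precisely what makes it so easy for a bot to game.

```python
# Hypothetical illustration of cue-based (heuristic) social cognition.
# The judge neglects almost all available information and simply counts
# a few ancestrally reliable cues; a bot only needs to hit those cues to pass.
ANCESTRAL_CUES = {"uses_first_name", "responds_promptly", "expresses_interest"}

def seems_human(message_features, threshold=2):
    """Heuristic judgment: count matched cues, ignore all other information."""
    return len(set(message_features) & ANCESTRAL_CUES) >= threshold

# A bot deliberately hitting the triggers passes the heuristic test.
bot_message = {"uses_first_name", "expresses_interest", "responds_promptly"}
print(seems_human(bot_message))  # True: the cues alone carry the verdict
```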
Human sociocognitive capacities, which leverage cognitive miracles out of an astonishingly small number of cues (think of Sherry Turkle’s work on ‘Darwinian buttons,’ or Deirdre Barrett’s on ‘supernormal stimuli’), are so powerful simply because they turn so heavily on background invariances. Buchanan’s counterargument fails against the Crash Space model, I think, because it buys into the very same ‘one size fits all’ assumption he uses to critique bioconservatives like Francis Fukuyama. The more a cognitive system turns on cues, the more it turns on background invariances, and so the more vulnerable to technological transformation it becomes. The question isn’t ‘how evolution works in general,’ as Buchanan would have it, but how evolution worked in the particular case of human social cognition. The short story, I like to think, gives a vivid hypothetical in vivo look at the consequences of enhancing human cognitive capacities.
And as the Ashley Madison example suggests, the problem likely far outruns human enhancement.
I have a hard time seeing how canalization doesn’t cut both ways. The same retooled capacity that allows you to be moved by films also allowed Ted Bundy to amass a body count.
i.e., cuing off of facial signals
I don’t see how it applies either, simply because we’re talking about changing the traits themselves, not tracking their robustness across ecologies. But then I find it’s one of the more difficult evolutionary concepts to wrap my head around.
Isn’t canalization the real temporal core of ratcheting, how pathways become more and more etched with each instance of ‘use’?
In terms of the cognitive story, I think the canalization stories will dovetail pretty nicely with the neglect story you are putting together.
Genetic ‘inverse variance’ seems to have little relevance to Buchanan’s point, primarily because any discussion of enhancement is an intragenerational discussion. But for the same reason, I’m not sure how it stands one way or another vis-à-vis the background invariance account of heuristic cognition I’m offering. What do you have in mind?
“Crash Space” was an amazing short story. I think I learned and understood more about this blog, and the subject as a whole, in those 20 pages and brief afterword, than in all the hours I’ve spent bewildered here.
Thanks Wilshire. This is the reason why Eric thinks SF is becoming increasingly important to philosophy: the combination of conceptual gymnastics with scientific jargon is making it more and more inaccessible, more and more contingent on having read several million words to simply follow.
I actually have another short story, “The Dime Spared,” which is basically a walk-through of Blind Brain Theory, and I was thinking I would publish it here; it’s too didactic for any commercial market, I think. Be sure to let me know what you think when I do.
I worry about “Crash Space,” though. I’m so bad at bringing the right stories to the right audiences. If I had published it in Analog, say, I think it would have received a lot more attention.
It’s no sub for grasping the theoretical idiom in itself!
What does Allen have to say about species introduced into various countries’ ecologies, in regards to biological robustness? In broad overview it’s easy to say that various countries’ ecologies didn’t utterly crash over a few introduced species. Thus you could say they are robust. But at higher resolution, you can count the extinctions and the growing endangered list. Certainly, from a devil-may-care attitude about introducing new species to new countries in the past, we’ve as a whole gone ultra-conservative (and are still failing: fucking European wasps getting into Australia on industrial equipment).
The rigged in “Crash Space” kind of make me think of Rubik’s cubes, with TV tropes as the colours, but more important is how it all ends up at some universally jointed spindles on the inside. Whether fragments of humanity seep into those spindles or something else comes out to spin the colours is the question.
Buchanan’s point is primarily that things are never so simple as the ‘wisdom of nature’ worries assume. Catastrophic enhancements are possible, he would say, but they are impossible to predict in advance, and by no means discredit the promise of human enhancement as a whole.
That can’t be an accurate summary of what he’s said; it’s like saying the occasional exploding mine does not discredit crossing a minefield. Well, more like global nuke mines, really.
Would seem an interesting premise for a movie, though: perhaps some future setting where enhancements have an ‘undo’ built into them, and the movie shows people going off the deep end and being ‘undone’ to their former, unenhanced selves. Then it all goes wrong, of course, as movies love to toy with any such pivotal mechanism going haywire, conveying the idea of an irreversible fuck-up.
The people in the late 18th/early 19th century who thought to burn coal in a steam engine couldn’t have known about global warming. “Catastrophic enhancements” are much more likely when you enhance systems you don’t understand very well. They are also much more likely when they are being done by people who not only don’t know but don’t care. Modern capitalism seems to me to be built on the idea that the job of a manager is to capture the profits and dump the risk of loss and the risk of externalities on people outside the company, as if they were waste heat. When you combine the ignorance built into the universe with the willful ignorance of consequences built into capitalism you get a system that’s built to seek catastrophe.
Michael, yes, but that should be obvious to Buchanan. It suggests that what is really at stake for him is not in play in the conversation. The self-loathing hinted at in the Crash Space story might be the undercurrent. But it’s not being engaged by the discussion, if it is.
http://www.newyorker.com/magazine/2016/03/21/the-internet-of-us-and-the-end-of-facts
via
http://www.ufblog.net/quotable-148
That ‘end of facts’ article is a really good one, IMO. It seems language rapidly hits a ‘floaty’ stage where only insults have any traction. As she points out with her baseball bat example, the test of ‘truth’ ends up martial. Apparently when gramophones were first being sold, people would look behind curtains and search around the room for who was singing. It took the brute state of the world, practically a martial thing, to drum into heads that it was the machine that was producing the ‘singing’ (really it wasn’t, it was producing sound waves, but whatever).
I suspect that in the past, when death from war or polio or whatever other random shit was all too apparent, words were often enough tied to these very real and obvious negative events. Now I suspect the whole anti-vaccination thing is one example that stems, in part, from words losing their association with very real and obvious negative events. Just as money can lose its value, words are losing their meaning. Possibly why people are clustering around Game of Thrones: random death and shit suddenly infuses words with meaning again, if only from an artificial source.
So what we need is some death and mayhem and we’ll all start to make a lot more sense again! lol!
Christ, it’s like some bad Sisyphean joke: in trying to avoid death and mayhem by using words instead, the very means of doing so, words, is eroded of effectiveness.
But it feels like I’m walking right into lauding some author if they choose to use a story full of death and violence!!
What a bunch of self-congratulatory bullshit. ‘Pre-IT epistemic mediation simply has to be better because that’s the way I was raised, by gum, and I know which facts are facts!’
It’s these kinds of vanity interpretations of the transformation that need to be stamped out.
I registered the snub in her article, but I charitably read it as a reference to how paid research teams are on the decline because of the freebie internet. Instead you have a corporate entity (Google) that can tweak its algorithm toward what it would prefer you to see (or a mix of that and what the user would prefer to see in terms of opinion/avoiding dissenting views, as you’ve noted yourself). Besides that, implementing a biased algorithm is a one-time trial of conscience (and an abstract one at that), whereas a research team actually has to face multiple trials.
And one could hardly fault her for doomsaying and offering no suggestions of a solution.
Was that too charitable? Did I just project that onto her? What am I missing?