The Posthuman as Evolution 3.0
by rsbakker
Aphorism of the Day: Knowing that you know that I know that you know, we should perhaps, you know, spark a doob and like, call the whole thing off.
So for years now I’ve had this pet way of understanding evolution in terms of effect feedback (EF) mechanisms, structures whose functions produce effects that alter the original structure. Morphological effect feedback mechanisms started the show: DNA and reproductive mutation (and other mechanisms) allowed adaptive, informatic reorganization according to the environmental effectiveness of various morphological outputs. Life’s great invention, as they say, was death.
This original EF process was slow, and adaptive reorganization was transgenerational. At a certain point, however, morphological outputs became sophisticated enough to enable a secondary, intragenerational EF process, what might be called behavioural effect feedback. At this level, the central nervous system, rather than DNA, was the site of adaptive reorganization, producing behavioural outputs that are selected or extinguished according to their effectiveness in situ.
For whatever reason, I decided to plug the notion of the posthuman into this framework the other day. The idea was that the evolution from Morphological EF to Behavioural EF follows a predictable course, one that, given the proper analysis, could possibly tell us what to expect from the posthuman. The question I had in my head when I began this was whether we were groping our way to some entirely new EF platform, something that could effect adaptive, informatic reorganization beyond morphology and behaviour.
First, consider some of the key differences between the processes:
Morphological EF is transgenerational, whereas Behavioural EF is circumstantial – as I mentioned above. Adaptive informatic reorganization is therefore periodic and inflexible in the former case, and relatively continuous and flexible in the latter. In other words, morphology is circumstantially static, while behaviour is circumstantially plastic.
Morphological EF operates as a fundamental physiological generative (in the case of the brain) and performative (in the case of the body) constraint on Behavioural EF. Our brains limit the behaviours we can conceive, and our bodies limit the behaviours we can perform.
Morphologies and their generators (genetic codes) are functionally inseparable, while behaviours and their generators (brains) are functionally separable. Behaviours are disposable.
Defined in these terms, the posthuman is simply the point where neural adaptive reorganization generates behaviours (in this case, tool-making) such that morphological EF ceases to be a periodic and inflexible physiological generative and performative constraint on behavioural EF. Put differently, the posthuman is the point where morphology becomes circumstantially plastic. You could say tools, which allow us to circumvent morphological constraints on behaviour, have already accomplished this. Spades make for deeper ditches. Writing makes for bottomless memories. But tool-use is clearly a transitional step, a way of accessorizing a morphology that itself remains circumstantially static. The posthuman is the point where we put our body on the lathe (with the rest of our tools).
In a strange, teleonomic sense, you could say that the process is one of effect feedback bootstrapping, where behaviour revolutionizes morphology, which revolutionizes behaviour, which revolutionizes morphology, and so on. We are not so much witnessing the collapse of morphology into behaviour as the acceleration of the circuit between the two approaching some kind of asymptotic limit that we cannot imagine. What happens when the mouth of behaviour, after digesting the tail and spine of morphology, finally consumes the head?
What’s at stake, in other words, is nothing other than the fundamental EF structure of life itself. It makes my head spin, trying to fathom what might arise in its place.
Some more crazy thoughts falling out of this:
1) The posthuman is clearly an evolutionary event. We just need to switch to the register of information to see this. We’re accustomed to being told that dramatic evolutionary changes outrun our human frame of reference, which is just another way of saying that we generally think of evolution as something that doesn’t touch us. This was why, I think, I’ve been thinking the posthuman by analogy to the Enlightenment, which is to say, as primarily a cultural event distinguished by a certain breakdown in material constraints. No longer. Now I see it as an evolutionary event literally on par with the development of Morphological and Behavioural EF. As perhaps I should have all along, given that posthuman enthusiasts like Kurzweil go on and on about the death of death, which is to say, the obsolescence of a fundamental evolutionary invention.
2) The posthuman is not a human event. We may be the thin edge of the wedge, but every great transformation in evolution drags the whole biosphere in tow. The posthuman is arguably more profound than the development of multicellular life.
3) The posthuman, therefore, need not directly involve us. AI could be the primary vehicle.
4) Calling our descendants ‘transhuman’ makes even less sense than calling birds ‘transdinosaurs.’
5) It reveals posthuman optimism for the wishful thinking it is. If this transformation doesn’t warrant existential alarm, what on earth does?
I wonder if upgrading your view of evolution from merely adaptation would really blow your mind, and what the downstream effects might be. I did my PhD in molecular evolution and phylogenetics, and the sheer complexity that can originate from neutral processes in the absence of selection is also mind-boggling.
Adaptation is a granular level of analysis to be sure. My fear is that cranking the resolution to the molecular level would make the ‘big picture’ very difficult to manage, but that’s just my fear. I don’t know one way or another. How would you tweak this picture, Dan?
What Dan is talking about is complexity increase at the molecular level without increased adaptation. The model usually goes something like this: pretend there’s a signaling pathway that goes A->B->C. Somewhere along the line, A has an amino acid substitution that allows it to be phosphorylated by D without changing its effect (a “mostly neutral” change). Eventually, however, neutral evolution can act on B in such a way that it can no longer bind to A unless A is phosphorylated, meaning that the pathway has now evolved to be D->A->B->C. It doesn’t do anything new, but the increase in complexity allows it to evolve novel functions in the future (for example, by allowing one further degree of freedom in signaling).
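For concreteness, the A->B->C story above can be sketched in a few lines of code. The representation (a dependency map over protein names, with free-floating kinases like D treated as "background") is entirely my own toy illustration, not a model from the literature:

```python
def fires(chain, deps, background):
    """Walk the signaling chain in order; each protein activates only if
    every upstream protein it depends on is already active. Returns True
    if the whole chain activates (i.e. the pathway still works)."""
    active = set(background)  # free-floating kinases, e.g. D, start active
    for protein in chain:
        if deps.get(protein, set()).issubset(active):
            active.add(protein)
        else:
            return False
    return True

chain = ["A", "B", "C"]

# Ancestral pathway: B needs A, C needs B. Works with or without kinase D.
ancestral = {"B": {"A"}, "C": {"B"}}
assert fires(chain, ancestral, background=set())    # D absent: still fires
assert fires(chain, ancestral, background={"D"})    # D present: still fires

# Step 1 (neutral): A gains a site phosphorylated by D -- no new
# dependency yet, so nothing observable changes.
# Step 2 (also neutral, given D is around): drift in B means it now binds
# only phosphorylated A, so A effectively *requires* D.
derived = {"A": {"D"}, "B": {"A"}, "C": {"B"}}
assert fires(chain, derived, background={"D"})      # same output as ever...
assert not fires(chain, derived, background=set())  # ...but D is locked in
```

Nothing about the pathway's output changed at any step, yet the derived version is strictly more complex and D has become indispensable — which is the whole point.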
Cultural evolution has a parallel: sometimes we tack on bureaucracy and it doesn’t actually do anything, but along the way other institutions become dependent on the original and the original can’t be eliminated without massive restructuring of the whole system.
The question becomes: if suddenly the loop between environment, behavior, morphology and culture gets super-tight, can we expect something analogous to these neutral changes to predominate? Probably.
What any of this will look like is anyone’s guess. Evolution is often driven by arms races, and the arms races of the future will probably be informatic, so… radical rewriting of one’s neuroarchitecture periodically to prevent instrumentalization by competing parasitic memeplexes? My money is on the lobsters.
http://books.google.com/books/about/Accelerando.html?id=W2KWzLYIO-AC
That’s really what’s at stake here. The collapse between structure and function. The funny thing is that you can look at intentionality as the illusion of this collapse.
Well it wouldn’t need to be tweaked to the molecular level. Evolution is driven by the molecular underneath it all, the engine as it were, but neutrality applies right on up the conceptual levels. I always thought it best to think of evolution less as organisms adapting in any sort of optimization sense and more in the sense of: if it doesn’t outright kill you, the change (mutation) will probably be accepted. It totally opens up the possibility landscape IMHO. I think the change is more subtle in its effects than radical, but conceptually it is cleaner.
In terms of the perspective you presented, it might not tweak the picture radically; after all, the premise seems to rest on the break where underlying genetics are no longer indispensable, because we have the capability of swapping them out ourselves. From an evolutionary perspective, genetic engineering of ourselves makes the entire fitness/possibility landscape of evolutionary change available or accessible, something that isn’t likely to be true otherwise.
Speak of the Devil: http://www.cam.ac.uk/research/news/humanitys-last-invention-and-our-uncertain-future/
Birds evolved from dinosaurs…..you really are out there, Bakker!
Anyhoo- I have a question…how come the Dunyain (or Kellhus actually, since he has access to resources, smart people, and labor) haven’t invented any forms of technology yet? I would think being so smart they would have come up with electricity or something by now. …..Or will the series end with Kellhus saving the world, dropping some tekne on them, and the epilogue takes place one thousand years in the future on Earwa in flying cars or a moon base and everything is based on a Kellhus culture? That would be weird. Of all the possible theories that I’ve thought up on where this series could possibly go, that is one of my crazier, thinking-too-far-ahead type ones.
Kellhus invented the siege engines used to assault Sakarpus.
Also, although electricity was known in the time of the Greeks, it took some 2,000 years of human cultural evolution to harness it (the ancients may even have been able to use electroplating*, but without any knowledge of the underlying chemistry and physics, they would not have been able to generalize). While Kellhus may know a lot, he probably can’t simply intuit electrochemical theory. The existence of sorcery also makes it a bit of an inefficient proposition, seeing as the Mandate is more destructive than even the cannons of the Napoleonic wars.
*http://en.wikipedia.org/wiki/Baghdad_Battery
That’s a good point about the sorcery. While certain real-life examples of technology in antiquity exist, and they took a loooong time to really get perfected, that can be explained straight away by the lack of Dunyain super-intellect in real life. But in the book, Kellhus is just a pinch more advanced than the average man. Or he is at least incredibly trained to harness his brain power at a more efficient level. In any case, I just assumed he was so caught up in perfecting his own sorcery, running/building the empire, and then marching the Ordeal North that he wouldn’t have had a chance to poke around creating and inventing things.
Birds evolved from dinosaurs…
And the dinosaurs are dead.
Though after being around for a much longer time period than we have.
You have to remember the ‘invisibility of ignorance effect,’ Justin. It doesn’t matter how super-intelligent you are, short of necessity, it’s not going to occur to you to research any single thing, let alone invent.
The Dunyain as a whole seem uninterested in technological advancement, focusing on selective breeding and the neuroscience side of things instead. Humans successfully practiced selective breeding of livestock and support animals millennia before coming to grips with industrial technology, and since the Dunyain utilize the good ole brute force method in learning what part of the brain does what, they’ve had little need to advance even that sort of tech.
Well, the necessity thing is even worse when you so emphasize the shortest path. It’s only dumbasses who travel to their destination in arcs who experience something more. Something beyond necessity?
I wonder which produces more false starts by percentage, evolution as a series of random mutations selected for by reproduction, or evolution “guided” by the evolving life forms themselves in a fit of certainty that they know what they are doing?
Have you ever read Dresden Codak? It’s a webcomic that tackles trans/posthumanism as its primary theme. Aaron Diaz is the artist, and I suspect he would be lumped in with Kurzweil and others you’ve mentioned in that he is very much a trans/posthumanism optimist. But though he reaches different conclusions, he seems to think along very similar lines as this post, if I’m interpreting his art correctly.
Lastly, in a true tangent, I must admit that my engineering degree wanted to keep replacing “static” with “elastic” since it was being coupled with “plastic” as a descriptor and since those are the two types of deformation a material can undergo. “Static” works better for your analogy, but it got me thinking about a possible substitution and the change of meaning it would imply. I still think “elastic” could work for morphological EF. You can try and change it all you like, but it will always revert back to what it is.
…so, you’ve dialed cyberpunk up a notch? 😉
Transdinosaurs is a laugh. I can even see those damned feathered reptiles planning their metamorphosis…
The breakdown seems to hinge on recognition – not specifically or strictly social – and the coherency consciousness experiences.
I think, from discussions on TPB, it’s fully expected that that sense of coherency will persist through our tinkerings, mirroring the bloating or shrinking sufficiency of awareness resulting from selective loss of function.
Just rapping but the thought strikes me aside that as recognition seems fulcrum – for this metaphor, let’s say, from a biological perspective – any selective augmentation correlates to a greater/diminished effect feedback.*
Regardless, isn’t there a threshold? While we might create any number of strange neurological creatures, augment ourselves in unimaginable ways, “we,” “I,” or any “sufficiency awareness,” exist, at least in part, as the portal between the trillions of connections within and the billions of connections without. While still potentially horrible and terrifying, there are innumerable interacting complexities resulting in our stable form – it suggests that there can only be so many frivolous, or confused, augmentations outside an EF circuit before stable matter simply rejects us as anomalous.
Just thoughts.
I think I share a similar thought
such that morphological EF ceases to be a periodic and inflexible physiological generative and performative constraint on behavioural EF.
The final morphology is that if you’re dead, you’re dead. The cyclical pattern no longer repeats – this is a dead parrot!
I idly wonder if that’s part of the problem in arguing with a transhumanist – they know this at a certain level, so the quoted statement above gets rejected. Which means no other argument stands in the way of the ‘better and brighter’ hope they have. I think Cordelia Fine’s book had a bit on how we are ruthless prosecutors of others’ arguments. Certainly here I think the statement is actually false. Sure, with a bit of charitable reading, trying to work the flawed statement into something workable, it would lend itself to an argument – but that’s why it’s called ‘ruthless’ and not ‘charitable’. Or perhaps I’ve just read too much about Kellhus?
Think about the drive to create anthropomorphic AI, intelligence that we can recognize as human. Call this the ‘H-configuration.’ This configuration is simply one among infinitely many possibilities. Call all the functional possible configurations ‘F-configurations,’ and all the dysfunctional systems ‘D-configurations.’ You’re right that there are far more D’s than F’s, but there’s still an infinite number of F’s… and only one H.
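The counting intuition here can be played with in a toy simulation. Everything in it is my own illustration: configurations become bit strings, “functional” becomes an arbitrary threshold criterion, and H is one arbitrary string singled out — the point is only the relative rarity of F’s among D’s, and of H among F’s:

```python
import random

random.seed(0)

N = 16            # length of a toy "configuration"
H = (1,) * N      # the single human-like H-configuration (arbitrary pick)

def functional(cfg):
    # Stand-in criterion for an F-configuration: enough working components.
    # Purely an assumption for the sake of the toy model.
    return sum(cfg) >= 12

samples = [tuple(random.randint(0, 1) for _ in range(N))
           for _ in range(100_000)]
f_count = sum(functional(c) for c in samples)   # rare (a few percent)
h_count = sum(c == H for c in samples)          # rarer still (~1 in 65,536)
print(f"functional F-configurations: {f_count} of 100000; exactly H: {h_count}")
```

Random sampling turns up plenty of D’s, a thin scattering of F’s, and almost never the one H — which is roughly the shape of the argument.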
I’m not arguing against that, I’m awkwardly proposing why pro-transhumanist people might not absorb the idea that morphology ceases to be a restraint on behaviour. Maybe it’s because the ruthless prosecutor of ideas (others’ ideas) in them sees that as false (correctly, in its literal state – like a computer reads a program absolutely literally, and if you have even a semicolon out of place, it ditches the whole thing as an error). And just dismisses it, instead of being charitable and thinking of all the scuttley, slithery F-configurations possible.
I’m not sure how to describe the process of not just taking the statement purely as is, but instead elaborating on it. Certainly in terms of strict wording, doing this means not listening to the wording given by the other person (even a reader fallacy). However, it may actually match the intent of the speaker. An error upon another’s prior error that actually matches the initial, unspoken intent of that prior speaker. And the idea is that the pro-transhumanists aren’t doing this complicated part of communication. Every semicolon must be in place. And you can’t say morphology won’t affect behaviour, because it’s not true, no matter how many skittering, snaking F-configurations there are and only one H-configuration. To read it that way would actually involve a reader fallacy in regard to the text taken in a technical sense, even as that fallacy might match the speaker’s intent.
And okay, I swear while that might seem a tangled ball of string, that’s the untangled version!! Alternative conversation option to escape that: How’s call of duty going!? lol 😉
a little off topic to this particular thread, but still, it’s always talking about consciousness, right?
http://io9.com/5963143/the-19th+century-psychologist-who-tried-to-illustrate-consciousness-with-geometric-shapes
I don’t understand how this essay relates to optimism (your point 5). There is no necessary logical connection between “things will get pretty weird” and “things will be shit, therefore no optimism”.
It seems traditionalistic to bury optimism at the prospect of change. I don’t see an existential crisis necessarily, but a re-invention of existentiality.
Your argument works as well against optimism as it does pessimism. Otherwise, I’m curious as to why you didn’t answer the question: If what I describe isn’t cause for alarm, then what is?
I assume you understand the existential crisis you’re speaking of to affect everyone, instead of just “me” (in an existentialist sense).
How could we (humans 1.0) be able to figure out what a human 2.0 might regard as an existential crisis, though? Having established that neither optimism nor pessimism play a part in this, why should we assume any other stance than interest?
You ask: what might be a cause for alarm? Well, not being able to provide for your family, for example. How is that different from the “end of mankind”? In one case you make your pessimism the yardstick for every possible entity, which might have existential crises, ever to walk the face of the earth. In the second case you start with your Dasein, the only possible vantage point you can ever assume.
What you describe is not cause for alarm, because alarm would require either traditionalism (I don’t want things to change) or normativism (things, how they are, are right, no matter for whom). Both viewpoints try to judge something that is not for them to judge.
Why should I care what a ‘human 2.0’ worries about? To be more precise, since we’re not talking about humans anymore, why should I care what ‘X’ worries about?
I’m afraid your reasoning in the second paragraph escapes me. Are you suggesting the ‘fate of humanity’ is not something rational to worry about? Are you saying the posthuman isn’t something humans should be alarmed by? I find that claim almost too extraordinary to credit.
I’m not committed to either traditionalism or normativism (you’re new to the blog, I take it).
I am not saying that the ‘fate of humanity’ is not rational to worry about. I am saying that it is not something we can rationally worry about, though.
What exactly is the nature of your posthumanist alarm?
If it is about the future of mankind, let’s examine what, following your essay, might cause alarm, and how we might rationally worry about it.
First, what do we need to rationally worry? Facts or compelling arguments concerning the object of our worry (‘fate of humanity’).
Do we have those? No.
Why? Because the problem can only be viewed from our POV. Worrying -as humans- about the possibility of changing into something else just does not work. That is because our POV is limited to how we function. We cannot imagine what it is like to function differently. And worrying about functioning differently without being able to know, even intuitively, how that might be, just does not seem logical.
We are left with anxiety about posthumanism, without any way to rationally debate its meaning or consequences. The nature of the posthumanist alarm, then, seems to be anxiety.
Of course it is rational to debate the future of mankind, if only we could have such a debate.
Am I saying the posthuman isn’t something humans should be alarmed by?
Well, alarmed in the way of keeping an eye on it? Sure. Alarmed as pessimistic? Personally, a little. Philosophically, I just don’t see how that might turn into a fertile debate.
(I am indeed rather new to the blog. If my English sounds like I am trying to be anything else than a pleasant contributor, please excuse me. I hear there have been people on this blog who came here just to polemicise. It is rather hard to find the right tone and nuance in a foreign language.)
No need to worry about tone, Phillip! Half the time I have only a moment or two to pound out an answer, so I apologize if I sound abrupt.
David Roden makes the same argument you’re making. I just think the reasoning is too fine. We humans worry about the unknown for good reason, not the least being the fact that our brains are (probably) prediction-error correction machines. Being unable to predict kills, which is presumably why we have the paranoid brains we have. This is where I argue against David: the very inability to predict, the ‘X’ that fails to provide warrant for our fears, is itself the warrant to fear. All we can say about this X is that humanity as we know and conceive it will not come out the other side. We are, after all, talking about the collapse of predictability! But even if I weaken my claim, the threat is still existential, and still more than justifies alarm.
Do you see how this works? The argument that alarm requires explicit knowledge of some ‘object of alarm’ to be justified simply misconstrues the nature of alarm.
My pessimism about the future is actually quite separate from the issues in this post. Check out: https://rsbakker.wordpress.com/2012/10/29/less-human-than-human-the-cyborg-fantasy-versus-the-neuroscientific-real/
Or even better, my novel, Neuropath.
Phillip,
‘We cannot imagine what it is like to function differently. And worrying about functioning differently…’
You’re basically saying we’d be ‘functioning differently’ – even so differently that we can’t imagine it – but it’d still be us, even as it escapes the bounds of our capacity to imagine.
That’s all that’s coming across to you – that we would be functioning differently.
It’s an intuitive dualism – there’s A: Us and B: how we function.
So what happens if how you function IS what we are? Call it H function. And you change that function to Z function. How is H function still there when it’s now Z function?
Intuitive dualism suggests to us that we’d still be there, just thinking differently somehow. That somehow H function still sits in the background there, still present, but now it has a snazzy new Z function coat on.
If that’s not convincing, what if we dial it back to the human, and make it a matter of whom? If we take your entire brain and rewrite it to match the brain recording of some guy called Steve, do you think you’ll still be there somewhere, but thinking like Steve now?
Or will you be absent. Dead?
Phillip, worrying over the unprecedented is often worth doing. If the inevitable march of technology allows us to transform ourselves into truly “self-improving” (if admittedly organic… for now) machines, we’ll be creating the very self-improving AI people speculate about, only with our own brains as the actual FOUNDATION, rather than simply the template. And if all those improvements are accomplished independent of one another, we won’t even be a single post-human species (something Scott has blogged about extensively before), but tens of thousands of species of just a few members (or only one) each.
To come at it from another angle, try to envision it from the perspective of a person who cannot afford said improvements, and finds themselves falling further and further behind, unable to hold a job because improved transhumans who think and work faster are snapping them all up. That’s a very specific (and admittedly emotionally manipulative) case, but it’s something that should be relatable even if you can’t see past the fog of uncertainty that comes with any topic like this.
Well Scott. Here you go. Chew on this one. I’m still reeling.
http://www.sciencemag.org/content/338/6111/1202
Metzinger actually just sent me the very same article this morning! It’s very cool, and definitely something I’m going to work into my introspection paper. It feels lately, anyway, that the ‘curse of dimensionality’ research is pressing a lot of people onto my ledge, getting them thinking about varieties of information loss necessitated by the structure of the brain. Questioning the ways these losses impact the system’s ability to model itself is just a short step away.
BBT is going to be big, Jorge.
Got caught out on my narcissistic myopia there! I’d been boning up on the curse of dimensionality, so I assumed that was the reason both you and M had sent the paper. I’m curious why you think this is so huge. For me, it’s the multitasking that’s the most impressive – apart from the fact that it only took them 2.5 million simulated neurons. Aside from integrating cognitive skills, is there anything SPAUN has done that hasn’t been done elsewhere?
Well, speaking of neuron count, there’s a spider species (Portia labiata) that performs many typically “mammalian” tasks (including some fairly sophisticated planning and de novo learning of new behaviors) with about 600,000 (no zeroes missed) neurons or so.
Dunno if the portia spider can multitask though…
But… but there were trans-dinosaurs between “dinosaurs” and “birds”, we even found some of them in the fossil record!
The “transhumans” (and I do think that First has a point when he says that “we’re the transhumans already”) are essentially urvogels, a transient stage whose representatives will baffle and amuse our posthuman robotic descendants with the almost comical trait mish-mash of their fossilized cadavers.