Anarcho-ecologies and the Problem of Transhumanism
by rsbakker
So a couple weeks back I posed the Augmentation Paradox:
The more you ‘improve’ some ancestral cognitive capacity, the more you degrade all ancestral cognitive capacities turning on the ancestral form of that cognitive capacity.
I’ve been debating this for several days now (primarily with David Roden, Steve Fuller, Rick Searle, and others over at Enemy Industry), as well as scribbling down thoughts on my own. One of the ideas falling out of these exchanges and ruminations is something that might be called ‘anarcho-ecology.’
Let’s define an ‘anarcho-ecology’ as an ecology too variable to permit human heuristic cognition. We know that such an ecology is possible because we know that heuristics solve systems via cues, features possessing stable differential relations to those systems. The reliability of these cues depends on the stability of those differential relations, which in turn depends on the invariance of the systems to be solved. This simply unpacks the platitude that we are adapted to the world the way it is (or, to be more precise and apropos this post, the way it was). Anarcho-ecologies arise when systems, either targeted or targeting, begin changing so rapidly that ‘cuing,’ the process of forming stable differential relations to the target systems, becomes infeasible. They are problem-solving domains where crash space has become absolute.
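To make the cue picture concrete, here is a minimal toy sketch (Python; every name in it is hypothetical, invented for illustration, not anything from the post): a solver that simply trusts a cheap cue does well so long as the cue’s differential relation to the hidden state of the target system stays stable, and degrades toward chance as that relation dissolves.

```python
import random

# Toy model of cue-based heuristic cognition (all names hypothetical).
# A 'system' has a hidden state; a 'cue' is a cheap observable that tracks
# that state only while the ecology remains stable.

def observe_system(cue_reliability):
    """The hidden state drives the cue with probability cue_reliability."""
    state = random.choice([True, False])
    cue = state if random.random() < cue_reliability else not state
    return state, cue

def heuristic_accuracy(cue_reliability, trials=10_000):
    """A solver that simply trusts the cue, never modelling the system."""
    hits = sum(state == cue
               for state, cue in (observe_system(cue_reliability)
                                  for _ in range(trials)))
    return hits / trials

# Stable ecology: the cue/state differential relation holds.
print(heuristic_accuracy(0.95))  # ~0.95: cheap cognition succeeds

# Anarcho-ecology: systems change too fast for stable cuing.
print(heuristic_accuracy(0.5))   # ~0.5: the heuristic is a coin flip
```

The point of the sketch is only that nothing in the heuristic itself fails; it is the background that fails, and the heuristic has no way of registering this from the inside.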
I propose that Transhumanism, understood as “an international cultural and intellectual movement with an eventual goal of fundamentally transforming the human condition by developing and making widely available technologies to greatly enhance human intellectual, physical, and psychological capacities,” is actually promoting the creation of anarcho-ecologies, and as such, the eventual obsolescence of human heuristic cognition. And since intentional cognition constitutes a paradigmatic form of human heuristic cognition, this amounts to saying that Transhumanism is committed to what I’ve been calling the Semantic Apocalypse.
The argument, as I’ve been posing it, looks like this:
1) Heuristic cognition depends on stable, taken-for-granted backgrounds.
2) Intentional cognition is heuristic cognition.
/3) Intentional cognition depends on stable, taken-for-granted backgrounds.
4) Transhumanism entails the continual transformation of stable, taken-for-granted backgrounds.
/5) Transhumanism entails the collapse of intentional cognition.
Let’s call this the ‘Anarcho-ecological Argument Against Transhumanism,’ or AAAT.
Now at first blush, I’m sure this argument must seem preposterous, but I assure you, it’s stone-cold serious. So long as the reliability of intentional cognition turns on invariant, ancestral backgrounds, transformations in those backgrounds will compromise intentional cognition. Consider ants as a low-dimensional analogue. As a eusocial species they form ‘super-organisms,’ collectives exhibiting ‘swarm intelligence,’ where simple patterns of interaction between individuals (chemical, acoustic, and tactile communicative protocols) scale to produce collective solutions to what seem to be complex problems. Now if every ant were suddenly given idiosyncratic communicative protocols (different chemicals, different sounds, different sensitivities) it seems rather obvious that the colony would simply collapse. Lacking any intrasystematic cohesion, it just would not be able to resolve any problems.
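A crude way to see the collapse is to simulate it. The following sketch (hypothetical; nothing in it comes from the post) models recruitment as reading trail markers deposited by others: with a single shared signalling channel nearly every marker is legible, while with one private channel per ant mutual legibility, and hence any collective solution, all but vanishes.

```python
import random

# Toy swarm sketch (all names hypothetical). An ant 'succeeds' when the
# marker at its location was deposited in a channel it can read.

def forage(n_ants=50, n_channels=1, steps=10_000):
    # Each ant signals on one channel; n_channels=1 means a shared protocol,
    # n_channels=n_ants means every ant has a private, idiosyncratic one.
    channels = [i % n_channels for i in range(n_ants)]
    trail = {}   # location -> channel of the last marker deposited there
    legible = 0
    for _ in range(steps):
        channel = random.choice(channels)   # a random ant takes a step
        loc = random.randrange(20)
        if trail.get(loc) == channel:       # marker legible: recruitment works
            legible += 1
        trail[loc] = channel                # overwrite with this ant's marker
    return legible / steps

print(forage(n_channels=1))    # shared protocol: legibility near 1.0
print(forage(n_channels=50))   # idiosyncratic protocols: ~0.02, cohesion gone
```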
Now of course humans, though arguably eusocial, are nowhere near so simple as ants. Human soldiers don’t automatically pace out pheromone trails; they have to be ‘convinced’ that this is what they ‘should’ do. Where ants need only cue one another, humans need to both cue and decode each other. Individual humans, unlike ants, possess ‘autonomy.’ And this disanalogy between ants and humans, I think, handily isolates why most people simply assume that AAAT has to be wrong, that it is obviously too ‘reductive’ in some way. They understand the ‘cue’ part of the argument, appreciate the way changing those systems that intentional cognition takes for granted will transform ancestrally reliable cues into miscues. It’s the decode part, they think, that saves the transhumanist day. We humans, unlike ants, are not passive consumers of our social environments. Miscues can be identified, diagnosed, and then overcome, precisely because we are autonomous.
So much for AAAT.
Except that it entirely agrees. The argument says nothing about the possibility of somehow decoding intentional miscues (like those we witnessed in spectacular fashion with Ashley Madison’s use of bots to simulate interested women), it only claims that such decoding will not involve intentional cognition, insofar as intentional cognition is heuristic cognition, and heuristic cognition requires invariant backgrounds, stable ecologies. Since Transhumanism does not endorse any coercive, collective augmentations of human capacities, Transhumanists generally see augmentation in consumer terms, something that individuals are free to choose or to eschew given the resources at their disposal. Not only will individuals be continually transforming their capacities, they will be doing so idiomatically. The invariant background that intentional cognition is so exquisitely adapted to exploit will become a supermarket of endless enhancement possibilities–or so they hope. And as that happens, intentional cognition will become increasingly unreliable, and ultimately, obsolete.
To return to our ant analogy, then, we can see that it’s not simply a matter of humans possessing autonomy (however this is defined). Humans, like ants, possess specifically social adaptations, entirely unconscious sensitivities to cues provided by others. We generally ‘solve’ one another effortlessly and automatically, and only turn to ‘decoding,’ deliberative problem-solving, when these reflexive forms of cognition let us down. The fact is, decoding is metabolically expensive, and we tend to avoid it as often as we can. Even more significantly (but not surprisingly), we tend to regard instances of decoding as successful to the extent that we can once again resume relying on our thoughtless social reflexes. This is why, despite whatever ‘autonomy’ we might possess, we remain ant-like, blind problem-solvers, in this respect. We have literally evolved to participate in co-dependent communities, to cooperate when cooperation served our ancestors, to compete when competition served our ancestors, to condemn when condemnation served our ancestors, and so on. We do these things automatically, without ‘decoding,’ simply because they worked well enough in the past, given the kinds of systems that required solving (meaning others, even ourselves). We take their solving power for granted.
Humans, for all their vaunted ‘autonomy,’ remain social animals, biologically designed to take advantage of what we are without having to know what we are. This is the design–the one that allows us to blindly solve our social environments–that Transhumanism actively wants to render obsolete.
But before you shout, ‘Good riddance!’ it’s worth remembering that this also happens to be the design upon which all discourse regarding meaning and freedom happens to depend. Intentional discourse. The language of humanism…
Because as it turns out, ‘human’ is a heuristic construct through and through.
http://www.vox.com/2015/9/18/9352117/zoltan-istvan-2016-campaign
“Transhumanists generally see augmentation in consumer terms, something that individuals are free to choose or to eschew given the resources at their disposal.”
That’s the real problem. Those who can afford augmentations will acquire the ability to decode their social environments in higher level, causal terms. Those who can’t afford them (the same people who can’t afford high quality health care now) will be to the augmented as our house pets are to us, at best. Perhaps sixty percent of the current human population will become so intellectually inferior that they will no longer be considered human. What is to be done with them?
I think you’re basically right, but there are two escape routes in principle.
One is that transhumanist development might include equipping us with new and better heuristics, so that the failure of the inherited ones would no longer be disastrous.
Although this seems possible in principle, I believe transhumanism generally just envisages stretching and augmentation of existing capacities. It isn’t very likely that new heuristics would emerge naturally simply from that kind of enhanced capacity; and they can’t be deliberately built in because by definition we don’t see the limits of the ones we inherited.
Second, and similar, it could be that intentionality, as it happens, is not directly affected by transhumanist augmentation but by sheer luck somehow continues to work. Or moves on to a new form, Intentionality 2, which is as much beyond our current understanding as our current understanding is beyond a dog’s. For somewhat similar reasons, this second way out also seems wildly, lottery-win optimistic.
To me this all suggests that the only practical way forward for human development is the same one that got us here – evolution, with all its limitations and slow speed.
Sorry for the delay getting this up, Peter. I had to fish it from the spam folder (where I found a couple other lost soldiers!)
“One is that transhumanist development might include equipping us with new and better heuristics, so that the failure of the inherited ones would no longer be disastrous.
Although this seems possible in principle, I believe transhumanism generally just envisages stretching and augmentation of existing capacities.”
I think this escape route is closed simply given the nature of anarcho-ecologies: any new heuristic regime you come up with would only function given some new, shared, and stable background ecology. With everyone idiosyncratically tweaking all the time, it’s hard to see how any such new equilibrium might arise.
“To me this all suggests that the only practical way forward for human development is the same one that got us here – evolution, with all its limitations and slow speed.”
The difference being that disaster is the beginning of evolution, and the end of us! Another difference being that evolution turns on ‘punctuated, species-wide stabilities.’ In this instance we would need as many evolutions as individual encounters, given that everyone is changing in their own way continuously.
My understanding of heuristics is that we use them because we can’t work out what’s going on in causal detail. If so, then beings who can work out what’s going on in causal detail would not need heuristics at all. When two beings need to interact they exchange the information needed to facilitate the interaction, so in effect they can have an evolution for every individual encounter. If you expect regular interactions with that being you can store the information you need to interact with him and only download the updates, or if you interact with that being continuously you can keep each other updated constantly. If you have enough processing power, storage and bandwidth you don’t need stable backgrounds.
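Read as an engineering proposal, this describes something like a cache-and-diff protocol: agents publish explicit self-models, partners store them, and interactions pull only version updates, substituting storage and bandwidth for a stable shared background. A hypothetical sketch (every name invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    version: int = 0
    model: dict = field(default_factory=dict)   # explicit causal self-description
    cache: dict = field(default_factory=dict)   # name -> (version, model) of others

    def augment(self, key, value):
        """Idiosyncratic self-modification bumps the published version."""
        self.model[key] = value
        self.version += 1

    def sync_with(self, other):
        """Decode the partner anew only if the cached copy is stale."""
        cached = self.cache.get(other.name)
        if cached is None or cached[0] < other.version:
            self.cache[other.name] = (other.version, dict(other.model))

a, b = Agent("a"), Agent("b")
b.augment("signal_scheme", "v2")   # b rewires itself between encounters
a.sync_with(b)                     # a downloads the update instead of miscuing
print(a.cache["b"])                # (1, {'signal_scheme': 'v2'})
```

Whether such explicit decoding could ever run at anything like the speed and metabolic cost of ancestral cue-reading is, of course, exactly what the post puts in doubt.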
This is the post-intentional upshot, I think: the only invariant background you can leverage into cognition is the natural world, leaving only the languages of science as our basis to understand one another. This is how I’ve always seen the ‘Semantic Apocalypse’ unfolding, anyway. What I find so interesting about this way of posing the problem faced by meaning is that it provides a very crisp way to characterize possible alternatives.
Is it plausible to argue that certain functional constraints applying to any intelligence whatsoever will allow for useful applications of some kind of ‘minimal intentional cognition’ no matter how anarchic the ecology becomes? I don’t know. I’m not sure this question can be answered pre- as opposed to post-facto. But the degree to which one can make this argument, I think, is the degree to which one could argue that AAAT, although certainly problematizing the future prospects of meaning, doesn’t necessarily rule it out altogether…
Any being that one can think of as intelligent processes information, and one always needs energy to process information, so it seems safe to assume that any intelligent entity will need to extract energy from its environment. Any entity that extracts energy from its environment for information processing or any other purpose will generate waste heat and perhaps other waste products. It seems safe to assume any intelligent being will have a metabolism. I’m not sure how far “I think, therefore I eat and shit” gets you, but it’s a start.
My problem is number four: “Transhumanism entails the continual transformation of stable, taken-for-granted backgrounds.”
If the goal is a pure processuality, a move into hyperchaos or a totally unstable, mercurial environment where time is no longer sliced, where distinctions can no longer be tolerated; where the ability to abstract out, solidify, and work on reality is forever gone: Who or what is left to do anything at all? Is this a realm of pure sense relations without thought?
Once we remove heuristics and intentionality what exactly do we gain? I mean I get where you’re going with it, I’ve always gotten where you’re going: your final erasure of the human intentional creature and his abstract and even Platonic divisions of reality into stable backgrounds through selective decisioning processes.
But what replaces it? Once you remove heuristics and intentional consciousness what remains? You’ve explained everything else up to that point, but have yet to show what exactly it would entail. Obviously once you throw out reason and affects, what’s left? We constructed reason to stabilize reality so we could deal with it scientifically, but if we do away with reason won’t this be a reversion to a totally affective environment where thought and being merge (as in Parmenides)? Is this where you’re heading? Removing the circle, division, distinctions between thought and object?
Exactly. This is just another way to wipe the chalkboard (a sight more elegant than anything else I’ve come up with, I think). What is post-intentional thinking? Are (ancestral) humans even capable of it?
No. That’s why I think we’ve begun that process where it’s the posthuman rather than some trans or beyonding of the human that is going on… ever since we began the process of constructing machines and artificial materialist systems that might surpass us… I think we’ve entered the arena of the next stage in intelligent design, but it’s not god who is doing it but man himself: we’re building our replacement. We’re going to vanish like every other species before us on the evolutionary timeclock of biology, but before our light goes dark we’re going to invent something beyond us, not ourselves. So instead of a transcending of the human, we are speaking of the posthuman emergence of our progeny in the machinic phylum.
It just seems to click in my own form of thinking that this is the tendency we’ve been moving toward for a long while now in human thinking… let’s face it, we’ve been trying to slough off the old emotions, affects, etc. through moral taming of the human animal for thousands of years; with the intelligent machine there is no emotion or affect whatsoever, so whatever it becomes it will be decisively beyond the human animal or any animal in that capacity. Where does that leave us? Oh, I’m sure we’ll enhance ourselves for a few hundred or a few thousand years, but in the end our machines will inherit the universe, not us bios…
And you think it’s able to circumvent the evolutionary trend of conservatism in ratcheting because of the detailed mapping of morphology. By mapping and gaining access to morphology it’s able to wholesale delete or tinker with the shielded layers of componency which would normally be inaccessible to evolution?
You mean once you remove human heuristics, rather than remove heuristics per se, right?
I believe that was implied. I didn’t use a qualifier since in his points he had already qualified it:
2) Intentional cognition is heuristic cognition.
5) Transhumanism entails the collapse of intentional cognition.
Accepting these two points implies intentional and heuristic cognition will collapse in the Semantic Apocalypse. Whether some future form of heuristics will survive (i.e., in AI, robotics, Wide Human (Roden), etc.) remains to be seen.
Well, that seems a curious evangelization of AI/transhumanism, but from the other direction. Or at least I think that, as I come from a position where I don’t think you can use anything but heuristics (it’s just a question of which). Ain’t no escaping heuristic cognition, as opposed to any question of it collapsing/going extinct.
Remember it’s not my idea, but Scott’s. He’s the one that implied heuristics and intentionality are collapsing. Not me. So tell Scott. Read his article.
Fair enough, S.C. My post was for anyone in general on this matter. I just didn’t see it in the article, but maybe it’s coming out in the comments… of course I say that just in regards to having taken the position that heuristics of some kind are unavoidable.
Fellow neurocreeps, y’all should check out the film Listening from 2014, which is a decent hard SF romp about brain hacking.
Ah, just occurred to me: probably our animosity (or at least claimed animosity, lol!) toward hypocrisy is a reflection of our evolutionary need for stable backgrounds.
Bostrom, “The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement” (PDF link)
Thanks for this ochlo! Where did you come across it?