Anarcho-ecologies and the Problem of Transhumanism

by rsbakker

So a couple weeks back I posed the Augmentation Paradox:

The more you ‘improve’ some ancestral cognitive capacity, the more you degrade all ancestral cognitive capacities turning on the ancestral form of that cognitive capacity.

I’ve been debating this for several days now (primarily with David Roden, Steve Fuller, Rick Searle, and others over at Enemy Industry), as well as scribbling down thoughts on my own. One of the ideas falling out of these exchanges and ruminations is something that might be called ‘anarcho-ecology.’

Let’s define an ‘anarcho-ecology’ as an ecology too variable to permit human heuristic cognition. Now we know that such an ecology is possible because we know that heuristics use cues bearing stable differential relations to target systems in order to solve those systems. The reliability of these cues depends on the stability of those differential relations, which in turn depends on the invariance of the systems to be solved. This simply unpacks the platitude that we are adapted to the world the way it is (or, to be more precise and apropos this post, the way it was). Anarcho-ecologies arise when systems, either targeted or targeting, begin changing so rapidly that ‘cuing,’ the process of forming stable differential relations to target systems, becomes infeasible. They are problem-solving domains where crash space has become absolute.
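The mechanism here can be caricatured in a few lines of code. The following is a toy sketch, not anything from the post itself: all names (`target_system`, `heuristic`, the ‘smile’/‘scowl’ cues) are hypothetical illustrations. A heuristic that exploits a stable cue–state relation solves its target system perfectly; let that relation drift with every encounter and the same heuristic collapses to chance.

```python
import random

random.seed(0)

# Hypothetical toy model. A heuristic solves a target system via a single
# cue that covaries with the target's hidden state.

STATES = ["cooperate", "defect"]
ancestral_map = {"cooperate": "smile", "defect": "scowl"}      # state -> cue
learned = {v: k for k, v in ancestral_map.items()}             # cue -> state

def target_system(state, cue_map):
    """The target system emits a cue for its hidden state."""
    return cue_map[state]

def heuristic(cue, learned_map):
    """Read the cue via the ancestrally learned differential relation."""
    return learned_map.get(cue)

def accuracy(cue_map, trials=1000):
    """Fraction of encounters the heuristic solves correctly."""
    hits = 0
    for _ in range(trials):
        state = random.choice(STATES)
        hits += (heuristic(target_system(state, cue_map), learned) == state)
    return hits / trials

# Stable ecology: the cue-state relation the heuristic exploits still holds.
print(accuracy(ancestral_map))  # 1.0

def drifting_accuracy(trials=1000):
    """An 'anarcho-ecology': the target's cue map is reshuffled every
    encounter, so no stable differential relation exists to exploit."""
    hits = 0
    for _ in range(trials):
        cues = ["smile", "scowl"]
        random.shuffle(cues)
        cue_map = dict(zip(STATES, cues))
        state = random.choice(STATES)
        hits += (heuristic(target_system(state, cue_map), learned) == state)
    return hits / trials

print(drifting_accuracy())  # ~0.5, i.e. chance: the heuristic has gone blind
```

Nothing about the heuristic changed between the two runs; only the invariance of its background did.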

I propose that Transhumanism, understood as “an international cultural and intellectual movement with an eventual goal of fundamentally transforming the human condition by developing and making widely available technologies to greatly enhance human intellectual, physical, and psychological capacities,” is actually promoting the creation of anarcho-ecologies, and as such, the eventual obsolescence of human heuristic cognition. And since intentional cognition constitutes a paradigmatic form of human heuristic cognition, this amounts to saying that Transhumanism is committed to what I’ve been calling the Semantic Apocalypse.

The argument, as I’ve been posing it, looks like this:

1) Heuristic cognition depends on stable, taken-for-granted backgrounds.

2) Intentional cognition is heuristic cognition.

/3) Intentional cognition depends on stable, taken-for-granted backgrounds.

4) Transhumanism entails the continual transformation of stable, taken-for-granted backgrounds.

/5) Transhumanism entails the collapse of intentional cognition.

Let’s call this the ‘Anarcho-ecological Argument Against Transhumanism,’ or AAAT.

Now at first blush, I’m sure this argument must seem preposterous, but I assure you, it’s stone-cold serious. So long as the reliability of intentional cognition turns on invariant, ancestral backgrounds, transformations in those backgrounds will compromise intentional cognition. Consider ants as a low-dimensional analogue. As a eusocial species they form ‘super-organisms,’ collectives exhibiting ‘swarm intelligence,’ where simple patterns of interaction (chemical, acoustic, and tactile communicative protocols) between individuals scale to produce collective solutions to what seem to be complex problems. Now if every ant were suddenly given idiosyncratic communicative protocols (different chemicals, different sounds, different sensitivities) it seems rather obvious that the colony would simply collapse. Lacking any intrasystematic cohesion, it just would not be able to resolve any problems.
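The ant case can be simulated just as crudely. Again, this is an illustrative sketch of my own devising: the signal names, meanings, and colony sizes are all hypothetical. Give every ant the same signal-to-meaning protocol and every cue is decoded correctly; give each ant its own shuffled protocol and successful communication falls to chance.

```python
import random

random.seed(1)

# Hypothetical toy colony: ants signal one of a few needs via distinct
# 'chemical' signals. Names and numbers are illustrative only.
SIGNALS = ["A", "B", "C"]               # e.g. distinct pheromones
MEANINGS = ["food", "threat", "nest"]

def make_protocol(shared=True):
    """Return an (encode, decode) pair for one ant. A shared colony uses
    one fixed mapping; idiosyncratic ants each shuffle their own."""
    order = SIGNALS[:]
    if not shared:
        random.shuffle(order)
    encode = dict(zip(MEANINGS, order))  # meaning -> signal
    decode = {v: k for k, v in encode.items()}
    return encode, decode

def cohesion(shared, ants=100, messages=1000):
    """Fraction of random pairwise messages decoded correctly."""
    colony = [make_protocol(shared) for _ in range(ants)]
    ok = 0
    for _ in range(messages):
        sender, receiver = random.sample(colony, 2)
        meaning = random.choice(MEANINGS)
        ok += (receiver[1][sender[0][meaning]] == meaning)
    return ok / messages

print(cohesion(shared=True))   # 1.0: every cue is decoded correctly
print(cohesion(shared=False))  # ~1/3: 'intrasystematic cohesion' is gone
```

The collapse requires no change in any individual ant’s competence, only the loss of a colony-wide invariant.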

Now of course humans, though arguably eusocial, are nowhere near so simple as ants. Human soldiers don’t automatically pace out pheromone trails; they have to be ‘convinced’ that this is what they ‘should’ do. Where ants need only cue one another, humans need to both cue and decode each other. Individual humans, unlike ants, possess ‘autonomy.’ And this disanalogy between ants and humans, I think, handily isolates why most people simply assume that AAAT has to be wrong, that it is obviously too ‘reductive’ in some way. They understand the ‘cue’ part of the argument, appreciate the way changing those systems that intentional cognition takes for granted will transform ancestrally reliable cues into miscues. It’s the decode part, they think, that saves the transhumanist day. We humans, unlike ants, are not passive consumers of our social environments. Miscues can be identified, diagnosed, and then overcome, precisely because we are autonomous.

So much for AAAT.

Except that AAAT entirely agrees. The argument says nothing about the possibility of somehow decoding intentional miscues (like those we witnessed in spectacular fashion with Ashley Madison’s use of bots to simulate interested women); it only claims that such decoding will not involve intentional cognition, insofar as intentional cognition is heuristic cognition, and heuristic cognition requires invariant backgrounds, stable ecologies. Since Transhumanism does not endorse any coercive, collective augmentations of human capacities, Transhumanists generally see augmentation in consumer terms, something that individuals are free to choose or to eschew given the resources at their disposal. Not only will individuals be continually transforming their capacities, they will be doing so idiomatically. The invariant background that intentional cognition is so exquisitely adapted to exploit will become a supermarket of endless enhancement possibilities (or so they hope). And as that happens, intentional cognition will become increasingly unreliable, and ultimately, obsolete.

To return to our ant analogy, then, we can see that it’s not simply a matter of humans possessing autonomy (however this is defined). Humans, like ants, possess specifically social adaptations, entirely unconscious sensitivities to cues provided by others. We generally ‘solve’ one another effortlessly and automatically, and only turn to ‘decoding,’ deliberative problem-solving, when these reflexive forms of cognition let us down. The fact is, decoding is metabolically expensive, and we tend to avoid it as often as we can. Even more significantly (but not surprisingly), we tend to regard instances of decoding as successful to the extent that we can once again resume relying on our thoughtless social reflexes. This is why, despite whatever ‘autonomy’ we might possess, we remain ant-like, blind problem-solvers, in this respect. We have literally evolved to participate in co-dependent communities, to cooperate when cooperation served our ancestors, to compete when competition served our ancestors, to condemn when condemnation served our ancestors, and so on. We do these things automatically, without ‘decoding,’ simply because they worked well enough in the past, given the kinds of systems that required solving (meaning others, even ourselves). We take their solving power for granted.

Humans, for all their vaunted ‘autonomy,’ remain social animals, biologically designed to take advantage of what we are without having to know what we are. This is the design–the one that allows us to blindly solve our social environments–that Transhumanism actively wants to render obsolete.

But before you shout, ‘Good riddance!’ it’s worth remembering that this also happens to be the design upon which all discourse regarding meaning and freedom happens to depend. Intentional discourse. The language of humanism…

Because as it turns out, ‘human’ is a heuristic construct through and through.