Since Massimo Pigliucci has reposted Julia Galef’s tepid defense of transhumanism from a couple years back, I thought I would repost the critique I gave last fall, an argument which actually turns Galef’s charge of ‘essentialism’ against transhumanism. Short of some global catastrophe, transhumanism is coming (for those who can afford it, at least) whether we want it to or not. My argument is simply that transhumanists need to recognize that the very values they use to motivate their position are likely among the things our posthuman descendants will leave behind.
When alien archaeologists sift through the rubble of our society, which public message, out of all those they unearth, will be far and away the most common?
The answer to this question is painfully obvious–when you hear it, that is. Otherwise, it’s one of those things that is almost too obvious to be seen.
Sale… Sale–or some version of it. On sale. For sale. 10% off. 50% off. Bigger savings. Liquidation event!
Or, in other words, more for less.
Consumer society is far too complicated to be captured in any single phrase, but you could argue that no phrase better epitomizes its mangled essence. More for less. More for less. More for less.
Thus the intuitive resonance of “More Human than Human,” the infamous tagline of the Tyrell Corporation, or even ‘transhumanism’ more generally, which has been vigorously rebranding itself the past several months as ‘H+,’ an abbreviation of ‘Humanity plus.’
What I want to do is drop a few rocks into the hungry woodchipper of transhumanist enthusiasm. Transhumanism has no shortage of critics, but given a potent brand and some savvy marketing, it’s hard not to imagine the movement growing by leaps and bounds in the near future. And in all the argument back and forth, no one I know of (with the exception of David Roden, whose book I eagerly anticipate) has really paused to consider what I think is the most important issue of all. So what I want to do is isolate a single, straightforward question, one which the transhumanist has to be able to answer to anchor their claims in anything resembling rational discourse (exuberant discourse is a different story). The idea, quite simply, is to force them to hold up the fingers they have crossed, plain for everyone to see, because the fact is, the intelligibility of their entire program depends on research that is only just getting under way.
I think I can best sum up my position by quoting the philosopher Andy Clark, one of the world’s foremost theorists of consciousness and cognition, who after considering competing visions of our technological future, good and bad, writes, “Which vision will prove the most accurate depends, to some extent, on the technologies themselves, but it depends also–and crucially–upon a sensitive appreciation of our own nature” (Natural-Born Cyborgs, 173). It’s this latter condition, the ‘sensitive appreciation of our own nature,’ that is my concern, if only because this is precisely what I think Clark and just about everyone else fails to do.
First, we need to get clear on just how radical the human future has become. We can talk about the singularity, the transformative potential of nano-bio-info-technology, but it serves to look back as well, to consider what was arguably humanity’s last great break with its past, what I will here call the ‘Old Enlightenment.’ Even though no social historical moment so profound or complicated can be easily summarized, the following opening passage, taken from a 1784 essay called, “An Answer to the Question: ‘What is Enlightenment?’” by Immanuel Kant, is the one scholars are most inclined to cite:
“Enlightenment is man’s emergence from his self-incurred immaturity. Immaturity is the inability to use one’s own reason without the guidance of another. This immaturity is self-incurred if its cause is not lack of understanding, but lack of resolution and courage to use it without the guidance of another. The motto of the enlightenment is therefore: Sapere aude! Have courage to use your own understanding!” (“An Answer to the Question: ‘What is Enlightenment?’” 54)
Now how modern is this? For my own part, I can’t count all the sales pitches this resonates with, especially when it comes to that greatest of contradictions, the television commercial. What is Enlightenment? Freedom, Kant says. Autonomy, not from the political apparatus of the state (he was a subject of Frederick the Great, after all), but from the authority of traditional thought–from our ideological inheritance. More new. Less old. New good. Old bad. Or in other words, More better, less worse. The project of the Enlightenment, according to Kant, lies in the maximization of intellectual and moral freedom, which is to say, the repudiation of what we were and an openness to what we might become. Or, as we still habitually refer to it, ‘Progress.’ The Old Enlightenment effectively rebranded humanity as a work in progress, something that could be improved–enhanced–through various forms of social and personal investment. We even have a name for it, nowadays: ‘human capital.’
The transhumanists, in a sense, are offering nothing new in promising the new. And this is more than just ironic. Why? Because even though the Old Enlightenment was much less transformative socially and technologically than the New will almost certainly be, the transhumanists nevertheless assume that it was far more transformative ideologically. They assume, in other words, that the New Enlightenment will be more or less conceptually continuous with the Old. Where the Old Enlightenment offered freedom from our ideological inheritance, but left us trapped in our bodies, the New Enlightenment is offering freedom from our biological inheritance–while leaving our belief systems largely intact. They assume, quite literally, that technology will deliver more of what we want physically, not ideologically.
Of course, everything hinges upon the ‘better,’ here. More is not a good in and of itself. Things like more flooding, more tequila, or more herpes, just for instance, plainly count as more worse (although, if the tequila is Patron, you might argue otherwise). What this means is that the concept of human value plays a profound role in any assessment of our posthuman future. So in the now canonical paper, “Transhumanist Values,” Nick Bostrom, the Director of the Future of Humanity Institute at Oxford University, enumerates the principal values of the transhumanist movement, and the reasons why they should be embraced. He even goes so far as to provide a wish list, an inventory of all the ways we can be ‘more human than human’–though he seems to prefer the term ‘enhanced.’ “The limitations of the human mode of being are so pervasive and familiar,” he writes, “that we often fail to notice them, and to question them requires manifesting an almost childlike naiveté.” And so he gives us a shopping list of our various incapacities: lifespan; intellectual capacity; body functionality; sensory modalities, special faculties and sensibilities; mood, energy, and self-control. He characterizes each of these categories as constraints, biological limits that effectively prevent us from reaching our true potential. He even provides a nifty little graph to visualize all that ‘more better’ out there, hanging like ripe fruit in the garden of our future, just waiting to be plucked, if only–as Kant would say–we possess the courage.
As a philosopher, he’s too sophisticated to assume that this biological emancipation will simply spring from the waxed loins of unfettered markets or any such nonsense. He fully expects humanity to be tested by this transformation–“[t]ranshumanism,” as he writes, “does not entail technological optimism”–so he offers transhumanism as a kind of moral beacon, a star that can safely lead us across the tumultuous waters of technological transformation to the land of More-most-better–or as he explicitly calls it elsewhere, Utopia.
And to his credit, he realizes that value itself is in play, such is the profundity of the transformation. But for reasons he never makes entirely clear, he doesn’t see this as a problem. “The conjecture,” he writes, “that there are greater values than we can currently fathom does not imply that values are not defined in terms of our current dispositions.” And so, armed with a mystically irrefutable blanket assertion, he goes on to characterize value itself as a commodity to be amassed: “Transhumanism,” he writes, “promotes the quest to develop further so that we can explore hitherto inaccessible realms of value.”
Now I’ve deliberately refrained from sarcasm up to this point, even though I think it is entirely deserved, given transhumanism’s troubling ideological tropes and explicit use of commercial advertising practices. You only need watch the OWN channel for five minutes to realize that hope sells. Heaven forbid I inject any anxiety into what is, on any account, an unavoidable, existential impasse. I mean, only the very fate of humanity lies in the balance. It’s not like your Netflix is going to be cancelled or anything.
For those unfortunates who’ve read my novel Neuropath, you know that I am nowhere near as sunny about the future as I sound. I think the future, to borrow an acronym from the Second World War, has to be–has to be–FUBAR. Totally and utterly, Fucked Up Beyond All Recognition. Now you could argue that transhumanism is at least aware of this possibility. You could even argue, as some Critical Posthumanists (as David Roden classifies them) do, that FUBAR is exactly what we need, given that the present is so incredibly FU. But I think none of these theorists really has a clear grasp of the stakes. (And how could they, when I so clearly do?)
Transhumanism may not, as Nick Bostrom says, entail ‘technological optimism,’ but as I hope to show you, it most definitely entails scientific optimism. Because you see, this is precisely what falls between the cracks in debates on the posthuman: everyone is so interested in what Techno-Santa has in his big fat bag of More-better, that they forget to take a hard look at Techno-Santa, himself, the science that makes all the goodies, from the cosmetic to the apocalyptic, possible. Santa decides what to put in the bag, and as I hope to show you, we have no reason whatsoever to trust the fat bastard. In fact, I think we have good reason to think he’s going to screw us but good.
As you might expect, the word ‘human’ gets bandied about quite a bit in these debates–we are, after all, our own favourite topic of conversation, and who doesn’t adore daydreaming about winning the lottery? And by and large, the term is presented as a kind of given: after all, we are human, and as such, obviously know pretty much all we need to know about what it means to be human–don’t we?
This is essentially Andy Clark’s take in Natural-Born Cyborgs: Given what we now know about human nature, he argues, we should see that our nascent or impending union with our technology is as natural as can be, simply because, in an important sense, we have always been cyborgs, which is to say, at one with our technologies. Clark is a famous proponent of something called the Extended Mind Thesis, and for more than a decade he has argued forcefully that human consciousness is not something confined to our skull, but rather spills out and inheres in the environmental systems that embed the neural. He thinks consciousness is an interactionist phenomenon, something that can only be understood in terms of neuro-environmental loops. Since he genuinely believes this, he takes it as a given in his consideration of our cyborg future.
But of course, it is nowhere near a ‘given.’ It isn’t even a scientific controversy: it’s a speculative philosophical opinion. Fascinating, certainly. But worth gambling the future of humanity?
My opinion is equally speculative, equally philosophical–but unlike Clark, I don’t need to assume that it’s true to make my case, only that it’s a viable scientific possibility. Nick Bostrom, of all people, actually explains it best, even though he’s arrogant enough to think he’s arguing for his own emancipatory thesis!
“Further, our human brains may cap our ability to discover philosophical and scientific truths. It is possible that the failure of philosophical research to arrive at solid, generally accepted answers to many of the traditional big philosophical questions could be due to the fact that we are not smart enough to be successful in this kind of enquiry. Our cognitive limitations may be confining us in a Platonic cave, where the best we can do is theorize about “shadows”, that is, representations that are sufficiently oversimplified and dumbed-down to fit inside a human brain.” (“Transhumanist Values”)
Now this is precisely what I think, that our ‘cognitive limitations’ have forced us to make do with ‘shadows,’ ‘oversimplified and dumbed-down’ information, particularly regarding ourselves–which is to say, the human. Since I’ve already quoted the opening passage from Kant’s “What is Enlightenment?” it perhaps serves, at this point, to quote the closing passage. Speaking of the importance of civil freedom, Kant concludes: “Eventually it even influences the principles of governments, which find that they can themselves profit by treating man, who is more than a machine, in a manner appropriate to his dignity” (60). Kant, given the science of his day, could still assert a profound distinction between man, the possessor of values, and machine, the possessor of none. Nowadays, however, the black box of the human brain has been cracked open, and the secrets that have come tumbling out would have made Kant shake with terror or fury. Man, we now know, is a machine–that much is simple. The question, and I assure you it is very real, is one of how things like moral dignity–which is to say, things like value–arise from this machine, if at all.
It literally could be the case that value is another one of these ‘shadows,’ an ‘oversimplified’ and ‘dumbed-down’ way to make the complexities of evolutionary effectiveness ‘fit inside a human brain.’ It now seems pretty clear, for instance, that the ‘feeling of willing’ is a biological subreption, a cognitive illusion that turns on our utter blindness to the neural antecedents to our decisions and thoughts. The same seems to be the case with our feeling of certainty. It’s also becoming clear that we only think we have direct access to things like our beliefs and motivations, that, in point of fact, we use the same ‘best guess’ machinery that we use to interpret the behaviour of others to interpret ourselves as well.
The list goes on. But the only thing that’s clear at this point is that we humans are not what we thought we were. We’re something else. Perhaps something else entirely. The great irony of posthuman studies is that you find so many people puzzling and pondering the what, when, and how of our ceasing to be human in the future, when essentially that process is happening now, as we speak. Put in philosophical terms, the ‘posthuman’ could be an epistemological achievement rather than an ontological one. It could be that our descendants will look back and laugh their gearboxes off at the notion of a bunch of soulless robots worrying about the consequences of becoming a bunch of soulless robots.
So here’s the question I would ask Mr. Bostrom: Which human are you talking about? The one you hope that we are, or the one that science will show us to be?
Either way, transhumanism as praxis–as a social movement requiring real-world action like membership drives and market branding–is well and truly ‘forked,’ to use a chess analogy: ‘Better living through science’ cannot be your foundational assumption unless you are willing to seriously consider what science has to say. You don’t get to pick and choose which traditional illusions you get to cling to.
Transhumanism, if you think about it, should be renamed transconfusionism, and rebranded as X+.
In a sense what I’m saying is pretty straightforward: no posthumanism that fails to consider the problem of the human (which is just to say, the problem of meaning and value) is worthy of the name. Such posthumanisms, I think anyway, are little more than wishful thinking, fantasies that pretend otherwise. Why? Because at no time in human history has the nature of the human been more in doubt.
But there has to be more to the picture, doesn’t there? This argument is just too obvious, too straightforward, to have been ‘overlooked’ these past couple decades. Or maybe not.
The fact is, no matter how eloquently I argue, no matter how compelling the evidence I adduce, how striking or disturbing the examples, next to no one in this room is capable of slipping the intuitive noose of who and what they think they are. The seminal American philosopher Wilfrid Sellars calls this the Manifest Image: the sticky sense of subjectivity provided by our immediate intuitions–a sense that holds fast no matter what science has to say (let alone a fantasy geek with a morbid fascination with consciousness and cognition). To genuinely think the posthuman requires us to see past our apparent, or manifest, humanity–and this, it turns out, is difficult in the extreme. So, to make my argument stick, I want to leave you with a way of understanding both why my argument is so destructive of transhumanism, and why that destructiveness is nevertheless so difficult to conceive, let alone to believe.
Look at it this way. The explanatory paradigm of the life sciences is mechanistic. Either we humans are machines, or everything from the Krebs cycle to cell mitosis is magical. This puts the question of human morality and meaning in an explanatory pickle, because, for whatever reason, the concepts belonging to morality and meaning just don’t make sense in mechanistic terms. So either we need to understand how machines like us generate meaning and morality, or we need to understand how machines like us hallucinate meaning and morality.
The former is, without any doubt, the majority position. But the latter, the position that occupies my time, is slowly growing, as is the mountain of counterintuitive findings in the sciences of the mind and brain. I have, quite against my inclination, prepared a handful of images to help you visualize this latter possibility, what I call the Blind Brain Theory.
Imagine we had perfect introspective access, so that each time we reflected on ourselves we were confronted with something like this:
We would see it all, all the wheels and gears behind what William James famously called the “blooming, buzzing confusion” of conscious life. Would there be any ‘choice’ in this system? Obviously not, just neural mechanisms picking up where environmental mechanisms have left off. How about ‘desire’? Again, nothing we really could identify as such, given that we would know, in intimate detail, the particulars of the circuits that keep our organism in homeostatic equilibrium with our environments. Well, how about morals, the values that guide us this way and that? Once again, it’s hard to understand what these might be, given that we could, at any moment, inspect the mechanistic regularities that in fact govern our behaviour. So no right or wrong? Well, what would these be? Of course, given the unpredictability of events, the mechanism would malfunction periodically, throw its wife’s work slacks into the dryer, maybe have a tooth or two knocked out of its gears. But this would only provide information regarding the reliability of its systems, not its ‘moral character.’
Now imagine dialling back the information available for introspective access, so that your ability to perfectly discriminate the workings of your brain becomes foggy:
Now imagine a cost-effectiveness expert (named ‘Evolution’) comes in, and tells you that even your foggy but complete access is far, far too expensive: computation costs calories, you know! So he goes through and begins blacking out whole regions of access according to arcane requirements only he is aware of. What’s worse, he’s drunk and stoned, and so there’s a haphazard, slapdash element to the whole procedure, leaving you with something like this:
But of course, this foggy and fractional picture actually presumes that you have direct introspective access to information regarding the absence of information, when this is plainly not the case, and not required, given the rigours of your paleolithic existence. This means you can no longer intuit the fractional nature of your introspection: the far-flung fragments of access you possess actually seem like unified and sufficient wholes, leaving you with:
This impressionistic mess is your baseline. Your mind. But of course, it doesn’t intuitively seem like an impressionistic mess–quite the opposite, in fact. But this is simply because it is your baseline, your only yardstick. I know it seems impossible, but consider, if dreams lacked the contrast of waking life, they would be the baseline for lucidity, coherence, and truth. Likewise, there are degrees of introspective access–degrees of consciousness–that would make what you are experiencing this very moment seem like little more than a pageant of phantasmagorical absurdities.
The more the sciences of the brain discover, the more they are revealing that consciousness and its supposed verities–like value–are confused and fractional. This is the trend. If it persists, then meaning and morality could very well turn out to be artifacts of blindness and neglect–illusions to the degree that they seem whole and sufficient. If meaning and morality are best thought of as hallucinations, then the human, as it has been understood down through the ages, from the construction of Khufu’s Great Pyramid to the first performance of Hamlet to the launch of Sputnik, never existed, and, in a crazy sense, we have been posthuman all along. And the transhuman program as envisioned by the likes of Nick Bostrom becomes little more than a hope founded on a pipedream.
And our future becomes more radically alien than any of us could possibly conceive, let alone imagine.