Three Pound Brain

No bells, just whistling in the dark…

Tag: posthuman

Less Human than Human: The Cyborg Fantasy versus the Neuroscientific Real (2012/10/29)

by rsbakker

Since Massimo Pigliucci has reposted Julia Galef’s tepid defense of transhumanism from a couple years back, I thought I would repost the critique I gave last fall, an argument which actually turns Galef’s charge of ‘essentialism’ against transhumanism. Short of some global catastrophe, transhumanism is coming (for those who can afford it, at least) whether we want it to or not. My argument is simply that transhumanists need to recognize that the very values they use to motivate their position are likely among the things our posthuman descendants will leave behind.

.

When alien archaeologists sift through the rubble of our society, which public message, out of all those they unearth, will be the far and away most common?

The answer to this question is painfully obvious–when you hear it, that is. Otherwise, it’s one of those things that is almost too obvious to be seen.

Sale… Sale–or some version of it. On sale. For sale. 10% off. 50% off. Bigger savings. Liquidation event!

Or, in other words, more for less.

Consumer society is far too complicated to be captured in any single phrase, but you could argue that no phrase better epitomizes its mangled essence. More for less. More for less. More for less.

Me-me-more-more-me-me-more-arrrrrgh!

Thus the intuitive resonance of “More Human than Human,” the infamous tagline of the Tyrell Corporation, or even ‘transhumanism’ more generally, which has been vigorously rebranding itself the past several months as ‘H+,’ an abbreviation of ‘Humanity plus.’

What I want to do is drop a few rocks into the hungry woodchipper of transhumanist enthusiasm. Transhumanism has no shortage of critics, but given a potent brand and some savvy marketing, it’s hard not to imagine the movement growing by leaps and bounds in the near future. And in all the argument back and forth, no one I know of (with the exception of David Roden, whose book I eagerly anticipate) has really paused to consider what I think is the most important issue of all. So what I want to do is isolate a single, straightforward question, one which the transhumanist has to be able to answer to anchor their claims in anything resembling rational discourse (exuberant discourse is a different story). The idea, quite simply, is to force them to hold the fingers they have crossed plain for everyone to see, because the fact is, the intelligibility of their entire program depends on research that is only just getting under way.

I think I can best sum up my position by quoting the philosopher Andy Clark, one of the world’s foremost theorists of consciousness and cognition, who after considering competing visions of our technological future, good and bad, writes, “Which vision will prove the most accurate depends, to some extent, on the technologies themselves, but it depends also–and crucially–upon a sensitive appreciation of our own nature” (Natural-Born Cyborgs, 173). It’s this latter condition, the ‘sensitive appreciation of our own nature,’ that is my concern, if only because this is precisely what I think Clark and just about everyone else fails to do.

First, we need to get clear on just how radical the human future has become. We can talk about the singularity, the transformative potential of nano-bio-info-technology, but it serves to look back as well, to consider what was arguably humanity’s last great break with its past, what I will here call the ‘Old Enlightenment.’ Even though no social historical moment so profound or complicated can be easily summarized, the following opening passage, taken from a 1784 essay called, “An Answer to the Question: ‘What is Enlightenment?’” by Immanuel Kant, is the one scholars are most inclined to cite:

Enlightenment is man’s emergence from his self-incurred immaturity. Immaturity is the inability to use one’s own reason without the guidance of another. This immaturity is self-incurred if its cause is not lack of understanding, but lack of resolution and courage to use it without the guidance of another. The motto of the enlightenment is therefore: Sapere aude! Have courage to use your own understanding! (“An Answer to the Question: ‘What is Enlightenment?’” 54)

Now how modern is this? For my own part, I can’t count all the sales pitches this resonates with, especially when it comes to that greatest of contradictions, the television commercial. What is Enlightenment? Freedom, Kant says. Autonomy, not from the political apparatus of the state (he was a subject of Frederick the Great, after all), but from the authority of traditional thought–from our ideological inheritance. More new. Less old. New good. Old bad. Or in other words, More better, less worse. The project of the Enlightenment, according to Kant, lies in the maximization of intellectual and moral freedom, which is to say, the repudiation of what we were and an openness to what we might become. Or, as we still habitually refer to it, ‘Progress.’ The Old Enlightenment effectively rebranded humanity as a work in progress, something that could be improved–enhanced–through various forms of social and personal investment. We even have a name for it, nowadays: ‘human capital.’

The transhumanists, in a sense, are offering nothing new in promising the new. And this is more than just ironic. Why? Because even though the Old Enlightenment was much less transformative socially and technologically than the New will almost certainly be, the transhumanists nevertheless assume that it was far more transformative ideologically. They assume, in other words, that the New Enlightenment will be more or less conceptually continuous with the Old. Where the Old Enlightenment offered freedom from our ideological inheritance, but left us trapped in our bodies, the New Enlightenment is offering freedom from our biological inheritance–while leaving our belief systems largely intact. They assume, quite literally, that technology will deliver more of what we want physically, not ideologically.

More better

Of course, everything hinges upon the ‘better,’ here. More is not a good in and of itself. Things like more flooding, more tequila, or more herpes, just for instance, plainly count as more worse (although, if the tequila is Patron, you might argue otherwise). What this means is that the concept of human value plays a profound role in any assessment of our posthuman future. So in the now canonical paper, “Transhumanist Values,” Nick Bostrom, the Director of the Future of Humanity Institute at Oxford University, enumerates the principal values of the transhumanist movement, and the reasons why they should be embraced. He even goes so far as to provide a wish list, an inventory of all the ways we can be ‘more human than human’–though he seems to prefer the term ‘enhanced.’ “The limitations of the human mode of being are so pervasive and familiar,” he writes, “that we often fail to notice them, and to question them requires manifesting an almost childlike naiveté.” And so he gives us a shopping list of our various incapacities: lifespan; intellectual capacity; body functionality; sensory modalities, special faculties and sensibilities; mood, energy, and self-control. He characterizes each of these categories as constraints, biological limits that effectively prevent us from reaching our true potential. He even provides a nifty little graph to visualize all that ‘more better’ out there, hanging like ripe fruit in the garden of our future, just waiting to be plucked, if only–as Kant would say–we possess the courage.

As a philosopher, he’s too sophisticated to assume that this biological emancipation will simply spring from the waxed loins of unfettered markets or any such nonsense. He fully expects humanity to be tested by this transformation–”[t]ranshumanism,” as he writes, “does not entail technological optimism”–so he offers transhumanism as a kind of moral beacon, a star that can safely lead us across the tumultuous waters of technological transformation to the land of More-most-better–or as he explicitly calls it elsewhere, Utopia.

And to his credit, he realizes that value itself is in play, such is the profundity of the transformation. But for reasons he never makes entirely clear, he doesn’t see this as a problem. “The conjecture,” he writes, “that there are greater values than we can currently fathom does not imply that values are not defined in terms of our current dispositions.” And so, armed with a mystically irrefutable blanket assertion, he goes on to characterize value itself as a commodity to be amassed: “Transhumanism,” he writes, “promotes the quest to develop further so that we can explore hitherto inaccessible realms of value.”

Now I’ve deliberately refrained from sarcasm up to this point, even though I think it is entirely deserved, given transhumanism’s troubling ideological tropes and explicit use of commercial advertising practices. You only need watch the OWN channel for five minutes to realize that hope sells. Heaven forbid I inject any anxiety into what is, on any account, an unavoidable, existential impasse. I mean, only the very fate of humanity lies in the balance. It’s not like your Netflix is going to be cancelled or anything.

For those unfortunates who’ve read my novel Neuropath, you know that I am nowhere near as sunny about the future as I sound. I think the future, to borrow an acronym from the Second World War, has to be–has to be–FUBAR. Totally and utterly, Fucked Up Beyond All Recognition. Now you could argue that transhumanism is at least aware of this possibility. You could even argue, as some Critical Posthumanists (as David Roden classifies them) do, that FUBAR is exactly what we need, given that the present is so incredibly FU. But I think none of these theorists really has a clear grasp of the stakes. (And how could they, when I so clearly do?)

Transhumanism may not, as Nick Bostrom says, entail ‘technological optimism,’ but as I hope to show you, it most definitely entails scientific optimism. Because you see, this is precisely what falls between the cracks in debates on the posthuman: everyone is so interested in what Techno-Santa has in his big fat bag of More-better, that they forget to take a hard look at Techno-Santa, himself, the science that makes all the goodies, from the cosmetic to the apocalyptic, possible. Santa decides what to put in the bag, and as I hope to show you, we have no reason whatsoever to trust the fat bastard. In fact, I think we have good reason to think he’s going to screw us but good.

As you might expect, the word ‘human’ gets bandied about quite a bit in these debates–we are, after all, our own favourite topic of conversation, and who doesn’t adore daydreaming about winning the lottery? And by and large, the term is presented as a kind of given: after all, we are human, and as such, obviously know pretty much all we need to know about what it means to be human–don’t we?

Don’t we?

Maybe.

This is essentially Andy Clark’s take in Natural-Born Cyborgs: Given what we now know about human nature, he argues, we should see that our nascent or impending union with our technology is as natural as can be, simply because, in an important sense, we have always been cyborgs, which is to say, at one with our technologies. Clark is a famous proponent of something called the Extended Mind Thesis, and for more than a decade he has argued forcefully that human consciousness is not something confined to our skull, but rather spills out and inheres in the environmental systems that embed the neural. He thinks consciousness is an interactionist phenomenon, something that can only be understood in terms of neuro-environmental loops. Since he genuinely believes this, he takes it as a given in his consideration of our cyborg future.

But of course, it is nowhere near a ‘given.’ It isn’t even a scientific controversy: it’s a speculative philosophical opinion. Fascinating, certainly. But worth gambling the future of humanity?

My opinion is equally speculative, equally philosophical–but unlike Clark, I don’t need to assume that it’s true to make my case, only that it’s a viable scientific possibility. Nick Bostrom, of all people, actually explains it best, even though he’s arrogant enough to think he’s arguing for his own emancipatory thesis!

“Further, our human brains may cap our ability to discover philosophical and scientific truths. It is possible that the failure of philosophical research to arrive at solid, generally accepted answers to many of the traditional big philosophical questions could be due to the fact that we are not smart enough to be successful in this kind of enquiry. Our cognitive limitations may be confining us in a Platonic cave, where the best we can do is theorize about “shadows”, that is, representations that are sufficiently oversimplified and dumbed-down to fit inside a human brain.” (“Transhumanist Values”)

Now this is precisely what I think, that our ‘cognitive limitations’ have forced us to make do with ‘shadows,’ ‘oversimplified and dumbed-down’ information, particularly regarding ourselves–which is to say, the human. Since I’ve already quoted the opening passage from Kant’s “What is Enlightenment?” it perhaps serves, at this point, to quote the closing passage. Speaking of the importance of civil freedom, Kant concludes: “Eventually it even influences the principles of governments, which find that they can themselves profit by treating man, who is more than a machine, in a manner appropriate to his dignity” (60). Kant, given the science of his day, could still assert a profound distinction between man, the possessor of values, and machine, the possessor of none. Nowadays, however, the black box of the human brain has been cracked open, and the secrets that have come tumbling out would have made Kant shake with terror or fury. Man, we now know, is a machine–that much is simple. The question, and I assure you it is very real, is one of how things like moral dignity–which is to say, things like value–arise from this machine, if at all.

It literally could be the case that value is another one of these ‘shadows,’ an ‘oversimplified’ and ‘dumbed-down’ way to make the complexities of evolutionary effectiveness ‘fit inside a human brain.’ It now seems pretty clear, for instance, that the ‘feeling of willing’ is a biological subreption, a cognitive illusion that turns on our utter blindness to the neural antecedents to our decisions and thoughts. The same seems to be the case with our feeling of certainty. It’s also becoming clear that we only think we have direct access to things like our beliefs and motivations, that, in point of fact, we use the same ‘best guess’ machinery that we use to interpret the behaviour of others to interpret ourselves as well.

The list goes on. But the only thing that’s clear at this point is that we humans are not what we thought we were. We’re something else. Perhaps something else entirely. The great irony of posthuman studies is that you find so many people puzzling and pondering the what, when, and how of our ceasing to be human in the future, when essentially that process is happening now, as we speak. Put in philosophical terms, the ‘posthuman’ could be an epistemological achievement rather than an ontological one. It could be that our descendants will look back and laugh their gearboxes off at the notion of a bunch of soulless robots worrying about the consequences of becoming a bunch of soulless robots.

So here’s the question I would ask Mr. Bostrom: Which human are you talking about? The one you hope that we are, or the one that science will show us to be?

Either way, transhumanism as praxis–as a social movement requiring real-world action like membership drives and market branding–is well and truly ‘forked,’ to use a chess analogy: ‘Better living through science’ cannot be your foundational assumption unless you are willing to seriously consider what science has to say. You don’t get to pick and choose which traditional illusion you get to cling to.

Transhumanism, if you think about it, should be renamed transconfusionism, and rebranded as X+.

In a sense what I’m saying is pretty straightforward: no posthumanism that fails to consider the problem of the human (which is just to say, the problem of meaning and value) is worthy of the name. Such posthumanisms, I think anyway, are little more than wishful thinking, fantasies that pretend otherwise. Why? Because at no time in human history has the nature of the human been more in doubt.

But there has to be more to the picture, doesn’t there? This argument is just too obvious, too straightforward, to have been ‘overlooked’ these past couple decades. Or maybe not.

The fact is, no matter how eloquently I argue, no matter how compelling the evidence I adduce, how striking or disturbing the examples, next to no one in this room is capable of slipping the intuitive noose of who and what they think they are. The seminal American philosopher Wilfrid Sellars calls this the Manifest Image, the sticky sense of subjectivity provided by our immediate intuitions–and here’s the thing: it sticks no matter what science has to say (let alone a fantasy geek with a morbid fascination with consciousness and cognition). To genuinely think the posthuman requires us to see past our apparent, or manifest, humanity–and this, it turns out, is difficult in the extreme. So, to make my argument stick, I want to leave you with a way of understanding both why my argument is so destructive of transhumanism, and why that destructiveness is nevertheless so difficult to conceive, let alone to believe.

Look at it this way. The explanatory paradigm of the life sciences is mechanistic. Either we humans are machines, or everything from the Krebs cycle to cell mitosis is magical. This puts the question of human morality and meaning in an explanatory pickle, because, for whatever reason, the concepts belonging to morality and meaning just don’t make sense in mechanistic terms. So either we need to understand how machines like us generate meaning and morality, or we need to understand how machines like us hallucinate meaning and morality.

The former is, without any doubt, the majority position. But the latter, the position that occupies my time, is slowly growing, as is the mountain of counterintuitive findings in the sciences of the mind and brain. I have, quite against my inclination, prepared a handful of images to help you visualize this latter possibility, what I call the Blind Brain Theory.

Imagine we had perfect introspective access, so that each time we reflected on ourselves we were confronted with something like this:

We would see it all, all the wheels and gears behind what William James famously called the “blooming, buzzing confusion” of conscious life. Would there be any ‘choice’ in this system? Obviously not, just neural mechanisms picking up where environmental mechanisms have left off. How about ‘desire’? Again, nothing we really could identify as such, given that we would know, in intimate detail, the particulars of the circuits that keep our organism in homeostatic equilibrium with our environments. Well, how about morals, the values that guide us this way and that? Once again, it’s hard to understand what these might be, given that we could, at any moment, inspect the mechanistic regularities that in fact govern our behaviour. So no right or wrong? Well, what would these be? Of course, given the unpredictability of events, the mechanism would malfunction periodically, throw its wife’s work slacks into the dryer, maybe have a tooth or two knocked out of its gears. But this would only provide information regarding the reliability of its systems, not its ‘moral character.’

Now imagine dialling back the information available for introspective access, so that your ability to perfectly discriminate the workings of your brain becomes foggy:

Now imagine a cost-effectiveness expert (named ‘Evolution’) comes in, and tells you that even your foggy but complete access is far, far too expensive: computation costs calories, you know! So he goes through and begins blacking out whole regions of access according to arcane requirements only he is aware of. What’s worse, he’s drunk and stoned, and so there’s a whole haphazard, slap-dash element to the whole procedure, leaving you with something like this:

But of course, this foggy and fractional picture actually presumes that you have direct introspective access to information regarding the absence of information, when this is plainly not the case, and not required, given the rigours of your paleolithic existence. This means you can no longer intuit the fractional nature of your introspective intuitions; the far-flung fragments of access you possess actually seem like unified and sufficient wholes, leaving you with:

This impressionistic mess is your baseline. Your mind. But of course, it doesn’t intuitively seem like an impressionistic mess–quite the opposite, in fact. But this is simply because it is your baseline, your only yardstick. I know it seems impossible, but consider, if dreams lacked the contrast of waking life, they would be the baseline for lucidity, coherence, and truth. Likewise, there are degrees of introspective access–degrees of consciousness–that would make what you are experiencing this very moment seem like little more than a pageant of phantasmagorical absurdities.

The more the sciences of the brain discover, the more they are revealing that consciousness and its supposed verities–like value–are confused and fractional. This is the trend. If it persists, then meaning and morality could very well turn out to be artifacts of blindness and neglect–illusions to the degree that they seem whole and sufficient. If meaning and morality are best thought of as hallucinations, then the human, as it has been understood down through the ages, from the construction of Khufu to the first performance of Hamlet to the launch of Sputnik, never existed, and, in a crazy sense, we have been posthuman all along. And the transhuman program as envisioned by the likes of Nick Bostrom becomes little more than a hope founded on a pipedream.

And our future becomes more radically alien than any of us could possibly conceive, let alone imagine.

Technocracy, Buddhism, and Technoscientific Enlightenment (by Benjamin Cain)

by rsbakker

In “Homelessness and the Transhuman” I used some analogies to imagine what life without the naive and illusory self-image would be like. The problem of imagining that enlightenment should be divided into two parts. One is the relatively uninteresting issue of which labels we want to use to describe something. Would an impersonal, amoral, meaningless, and purposeless posthuman, with no consciousness or values as we usually conceive of them “think” at all? Would she be “alive”? Would she have a “mind”? Even if there are objective answers to such questions, the answers don’t really matter since however far our use of labels can be stretched, we can always create a new label. So if the posthuman doesn’t think, maybe she “shminks,” where shminking is only in some ways similar to thinking. This gets at the second, conceptual issue here, though. The interesting question is whether we can conceive of the contents of posthuman life. For example, just what would be the similarities and differences between thinking and shminking? What could we mean by “thought” if we put aside the naive, folk psychological notions of intentionality, truth, and value? We can use ideas of information and function to start to answer that sort of question, but the problem is that this taxes our imagination because we’re typically committed to the naive, exoteric way of understanding ourselves, as R. Scott Bakker explains.

One way to get clearer about what the transformation from confused human to enlightened posthuman would entail is to consider an example that’s relatively easy to understand. So take the Netflix practice described by Andrew Leonard in “How Netflix is Turning Viewers into Puppets.” Apparently, more Americans now watch movies legally streamed over the internet than they do on DVD or Blu-Ray, and this allows the stream providers to accumulate all sorts of data that indicate our movie preferences. When we pause, fast forward or stop watching streamed content, we supply companies like Netflix with enormous quantities of information which their number crunchers explain with a theory about our viewing choices. For example, according to Leonard, Netflix recently spent $100 million to remake the BBC series House of Cards, based on that detailed knowledge of viewers’ habits. Moreover, Netflix learned that the same subscribers who liked that earlier TV show also tend to like Kevin Spacey, and so the company hired Kevin Spacey to star in the remake.

So the point isn’t just that entertainment providers can now amass huge quantities of information about us, but that they can use that information to tailor their products to maximize their profits. In other words, companies can now come much closer to giving us exactly what we objectively want, as indicated by scientific explanations of our behaviour. As Leonard says, “The interesting and potentially troubling question is how a reliance on Big Data [all the data that’s now available about our viewing habits] might funnel craftsmanship in particular directions. What happens when directors approach the editing room armed with the knowledge that a certain subset of subscribers are opposed to jump cuts or get off on gruesome torture scenes or just want to see blow jobs. Is that all we’ll be offered? We’ve seen what happens when news publications specialize in just delivering online content that maximizes page views. It isn’t always the most edifying spectacle.”

So here we have an example not just of how technocrats depersonalize consumers, but of the emerging social effects of that technocratic perspective. There are numerous other fields in which the fig leaf of our crude self-conception is stripped away and people are regarded as machines. In the military, there are units, targets, assets, and so forth, not free, conscious, precious souls. Likewise, in politics and public relations, there are demographics, constituents, and special interests, and such categories are typically defined in highly cynical ways. Again, in business there are consumers and functionaries in bureaucracies, not to mention whatever exotic categories come to the fore in Wall Street’s mathematics of financing. Again, though, it’s one thing to depersonalize people in your thoughts, but it’s another to apply that sophisticated conception to some professional task of engineering. In other words, we need to distinguish between fantasy- and reality-driven depersonalization. Military, political, and business professionals, for example, may resort to fashionable vocabularies to flatter themselves as insiders or to rationalize the vices they must master to succeed in their jobs. Then again, perhaps those vocabularies aren’t entirely subjective; maybe soldiers can’t psych themselves up to kill their opponents unless they’re trained to depersonalize and even to demonize them. And perhaps public relations, marketing, and advertising are even now becoming more scientific.

.

The Double Standard of Technocracy

Be that as it may, I’d like to begin with just the one, pretty straightforward example of creating art to appeal to the consumer, based on inferences about patterns in mountains of data acquired from observations of the consumer’s behaviour. As Leonard says, we don’t have to merely speculate on what will likely happen to art once it’s left in the hands of bean counters. For decades, producers of content have researched what people want so that they could fulfill that demand. It turns out that the majority of people in most societies have bad taste owing to their pedestrian level of intelligence. Thus, when an artist is interested in selling to the largest possible audience to make a short-term profit, that is, when the artist thinks purely in such utilitarian terms, she must give those people what they want, which is drivel. And if all artists come to think that way, the standard of art (of movies, music, paintings, novels, sports, and so on) is lowered. Leonard points out that this happens in online news as well. The stories that make it to the front page are stories about sex or violence, because that’s what most people currently want to see.

So entertainment companies that will use this technoscience (the technology that accumulates data about viewing habits plus the scientific way of drawing inferences to explain patterns in those data) have some assumptions I’d like to highlight. First, these content producers are interested in short-term profits. If they were interested in long-term ones and were faced with depressing evidence of the majority’s infantile preferences, the producers could conceivably raise the bar by selling not to the current state of consumers but to what consumers could become if exposed to a more constructive, challenging environment. In other words, the producers could educate or otherwise improve the majority, suffering the consumers’ hostility in the short-term but helping to shape viewers’ preferences for the better and betting on that long-term approval. Presumably, this altruistic strategy would tend to fail because free-riders would come along and lower the bar again, tempting consumers with cheap thrills. In any case, this engineering of entertainment is capitalistic, meaning that the producers are motivated to earn short-term profit.

Second, the producers are interested in exploiting consumers’ weaknesses. That is, the producers themselves behave as parasites or predators. Again, we can conclude that this is so because of what the producers choose to observe. Granted, the technology offers only so many windows into the consumer’s preferences; at best, the data show only what consumers currently like to watch, not the potential of what they could learn to prefer if given the chance. Thus, these producers don’t think in a paternalistic way about their relationship with consumers. A good parent offers her child broccoli, pickles, and spinach rather than just cookies and macaroni and cheese, to introduce the child to a variety of foods. A good parent wants the child to grow into an adult with a mature taste. By contrast, an exploitative parent would feed her daughter, say, only what she prefers at the moment, in her current low point of development, ensuring that the youngster will suffer from obesity-related health problems when she grows up. Likewise, content producers are uninterested in polling to discern people’s potential for greatness, by asking about their wishes, dreams, or ideals. No, the technology in question scrutinizes what people do when they vegetate in front of the TV after a long, hard day on the job. The content producers thus learn what we like when we’re effectively infantilized by television, when the TV literally affects our brain waves, making us more relaxed and open to suggestion, and the producers mean to exploit that limited sample of information, as large as it may be. Thus, the producers mean to cash in by exploiting us when we’re at our weakest, to profit by creating an environment that tempts us to remain in a childlike state and that caters to our basest impulses, to our penchant for fallacies and biases, and so on. So not only are the content producers thinking as capitalists, they’re predators/parasites to boot.

Finally, this engineering of content depends on the technoscience in question. Acquiring huge stores of data is useless without a way of interpreting the data. The companies must look for patterns and then infer the consumer’s mindset in a way that’s testable. That is, the inferences must follow logically from a hypothesis that’s eventually explained by a scientific theory. That theory then supports technological applications. If the theory is wrong, the technology won’t work; for example, the streamed movies won’t sell.
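The inference loop described above–mine logged events for patterns, frame a testable hypothesis about the viewer, and check it against behaviour–can be sketched in miniature. This is purely a toy illustration with invented data and field names (nothing here reflects Netflix’s actual systems): it estimates how completely each viewer finishes titles in each genre, and takes the hypothesis that the most-completed genre is the preferred one.

```python
# Toy sketch of pattern-mining viewing logs (hypothetical data, not any
# real provider's API): infer a viewer's preferred genre from how much
# of each title they actually watch before stopping.
from collections import defaultdict

def completion_rates(events):
    """events: iterable of (viewer, genre, watched_fraction) tuples.
    Returns, per viewer, the average fraction watched in each genre."""
    samples = defaultdict(lambda: defaultdict(list))
    for viewer, genre, fraction in events:
        samples[viewer][genre].append(fraction)
    return {
        viewer: {g: sum(fs) / len(fs) for g, fs in genres.items()}
        for viewer, genres in samples.items()
    }

def predicted_preference(rates_for_viewer):
    """The 'hypothesis': the genre watched most completely is preferred."""
    return max(rates_for_viewer, key=rates_for_viewer.get)

# Invented log: Alice finishes thrillers but abandons a comedy early;
# Bob does the reverse.
events = [
    ("alice", "thriller", 0.95), ("alice", "thriller", 0.90),
    ("alice", "comedy", 0.30),
    ("bob", "comedy", 0.85), ("bob", "thriller", 0.40),
]
rates = completion_rates(events)
print(predicted_preference(rates["alice"]))  # thriller
print(predicted_preference(rates["bob"]))    # comedy
```

The ‘testable’ part is what the scale of Big Data buys: a real system would hold out some viewing events, predict what each viewer plays next, and discard the hypothesis if the predictions fail–which is why, as the paragraph above notes, a wrong theory shows up directly as content that doesn’t sell.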

The upshot is that this scientific engineering of entertainment is based on only a partial depersonalization: the producers depersonalize the consumers while leaving their own personal self-image intact. That is, the content producers ignore how the consumers naively think of themselves, reducing them to robots that can be configured or contained by technology, but the producers don't similarly give up their image of themselves as people in the naive sense. Implicitly, the consumers lose their moral, if not their legal, rights when they're reduced to robots, to passive streamers of content that's been carefully designed to appeal to the weakest part of them, whereas the producers will be the first to trumpet their moral and not just their legal right to private property. The consumers consent to purchase the entertainment, but the producers don't respect them as dignified beings; otherwise, again, the producers would think more about lifting these consumers up instead of just exploiting their weaknesses for immediate returns. Still, the producers think of themselves, surely, as normatively superior. Even if the producers style themselves as Nietzschean insiders who reject altruistic morality and prefer a supposedly more naturalistic, Ayn Randian value system, they still likely glorify themselves at the expense of their victims. And even if some of those who profit from the technocracy are literally sociopathic, that means only that they don't feel the value of those they exploit; nevertheless, a sociopath acts as an egotist, which means she presupposes a double standard, one for herself and one for everyone else.

.

From Capitalistic Predator to Buddhist Monk

What interests me about this inchoate technocracy, this business of using technoscience to design and manage society, is that it functions as a bridge to imagining a possible posthuman state. To cross over in our minds to the truly alien, we need stepping stones. Netflix is analogous to enlightened posthumanity in that Netflix is part of the way toward that destination. So when we consider Netflix we stand closer to the precipice and we can ask ourselves what giving up the rest of the personal self-image would be like. So suppose a content provider depersonalizes everyone, viewing herself as well as just a manipulable robot. On this supposition, the provider becomes something like a Buddhist who can observe her impulses and preferences without being attached to them. She can see the old self-image still operating in her mind, sustained as it is by certain neural circuits, but she’s trained not to be mesmerized by that image. She’s learned to see the reality behind the illusion, the code that renders the matrix. So she may still be inclined in certain directions, but she won’t reflexively go anywhere. She has the capacity to exploit the weak and to enrich herself, and she may even be inclined to do so, but because she doesn’t identify with the crudely-depicted self, she may not actually proceed down that expected path. In fact, the mystery remains as to why any enlightened person does whatever she does.

This calls for a comparison between the posthuman’s science-centered enlightenment and the Buddhist kind. The sort of posthuman self I’m trying to imagine transcends the traditional categories of the self, on the assumption that these categories rest on ignorance owing to the brain’s native limitations in learning about itself. The folk categories are replaced with scientific ones and we’re left wondering what we’d become were we to see ourselves strictly in those scientific terms. What would we do with ourselves and with each other? The emerging technocratic entertainment industry gives us some indication, but I’ve tried to show that that example provides us with only one stepping stone. We need another, so let’s try that of the Buddhist.

Now, Buddhist enlightenment is supposed to consist of a peaceful state of mind that doesn’t turn into any sort of suffering, because the Buddhist has learned to stop desiring any outcome. You only suffer when you don’t get what you want, and if you stop wanting anything, or more precisely if you stop identifying with your desires, you can’t be made to suffer. The lack of any craving for an outcome entails a discarding of the egoistic pretense of your personal independence, since it’s only when you identify narrowly with some set of goals that you create an illusion that’s bound to make you suffer, because the illusion is out of alignment with reality. In reality, everything is interconnected and so you’re not merely your body or your mind. When you assume you are, the world punishes you in a thousand degrees and dimensions, and so you suffer because your deluded expectations are dashed.

Here are a couple of analogies to clarify how this Buddhist frame of mind works, according to my understanding of it. Once you’ve learned to drive a car, driving becomes second nature to you, meaning that you come to identify with the car as your extended body. Prior to that identification, when you’re just starting to drive, the car feels awkward and new because you experience it as a foreign body. When you’ve familiarized yourself with the car’s functions, with the rules of the road, and with the experience of driving, sitting in the driver’s seat feels like slipping on an old pair of shoes. Every once in a while, though, you may snap out of that familiarity. When you’re in the middle of an intersection, in a left turn lane, you may find yourself looking at cars anew and being amazed and even a little scared about your current situation on the road: you’re in a powerful vehicle, surrounded by many more such vehicles, following all of these signs to avoid being slammed by those tons of steel. In a similar way, a native speaker of a language becomes very familiar with the shapes of the symbols in that language, but every now and again, when you’re distracted perhaps, you can slip out of that familiarity and stare in wonder at a word you’ve used a thousand times, like a child who’s never seen it before.

What I’m trying to get at here is the difference between having a mental state and identifying with it, which difference I take to be central to Buddhism. Being in a car is one thing, identifying with it is literally something else, meaning that there’s a real change that happens when driving becomes second nature to you. Likewise, having the desire for fame or fortune is one thing, identifying with either desire is something else. A Buddhist watches her thoughts come and go in her mind, detaching from them so that the world can’t upset her. But this raises a puzzle for me. Once enlightened, why should a Buddhist prefer a peaceful state of mind to one of suffering? The Buddhist may still have the desire to avoid pain and to seek peace, but she’ll no longer identify with either of those or with any other desire. So assuming she acts to lessen suffering in the world, how are those actions caused? If an enlightened Buddhist is just a passive observer, how can she be made to do anything at all? How can she lean in one direction or another, or favour one course of action rather than another? Why peace rather than suffering?

Now, there’s a difference between a bodhisattva and a Buddha: the former harbours a selfless preference to help others achieve enlightenment, whereas the latter gives up on the rest of the world and lives in a state of nirvana, which is passive, metaphysical selflessness. So a bodhisattva still has an interest in social engagement and merely learns not to identify so strongly with that interest, to avoid suffering if the interest doesn’t work out and the world slams the door in her face, whereas a Buddha may extinguish all of her mental states, effectively lobotomizing herself. Either way, though, it’s hard to see how the Buddhist could act intelligently, which is to say exhibit some pattern in her activities that reflects a pattern in her mind and acts at least as the last step in the chain of interconnected causes of her actions. A bodhisattva has desires but doesn’t identify with them and so can’t favor any of them. How, then, could this Buddhist put any morality into practice? Indeed, how could she prefer Buddhism to some other religion or worldview? And a Buddha may no longer have any distinguishable mental states in the first place, so she would have no interests to tempt her with the potential for mental attachments. Thus, we might expect full enlightenment in the Buddhist sense to be a form of suicide, in which the Buddhist neglects all aspects of her body because she’s literally lost her mind and thus her ability to care or to choose to control herself or even to manage her vital functions. (In Hinduism, an elderly Brahmin may choose this form of suicide for the sake of moksha, which is supposed to be liberation from nature, and Buddhism may explain how this suicide becomes possible for the enlightened person.)

The best explanation I have of how a Buddhist could act at all is the Taoist one that the world acts through her. The paradox of how the Buddhist’s mind could control her body even when the Buddhist dispenses with that mind is resolved if we accept the monist ontology in which everything is interconnected and so unified. Even if an enlightened Buddha loses personal self-control, this doesn’t mean that nothing happens to her, since the Buddhist’s body is part of the cosmic whole, and so the world flows in through her senses and out through her actions. The Buddhist doesn’t egoistically decide what to do with herself, but the world causes her to act in one way or another. Her behaviour, then, shouldn’t reflect any private mental pattern, such as a personal character or ego, since she’s learned to see through that illusion, but her actions will reflect the whole world’s character, as it were.

.

From Buddhist Monk to Avatar of Nature

Returning to the posthuman, the question raised by the Buddhist stepping stone is whether we can learn what it would be like to experience the death of the manifest image, the absence of the naive, dualistic and otherwise self-glorifying conception of the self, by imagining what it would be like to be the sun, the moon, the ocean, or just a robot. That’s how a scientifically enlightened posthuman would conceive of “herself”: she’d understand that she has no independent self but is part of some natural process, and if she’d identify with anything it would be with that larger process. Which process? Any selection would betray a preference and thus at least a partial resurrection of the ghostly, illusory self. The Buddhist gets around this with metaphysical monism: if everything is interconnected, the universe is one and there’s no need to choose what you are, since you’re metaphysically everything at once. So if all natural processes feed into each other, nature is a cosmic whole, and the posthuman sees very far and wide, sampling enough of nature to understand the universe’s character so that she’d presumably understand her actions to flow from that broader character.

And just here we reach a difference between Eastern (if not specifically Buddhist) and technoscientific enlightenment. Strictly speaking, Buddhism is atheistic, I think, but some forms of Buddhism are pantheistic, meaning that some Buddhists personify the interconnected whole. If we suppose that technoscience will remain staunchly atheistic, we must assume only that there are patterns in nature and not any character or ghostly Force or anything like that. Thus, if a posthuman can’t identify with the traditional myth of the self, with the conscious, rational, self-controlling soul, and yet the posthuman is to remain some distinct entity, I’m led to imagine this posthuman entity as an avatar of lifeless nature. What does nature do with its forces? It evolves molecules, galaxies, solar systems, and living species. The posthuman would be a new force of nature that would serve those processes of complexification and evolution, creating new orders of being. The posthuman would have no illusion of personal identity, because she’d understand too well the natural forces at work in her body to identify so narrowly and desperately with any mere subset of their handiwork. Certainly, the posthuman wouldn’t cling to any byproduct of the brain, but would more likely identify with the underlying, microphysical patterns and processes.

So would this kind of posthumanity be a force for good or evil? Surely, the posthuman would be beyond good or evil, like any natural force. Moral rules are conventions to manage deluded robots like us who are hypnotized by our brain’s daydream of our identity. Values derive from preferences of some things as better than others, which in turn depend on some understanding of The Good. In the technoscientific picture of nature, though, goodness and badness are illusions, but this doesn’t imply anything like the Satanist’s exhortation to do whatever you want. The posthuman would have as many wants as the rain when the rain falls from the sky. She’d have no ego to flatter, no will to express. Nevertheless, the posthuman would be caused to act, to further what the universe has already been doing for billions of years. I have only a worm’s understanding of that cosmic pattern. I speak of evolution and complexification, but those are just placeholders, like an empty five-line staff in modern musical notation. If we’re imagining a super-intelligent species that succeeds us, I take it we’re thinking of a species that can read the music of the spheres and that’s compelled to sing along.

Reactionary Atheism: Hagglund, Derrida, and Nooconservatism

by rsbakker

(Belated) Aphorism of the Day: Why break hearts or blow minds when you can rot souls?

.

The difference between the critic and the apologist in philosophy, one would think, is the difference between conceiving philosophy as refuge, a post hoc means to rationalize and so recuperate what we cherish or require, and conceiving philosophy as exposure, an ad hoc means to mutate thought and so see our way through what we think we cherish or require. Now in Continental philosophy so-called, the overwhelming majority of thinkers would consider themselves critics and not apologists. They would claim to be proponents of exposure, of the new, and deride the apologist for abusing reason in the service of wishful thinking.

But this, I hope to show, is little more than a flattering conceit. We are all children of Hollywood, all prone to faux-renegade affectations. Nowadays ‘critic,’ if anything, simply names a new breed of apologist. This is perhaps inevitable, in a certain sense. The more cognitive science learns regarding reason, the more intrinsically apologetic it seems to become, a confabulatory organ primarily adapted to policing and protecting our parochial ingroup aspirations. But it is also the case that thought (whatever the hell it is) has been delivered to a radically unprecedented juncture, one that calls its very intelligibility into question. Our ‘epoch of thinking’ teeters upon the abyssal, a future so radical as to make epic fantasy of everything we are presently inclined to label ‘human.’ Whether it acknowledges as much or not, all thought huddles in the shadow of the posthuman–the shadow of its end.

I’ve been thumping this particular tub for almost two decades now. It has been, for better or worse, the thematic impetus behind every novel I have written and every paper I have presented. And at long last, what was once a smattering of voices has become a genuine chorus (for reasons quite independent of my tub thumping I’m sure). Everyone agrees that something radical is happening. Also, everyone agrees that this ‘something’ turns on the ever-expanding powers of science–and the sciences of the brain in particular. This has led to what promises to become one of those generational changes in philosophical thinking, at least in its academic incarnation. Though winded, thought is at last attempting to pace the times we live in. But I fear that it’s failing this attempt, that, far from exposing itself to the most uncertain future humanity has ever known, materially let alone intellectually, it is rather groping for ways to retool and recuperate a philosophical heritage that the sciences are transforming into mythology as we speak. It is attempting to inoculate thought as it exists against the sweeping transformations engulfing its social conditions. To truly expose thought, I want to argue, is to be willing to let it die…

Or become inhuman.

My position is quite simple: Now that science is overcoming the neural complexities that have for so long made an intentional citadel out of the soul, it will continue doing what it has always done, which is offer sometimes simple, sometimes sophisticated, mechanical explanations of what it finds, and so effectively ‘disenchanting’ the brain the way it has the world. This first part, at least, is uncontroversial. The real question has to do with the ‘disenchantment,’ which is to say the degree to which these mechanical explanations will be commensurate with our intentional self-understanding, or what Sellars famously called the ‘manifest image.’ Since there are infinitely more ways for our mechanistic scientific understanding to contradict our intentional prescientific understanding, we should, all things being equal, expect that the latter will be overthrown. Indeed, we already have a growing mountain of evidence trending in this direction. Given our apologetic inclinations, however, it should come as no surprise that the literature is rife with arguments why all things are not equal. Aside from an ingrained suspicion of happy endings, especially where science is concerned (I’m inclined to think it will cut our throats), the difficulty I have with such arguments lies in their reliance on metacognitive intuition. For the life of me, I cannot understand why we are in any better position peering into our souls than our ancestors were peering into the heavens. Why should the accumulation of scientific information be any friendlier to our traditional, prescientific assumptions this one time around?

I simply don’t think the human, or for that matter, any of the concepts science has chased from the world into the shadows of the human brain, will prove to be the miraculous exception. Science will rewrite ‘rules’ the way it has orbits, ‘meanings’ the way it has planets, and so on, doing what it has done so many times in the past: take simplistic, narcissistic notions founded on spare and fragmentary information and replace them with portraits of breathtaking causal complexity.

This is why I’m so suspicious of the ongoing ‘materialist turn’ in Continental philosophy, why I see it more as a crypto-apologetic attempt to rescue traditional conceptual conceits than any genuine turn away from ‘experience.’ This is how I read Zizek’s The Parallax View several weeks back, and this is how I propose to read Martin Hagglund’s project in his recent (and quite wonderfully written) Radical Atheism: Derrida and the Time of Life. Specifically, I want to take issue with his materialist characterization of Derrida’s work, even though this seems to be the aspect of his book that has drawn the most praise. Aaron Hodges, in “Martin Hagglund’s Speculative Materialism,” contends that Radical Atheism has “effectively dealt the coup de grace to any understanding of deconstructive logic that remains under the sway of idealist interpretation.” Even John Caputo, in his voluminous counterargument, concedes that Hagglund’s Derrida is a materialist Derrida; he just happens to think that there are other Derridas as well.

Against the grain of Radical Atheism’s critical reception, then, I want to argue that no Derrida, Hagglund’s or otherwise, can be ‘materialist’ in any meaningful sense and remain recognizable as a ‘Derrida.’ He simply is not, as Hagglund claims, a philosopher of ‘ultratranscendence’ (as Hagglund defines the term). Derrida is not the author of any singular thought ‘beyond’ the empirical and the transcendental. Nor does he, most importantly, provide any way to explain the fundamental ‘synthesis,’ as Hagglund calls it, required to make sense of experience.

To evidence this last point, I will rehearse the explanation of ‘synthesis’ provided by the Blind Brain Theory (BBT). I will then go on to flex a bit of theoretical muscle, to demonstrate the explanatory power of BBT, the way it can ‘get behind’ and explicate philosophical positions even as notoriously arcane as Husserlian phenomenology or Derridean deconstruction. This provides us with the conceptual resources required to see the extent of Derrida’s noocentrism, the way he remains, despite the apparent profundity of his aleatory gestures, thoroughly committed to the centrality of meaning–the intentional. Far from ‘radical,’ I will contend, Derrida remains a nooconservative thinker, one thoroughly enmeshed in the very noocentric thinking Hagglund and so many others seem to think he has surpassed.

For those not familiar with Radical Atheism, I should note the selective, perhaps even opportunistic, nature of the reading I offer. From the standpoint of BBT, the distinction between deconstruction and negative theology is the distinction between deflationary conceptions of intentionality in its most proximal and distal incarnations. Thus the title of the present piece, ‘Reactionary Atheism.’ To believe in meaning of any sort is to have faith in some version of ‘God.’ Finite or infinite, mortal or immortal, the intentional form is conserved–and as I hope to show, that form is supernatural. BBT is a genuinely post-intentional theoretical position. According to it, there are no ‘meaning makers,’ objective or subjective. According to it, you are every bit as mythological as the God you would worship or honour. In this sense, the contest between atheistic and apophatic readings of Derrida amounts to little more than another intractable theological dispute. On the account offered here, both houses are equally poxed.

My reading therefore concentrates on the first two chapters of Radical Atheism, where Hagglund provides an interpretation of how (as Derrida himself claims) trace and differance arise out of his critique of Husserl’s Phenomenology of Internal Time-consciousness. Since Hagglund’s subsequent defence of ‘radical atheism’ turns on the conclusions he draws from this interpretation–namely, the ‘ultratranscendental’ status of trace and differance and the explanation of synthesis they offer–undermining these conclusions serves to undermine Hagglund’s thesis as a whole.


Atheism as traditionally understood, Hagglund begins, does not question the desire for God or immortality and so leaves ‘mortal’ a privative concept. To embrace atheism is to settle for mere mortality. He poses radical atheism as Derrida’s alternative, the claim that the conceptual incoherence of the desire for God and immortality forces us to affirm its contrary, the mortal:

The key to radical atheism is what I analyze as the unconditional affirmation of survival. This affirmation is not a matter of choice that some people make and others do not: it is unconditional because everyone is engaged by it without exception. Whatever one may want or whatever one may do, one has to affirm the time of survival, since it opens the possibility to live on–and thus to want something or to do something–in the first place. This unconditional affirmation of survival allows us to read the purported desire for immortality against itself. The desire to live on after death is not a desire for immortality, since to live on is to remain subjected to temporal finitude. The desire for survival cannot aim at transcending time, since the given time is the only chance for survival. There is thus an internal contradiction in the so-called desire for immortality. Radical Atheism, 2

Time becomes the limit, the fundamental constraint, the way, Hagglund argues, to understand how the formal commitments at the heart of Derrida’s work render theological appropriations of deconstruction unworkable. To understand deconstruction, you need to understand Derrida’s analysis of temporality. And once you understand Derrida’s analysis of temporality, he claims, you will see that deconstruction entails radical atheism, the incoherence of desiring immortality.

Although Hagglund will primarily base his interpretation of deconstructive temporality on a reading of Speech and Phenomena, it is significant, I think, that he begins with a reading of “Ousia and Gramme,” which is to say, a reading of Derrida’s reading of Heidegger’s reading of Hegel! In “Ousia and Gramme,” Derrida is concerned with the deconstructive revision of the Heideggerean problematic of presence. The key to this revision, he argues, lies in one of the more notorious footnotes in Being and Time, where Heidegger recapitulates the parallels between Hegel’s and Aristotle’s considerations of temporality. This becomes “the hidden passageway that makes the problem of presence communicate with the problem of the written trace” (Margins of Philosophy, 34). Turning from Heidegger’s reading of Hegel, Derrida considers what Aristotle himself has to say regarding time in Physics (4:10), keen to emphasize Aristotle’s concern with the aporias that seem to accompany any attempt to think the moment. The primary problem, as Aristotle sees it, is the difficulty of determining whether the now, which divides the past from the future, is always one and the same or distinct, for the now always seems to somehow be the same now, even as it is unquestionably a different now. The lesson that Derrida eventually draws from this has to do with the way Heidegger, in his attempt to wrest time from the metaphysics of presence, ultimately commits the very theoretical sins that he imputes to Hegel and Aristotle. As he writes: “To criticize the manipulation or determination of any one of these concepts from within the system always amounts, and let this expression be taken with its full charge of meaning here, to going around in circles: to reconstituting, according to another configuration, the same system” (60). The lesson, in other words, is that there is no escaping the metaphysics of presence.
Heidegger’s problem isn’t that he failed to achieve what he set out to achieve–How could it be when such failure is constitutive of philosophical thought?–but that he thought, if only for a short time, that he had succeeded.

The lesson that Hagglund draws from “Ousia and Gramme,” however, is quite different:

The pivotal question is what conclusion to draw from the antinomy between divisible time and indivisible presence. Faced with the relentless division of temporality, one must subsume time under a nontemporal presence in order to secure the philosophical logic of identity. The challenge of Derrida’s thinking stems from his refusal of this move. Deconstruction insists on a primordial division and thereby enables us to think the radical irreducibility of time as constitutive of any identity. Radical Atheism, 16-17

If there is one thing about Hagglund’s account that almost all his critics agree on, it is his clarity. But even at this early juncture, it should be clear that this purported ‘clarity’ possesses a downside. Derrida raises and adapts the Aristotelian problem of divisibility in “Ousia and Gramme” to challenge, not simply Heidegger’s claim to primordiality, but all claims to primordiality. And he criticizes Heidegger, not for thinking time in terms of presence, but for believing it was possible to think time in any other way. Derrida is explicitly arguing that ‘refusing this move’ is simply not possible, and he sees his own theoretical practice as no exception. His ‘challenge,’ as Hagglund calls it, lies in conceiving presence as something at once inescapable and impossible. Hagglund, in other words, distills his ‘pivotal question’ via a reading of “Ousia and Gramme” that pretty clearly runs afoul of the very theoretical perils it warns against. We will return to this point in due course.

Having isolated the ‘pivotal,’ Hagglund turns to the ‘difficult’:

The difficult question is how identity is possible in spite of such division. Certainly, the difference of time could not even be marked without a synthesis that relates the past to the future and thus posits an identity over time. Philosophies of time-consciousness have usually solved the problem by anchoring the synthesis in a self-present subject, who relates the past to the future through memories and expectations that are given in the form of the present. The solution to the problem, however, must assume that the consciousness that experiences time in itself is present and thereby exempt from the division of time. Hence, if Derrida is right to insist that the self-identity of presence is impossible a priori, then it is all the more urgent to account for how the synthesis of time is possible without being grounded in the form of presence. 17

Identity has to come from somewhere. And this is where Derrida, according to Hagglund, becomes a revolutionary part of the philosophical solution. “For philosophical reason to advocate endless divisibility,” he writes, “is tantamount to an irresponsible empiricism that cannot account for how identity is possible” (25). This, Hagglund contends, is Derrida’s rationale for positing the trace. The nowhere of the trace becomes the ‘from somewhere’ of identity, the source of ‘originary synthesis.’ Hagglund offers Derrida’s account of the spacing of time and the temporalizing of space as a uniquely deconstructive account of synthesis, which is to say, an account of synthesis that does not “subsume time under a nontemporal presence in order to secure the philosophical logic of identity” (16).

Given the centrality of the trace to his thesis, critics of Radical Atheism were quick to single it out for scrutiny. Where Derrida seems satisfied with merely gesturing to the natural, and largely confining actual applications of trace and differance to semantic contexts, Hagglund presses further: “For Derrida, the spacing of time is an ‘ultratranscendental’ condition from which nothing can be exempt” (19). And when he says ‘nothing,’ Hagglund means nothing, arguing that everything from the ideal to “minimal forms of life” answers to the trace and differance. Hagglund was quick to realize the problem. In a 2011 Journal of Philosophy interview, he writes, “[t]he question then, is how one can legitimize such a generalization of the structure of the trace. What is the methodological justification for speaking of the trace as a condition for not only language and experience but also processes that extend beyond the human and even the living?”

Or to put the matter more simply, just what is ‘ultratranscendental’ supposed to mean?

Derrida, for his part, saw trace and differance as (to use Gasche’s term) ‘quasi-transcendental.’ Derrida’s peculiar variant of contextualism turns on his account of trace and differance. Where pragmatic contextualists are generally fuzzy about the temporality implicit to the normative contexts they rely upon, Derrida actually develops what you could call a ‘logic of context’ using trace and differance as primary operators. This is why his critique of Husserl in Speech and Phenomena is so important. He wants to draw our eye to the instant-by-instant performative aspect of meaning. When you crank up the volume on the differential (as opposed to recuperative) passage of time, it seems to be undeniably irreflexive. Deconstruction is a variant of contextualism that remains ruthlessly (but not exclusively) focussed on the irreflexivity of semantic performances, dramatizing the ‘dramatic idiom’ through readings that generate creativity and contradiction. The concepts of trace and differance provide synchronic and diachronic modes of thinking this otherwise occluded irreflexivity. What renders these concepts ‘quasi-transcendental,’ as opposed to transcendental in the traditional sense, is nothing other than trace and differance. Where Hegel temporalized the krinein of Critical Philosophy across the back of the eternal, conceiving the recuperative role of the transcendental as a historical convergence upon his very own philosophy, Derrida temporalizes the krinein within the aporetic viscera of this very moment now, overturning the recuperative role of the transcendental, reinterpreting it as interminable deflection, deferral, divergence–and so denying his thought any self-consistent recourse to the transcendental. The concept DIFFERANCE can only reference differance via the occlusion of differance. “The trace,” as Derrida writes, “is produced as its own erasure” (“Ousia and Gramme,” 65). 
One can carve out a place for trace and differance in the ‘system space’ of philosophical thinking, say their ‘quasi-transcendentality’ (as Gasche does in The Tain of the Mirror, for instance) resides in the way they name both the condition of possibility and impossibility of meaning and life, or one can, as I would argue Derrida himself did, evince their ‘quasi-transcendentality’ through actual interpretative performances. One can, in other words, either refer or revere.

Since second-order philosophical accounts are condemned to the former, it has become customary in the philosophical literature to assign content to the impossibility of stable content assignation, to represent the way performance, or the telling, cuts against representation, or the told. (Deconstructive readings, you could say, amount to ‘toldings,’ readings that stubbornly refuse to allow the antinomy of performance and representation to fade into occlusion). This, of course, is one of the reasons late 20th century Continental philosophy came to epitomize irrationalism for so many in the Anglo-American philosophical community. It’s worth noting, however, that in an important sense, Derrida agreed with these worries: this is why he prioritized demonstrations of his position over schematic statements, drawing cautionary morals as opposed to traditional theoretical conclusions. As a way of reading, deconstruction demonstrates the congenital inability of reason and representation to avoid implicitly closing the loop of contradiction. As a speculative account of why reason and representation possess this congenital inability, deconstruction explicitly closes that loop itself.

Far from being a theoretical virtue, then, ‘quasi-transcendence’ names a liability. Derrida is trying to show philosophy that inconsistency, far from being a distal threat requiring some kind of rational piety to avoid, is maximally proximal, internal to its very practice. The most cursory survey of intellectual history shows that every speculative position is eventually overthrown via the accumulation of interpretations. Deconstruction, in this sense, can be seen as a form of ‘interpretative time-travel,’ a regimented acceleration of processes always already in play, a kind of ‘radical translation’ put into action in the manner most violent to theoretical reason. The only way Derrida can theoretically describe this process, however, is by submitting to it–which is to say, by failing the way every other philosophy has failed. ‘Quasi-transcendence’ is his way of building this failure in, a double gesture of acknowledging and immunizing; his way of saying, ‘In speaking this, I speak what cannot be spoken.’

(This is actually the insight that ended my tenure as a ‘Branch Derridean’ what seems so long ago, the realization that theoretical outlooks that manage to spin virtue out of their liabilities result in ‘performative first philosophy,’ positions tactically immune to criticism because they incorporate some totalized interpretation of critique, thus rendering all criticisms of their claims into exemplifications of those claims. This is one of the things I’ve always found the most fascinating about deconstruction: the way it becomes (for those who buy into it) a performative example of the very representational conceit it sets out to demolish.)

‘Quasi-transcendental,’ then, refers to ‘concepts’ that can only be shown. So what, then, does Hagglund mean by ‘ultratranscendental’ as opposed to ‘transcendental’ and ‘quasi-transcendental’? The first thing to note is that Hagglund, like Gasche and others, is attempting to locate Derrida within the ‘system space’ of philosophy and theory more generally. For him (as opposed to Derrida), deconstruction implies a distinct position that rationalizes subsequent theoretical performances. As far as I can tell, he views the recursive loop of performance and representation, telling and told, as secondary. The ultratranscendental is quite distinct from the quasi-transcendental (though my guess is that Hagglund would dispute this). For Hagglund, rather, the ultratranscendental is thought through the lens of the transcendental more traditionally conceived:

On the one hand, the spacing of time has an ultratranscendental status because it is the condition for everything all the way up to and including the ideal itself. The spacing of time is the condition not only for everything that can be cognized and experienced, but also for everything that can be thought and desired. On the other hand, the spacing of time has an ultratranscendental status because it is the condition for everything all the way down to minimal forms of life. As Derrida maintains, there is no limit to the generality of differance and the structure of the trace applies to all fields of the living. Radical Atheism, 19

The ultratranscendental, in other words, is simply an ‘all the way’ transcendental, as much a condition of possibility of life as a condition of possibility of experience. “The succession of time,” Hagglund states in his Journal of Philosophy interview, “entails that every moment negates itself–that it ceases to be as soon as it comes to be–and therefore must be inscribed as trace in order to be at all.” Trace and differance, he claims, are logical as opposed to ontological implications of succession, and succession seems to be fundamental to everything.

This is what warrants the extension of trace and differance from the intentional (the kinds of contexts in which Derrida was prone to deploy them) to the natural. And this is why Hagglund is convinced he’s offering a materialist reading of Derrida, one that allows him to generalize Derrida’s arche-writing to an ‘arche-materiality’ consonant with philosophical naturalism. But when you turn to his explicit statements to this effect, you find that the purported, constitutive generality of the trace, what makes it ultratranscendental, becomes something quite different:

This notion of the arche-materiality can accommodate the asymmetry between the living and the nonliving that is integral to Darwinian materialism (the animate depends upon the inanimate but not the other way around). Indeed, the notion of arche-materiality allows one to account for the minimal synthesis of time–namely, the minimal recording of temporal passage–without presupposing the advent or existence of life. The notion of arche-materiality is thus metatheoretically compatible with the most significant philosophical implications of Darwinism: that the living is essentially dependent on the nonliving, that animated intention is impossible without mindless, inanimate repetition, and that life is an utterly contingent and destructible phenomenon. Unlike current versions of neo-realism or neo-materialism, however, the notion of arche-materiality does not authorize its relation to Darwinism by constructing an ontology or appealing to scientific realism but rather articulating a logical infrastructure that is compatible with its findings. Journal of Philosophy

The important thing to note here is how Hagglund is careful to emphasize that the relationship between arche-materiality and Darwinian naturalism is one of compatibility. Arche-materiality, here, is posited as an alternative way to understand the mechanistic irreflexivity of the life sciences. This is more than a little curious given the ‘ultratranscendental’ status he wants to accord to the former. If it is the case that trace and differance understood as arche-materiality are merely compatible with rather than anterior to and constitutive of the mechanistic, Darwinian paradigm of the life sciences, then how could they be ‘ultratranscendental,’ which is to say, constitutive, in any sense? As an alternative, one might wonder what advantages, if any, arche-materiality has to offer theory. The advantages of mechanistic thinking should be clear to anyone who has seen a physician. So the question becomes one of what kind of conceptual work trace and differance do.

Hagglund, in effect, has argued himself into the very bind which I fear is about to seize Continental philosophy as a whole. He recognizes the preposterous theoretical hubris involved in arguing that the mechanistic paradigm depends on arche-materiality, so he hedges, settles for ‘compatibility’ over anteriority. In a sense, he has no choice. Time is itself the object of scientific study, and a divisive one at that. Asserting that trace and differance are constitutive of the mechanistic paradigm places his philosophical speculation on firmly empirical ground (physics and cosmology, to be precise)–a place he would rather not be (and for good reason!).

But this requires that he retreat from his earlier claims regarding the ultratranscendental status of trace and differance, that he rescind the claim that they constitute an ‘all the way down’ condition. He could claim they are merely transcendental in the Kantian, or ‘conditions of experience,’ sense, but then that would require abandoning his claim to materialism, and so strand him with the ‘old Derrida.’ So instead he opts for ‘compatibility,’ and leaves the question of theoretical utility, the question of why we should bother with arcane speculative tropes like trace and differance given the boggling successes of the mechanistic paradigm, unasked.

One could argue, however, that Hagglund has already given us his answer: trace and differance, he contends, allow us to understand how reflexivity arises from irreflexivity absent the self-present subject. This is their signature contribution. As he writes:

The synthesis of the trace follows from the constitution of time we have considered. Given that the now can appear only by disappearing–that it passes away as soon as it comes to be–it must be inscribed as a trace in order to be at all. This is the becoming-space of time. The trace is necessarily spatial, since spatiality is characterized by the ability to remain in spite of temporal succession. Spatiality is thus the condition for synthesis, since it enables the tracing of relations between past and future. Radical Atheism, 18

But as far as ‘explanations’ are concerned it remains unclear how this can be anything other than a speculative posit. The synthesis of now moments occurs somehow. Since the past now must be recuperated within future nows, it makes sense to speak of some kind of residuum or ‘trace.’ If this synthesis isn’t the product of subjectivity, as Kant and Husserl would have it, then it has to be the product of something. The question is why this ‘something’ need have anything to do with space. Why should the fact that the trace (like the Dude) ‘abides’ have anything to do with space? The fact that both are characterized by immunity to succession implies, well… nothing. The trace, you could say, is ‘spatial’ insofar as it possesses location. But it remains entirely unclear how spatiality ‘enables the tracing of relations between past and future,’ and so becomes the ‘condition for synthesis.’

Hagglund’s argument simply does not work. I would be inclined to say the same of Derrida, if I actually thought he was trying to elaborate a traditional theoretical position in the system space of philosophy. But I don’t: I think the aporetic loop he establishes between deconstructive theory and practice is central to understanding his corpus. Derrida takes the notion of quasi-transcendence (as opposed to ultratranscendence) quite seriously. ‘Trace’ and ‘differance’ are figures as much as concepts, which is precisely why he resorts to a pageant of metaphors in his subsequent work, ‘originary supplements’ such as spectres, cinders, gifts, pharmakons and so on. The same can be said of ‘arche-writing’ and yes, even ‘spacing’: Derrida literally offers these as myopic and defective ways of thinking some fraction of the unthinkable. Derrida has no transcendental account of how reflexivity arises from irreflexivity, only a myriad of quasi-transcendental ways we might think the relation of reflexivity and irreflexivity. The most he would say is that trace and differance allow us to understand how the irreflexivity characteristic of mechanism operates both on and within the synthesis of experience.

At the conclusion of “Freud and the Scene of Writing,” Derrida discusses the ‘radicalization of the thought of the trace,’ adding parenthetically, “a thought because it escapes the binarism and makes binarism possible on the basis of a nothing” (Writing and Difference, 230). This, once again, is what makes the trace and differance ‘quasi-transcendental.’ Our inability to think the contemporaneous, irreflexive origin of our thinking means that we can only think that irreflexivity under ‘erasure,’ which is to say, in terms at once post hoc and ad hoc. Given that trace and differance refer to the irreflexive, procrustean nature of representation (or ‘presence’), the fact that being ‘vanishes’ in the disclosure of beings, it seems to make sense that we should wed our every reference to them with an admission of the vehicular violence involved, the making present (via the vehicle of thought) of what can never be, nor ever has been, present.

In positioning Derrida’s thought beyond the binarism of transcendental and empirical, Hagglund is situating deconstruction in the very place Derrida tirelessly argues thought cannot go. As we saw above, Hagglund thinks advocating ‘endless divisibility’ is ‘philosophically irresponsible’ given the fact of identity (Radical Atheism, 25). What he fails to realize is that this is precisely the point: preaching totalized irreflexivity is a form of ‘irresponsible empiricism’ for philosophical reason. Trace and differance, as more than a few Anglo-American philosophical commentators have noted, are rationally irresponsible. No matter how fierce the will to hygiene and piety, reason is always besmirched and betrayed by its occluded origins. Thus the aporetic loop of theory and practice, representation and performance, reflexivity and irreflexivity–and, lest we forget, interiority and exteriority…

Which is to say, the aporetic loop of spacing. As we’ve seen, Hagglund wants to argue that spacing constitutes a solution to the fundamental philosophical problem of synthesis. If this is indeed the cornerstone of Derrida’s philosophy as he claims, then the ingenious Algerian doesn’t seem to think it bears making explicit. If anything, the sustained, explicit considerations of temporality that characterize his early work fade into the implicit background of his later material. This is because Derrida offers spacing, not as an alternate, nonintentional explanation of synthesis, but rather as a profound way to understand the aporetic form of that synthesis:

Even before it ‘concerns’ a text in narrative form, double invagination constitutes the story of stories, the narrative of narrative, the narrative of deconstruction in deconstruction: the apparently outer edge of an enclosure [cloture], far from being simple, simply external and circular, in accordance with the philosophical representation of philosophy, makes no sign beyond itself, toward what is utterly other, without becoming double or dual, without making itself be ‘represented,’ refolded, superimposed, re-marked within the enclosure, at least in what the structure produces as an effect of interiority. But it is precisely this structure-effect that is being deconstructed here. “More Than One Language,” 267-8

The temporal assumptions Derrida isolates in his critique of Husserl are clearly implicit here, but it’s the theme of spacing that remains explicit. What Derrida is trying to show us, over and over again, is a peculiar torsion in what we call experience: the ‘aporetic loop’ I mentioned above. Its most infamous statement is “there is nothing outside the text” (Of Grammatology, 158) and its most famous image is that of the “labyrinth which includes in itself its own exits” (Speech and Phenomena, 104). Derrida never relinquishes the rhetoric of space because the figure it describes is the figure of philosophy itself, the double-bind where experience makes possible the world that makes experience possible.

What Hagglund calls synthesis is at once the solution and the dilemma. It relates to the outside by doubling, becoming ‘inside-outside,’ thus exposing itself to what lies outside the possibility of inside-outside (and so must be thought under erasure). Spacing refers to the interiorization of exteriority via the doubling of interiority. The perennial philosophical sin (the metaphysics of presence) is to confuse this folding of interiority for all there is, for inside and outside. So to take Kant as an example, positing the noumenal amounts to a doubling of interiority: the binary of empirical and transcendental. What Derrida is attempting is nothing less than a thinking that remains, as much as possible, self-consciously open to what lies outside the inside-outside, the ‘nothing that makes such binarisms possible.’ Since traditional philosophy can only think this via presence, which is to say, via another doubling, the generation of another superordinate binary (the outside-outside versus the inside-outside (or as Hagglund would have it, the ultratranscendental versus the transcendental/empirical)), it can only remain unconsciously open to this absolute outside. Thus Derrida’s retreat into performance.

Far from any ‘philosophical solution’ to the ‘philosophical problem of synthesis,’ spacing provides a quasi-transcendental way to understand the dynamic and aporetic form of that synthesis, giving us what seems to be the very figure of philosophy itself, as well as a clue as to how thinking might overcome the otherwise all-conquering illusion of presence. Consider the following passage from “Differance,” a more complete version of the quote Hagglund uses to frame his foundational argument in Radical Atheism:

An interval must separate the present from what it is not in order for the present to be itself, but this interval that constitutes it as present must, by the same token, divide the present in and of itself, thereby also dividing, along with the present, everything that is thought on the basis of the present, that is, in our metaphysical language, every being, and singularly substance or the subject. In constituting itself, in dividing itself dynamically, this interval is what might be called spacing, the becoming-space of time or the becoming-time of space (temporization). And it is this constitution of the present, as an ‘originary’ and irreducibly nonsimple (and therefore, stricto sensu nonoriginary) synthesis of marks, or traces of retentions and protentions (to reproduce analogically and provisionally a phenomenological and transcendental language that soon will reveal itself to be inadequate), that I propose to call archi-writing, archi-traces, or differance. Which (is) (simultaneously) spacing (and) temporization. Margins of Philosophy, 13

Here we clearly see the movement of ‘double invagination’ described above, the way the ‘interval’ divides presence from itself both within itself and without, generating the aporetic figure of experience/world that would for better or worse become Derrida’s lifelong obsession. The division within is what opens the space (as inside/outside), while the division without, the division that outruns the division within, is what makes this space the whole of space (because of the impossibility of any outside inside/outside). Hagglund wants to argue “that an elaboration of Derrida’s definition allows for the most rigourous thinking of temporality by accounting for an originary synthesis without grounding it in an indivisible presence” (Radical Atheism, 18). Not only is his theoretical, ultratranscendental ‘elaboration’ orthogonal to Derrida’s performative, quasi-transcendental project, his rethinking of temporality (despite its putative ‘rigour’), far from explaining synthesis, ultimately re-inscribes him within the very metaphysics of presence he seeks to master and chastise. The irony, then, is that even though Hagglund utterly fails to achieve his thetic goals, there is a sense in which he unconsciously (and inevitably) provides a wonderful example of the very figure Derrida is continually calling to our attention. The problem of synthesis is the problem of presence, and it is insoluble, insofar as any theoretical solution, for whatever reason, is doomed to merely reenact it.

Derrida does not so much pose a solution to the problem of synthesis as he demonstrates the insolubility of the problem given the existing conceptual resources of philosophy. At most Derrida is saying that whatever brings about synthesis does so in a way that generates presence as deconstructively conceived, which is to say, structured as inside/outside, self/other, experience/world–at once apparently complete and ‘originary’ and yet paradoxically fragmentary and derivative. Trace and differance provide him with the conceptual means to explore the apparent paradoxicality at the heart of human thought and experience at a particular moment of history:

Differance is neither a word nor a concept. In it, however, we see the juncture–rather than the summation–of what has been most decisively inscribed in the thought of what is conveniently called our ‘epoch’: the difference of forces in Nietzsche, Saussure’s principle of semiological difference, difference as the possibility of [neurone] facilitation, impression and delayed effect in Freud, difference as the irreducibility of the trace of the other in Levinas, and the ontic-ontological difference in Heidegger. Speech and Phenomena, 130

It is this last ‘difference,’ the ontological difference, that Derrida singles out for special consideration. Differance, he continues, is strategic, a “provisionally privileged” way to track the “closure of presence” (131). In fact, if anything is missing in an exegetical sense from Hagglund’s consideration of Derrida it has to be Heidegger, who edited The Phenomenology of Internal Time-consciousness and, like Derrida, arguably devised his own philosophical implicature via a critical reading of Husserl’s account of temporality. In this sense, you could say that trace and differance are not the result of a radicalization of Husserl’s account of time, but rather a radicalization of a radicalization of that account. It is the ontological difference, the difference between being and beings, that makes presence explicit as a problem. Differance, you could say, strategically and provisionally renders the problem of presence (or ‘synthesis’) dynamic, conceives it as an effect of the trace. Where the ontological difference allows presence to hang pinned in philosophical system space for quick reference and retrieval, differance ‘references’ presence as a performative concern, as something pertaining to this very moment now. Far from providing the resources to ‘solve’ presence, differance expands the problem it poses by binding (and necessarily failing to bind) it to the very kernel of now.

Contra Hagglund, trace and differance do not possess the resources to even begin explaining synthesis in any meaningful sense of the term ‘explanation.’ To think that they do, I have argued, is to misconceive both the import and the project of deconstruction. But this does not mean that presence/synthesis is in fact insoluble. As the above quote suggests, Derrida himself understood the ‘epochal’ (as opposed to ‘ultratranscendental’) nature of the problematic motivating trace and differance. A student of intellectual history, he understood the contingency of the resources we are able to bring to any philosophical problem. He did not, as Adorno did working through the same conceptual dynamics via negative dialectics and identity thinking, hang his project from the possibility of some ‘Messianic moment,’ but this doesn’t mean he didn’t think the radical exposure whose semantic shadow he tirelessly attempted to chart was itself radically exposed.

And as it so happens, we are presently living through what is arguably the most revolutionary philosophical epoch of all, the point when the human soul, so long sheltered by the mad complexities of the brain, is at long last yielding to the technical and theoretical resources of the natural sciences. What Hagglund, deferring to the life sciences paradigm, calls ‘compatibility’ is a constitutive relation after all, only one running from nature to thought, world to experience. Trace and differance, far from ‘explaining’ the ‘ultratranscendental’ possibility of ‘life,’ are themselves open/exposed to explanation in naturalistic terms. They are not magical.

Deconstruction can be naturalized.

Colonoscopy

So what then is synthesis? How does reflexivity arise from irreflexivity?

Before tackling this question we need to remind ourselves of the boggling complexity of the world as revealed by the natural sciences. Phusis kruptesthai philei, Heraclitus allegedly said, ‘nature loves hiding.’ What it hides ‘behind’ is nothing less than our myriad cognitive incapacities, our inability to fathom complexities that outrun our brain’s ability to sense and cognize. ‘Flicker fusion’ in psychophysics provides a rudimentary and pervasive example: when the frequency of a flickering light crosses various (condition-dependent) thresholds, our experience of it will ‘fuse.’ What was a series of intermittent flashes becomes continuous illumination. As pedestrian as this phenomenon seems, it has enormous practical and theoretical significance. This is the threshold that determines, for instance, the frame rate for the presentation of moving images in film or video. Such technologies, you could say, actively exploit our sensory and cognitive bottlenecks, hiding with nature beyond our ability to differentiate.
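The logic of flicker fusion can be sketched in a few lines of code. The following is a toy model only, not actual psychophysics: it assumes a ‘sensor’ that can report nothing finer than the mean luminance over a fixed integration window (the window size and flicker frequencies below are illustrative numbers, not empirical thresholds). Flicker slower than the window survives as variation; flicker faster than the window averages out, becoming indistinguishable from steady light.

```python
# Toy model of flicker fusion. A detector that only registers the mean
# luminance over a fixed integration window loses all information about
# variation faster than that window: the differences cease to make a
# difference for the system. All parameter values are illustrative.

def sampled_luminance(flicker_hz, window_s=0.02, duration_s=1.0, dt=0.0005):
    """Mean luminance per integration window of a square-wave light."""
    n = int(duration_s / dt)
    # Square wave: on for half a period, off for half a period.
    signal = [1.0 if int(2 * flicker_hz * i * dt) % 2 == 0 else 0.0
              for i in range(n)]
    w = int(window_s / dt)
    return [sum(signal[i:i + w]) / w for i in range(0, n - w + 1, w)]

slow = sampled_luminance(flicker_hz=5)    # slow flicker: below 'fusion'
fast = sampled_luminance(flicker_hz=200)  # fast flicker: above 'fusion'

# The slow flicker survives sampling as variation between windows;
# the fast flicker 'fuses' into an apparently constant luminance.
print(max(slow) - min(slow))  # → 1.0 (flicker detectable)
print(max(fast) - min(fast))  # → 0.0 (indistinguishable from steady light)
```

The point of the sketch is the one made above: past the threshold, the missing information is not registered as missing. The fused output carries no marker that anything was averaged away.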

Differentiations that exceed our brain’s capacity to sense/cognize make no difference. Or put differently, information (understood in the basic sense of systematic differences making systematic differences) that exceeds the information processing capacities of our sensory and cognitive systems simply does not exist for those systems–not even as an absence. It simply never occurs to people that their incandescent lights are in fact discontinuous. Thus the profundity of the Heraclitean maxim: not only does nature conceal itself behind the informatic blind of complexity, it conceals this concealment. This is what makes science such a hard-won cultural achievement, why it took humanity so long (almost preposterously so, given hindsight) to see that it saw so little. Lacking information pertaining to our lack of information, we assumed we possessed all the information required. We congenitally assumed, in other words, the sufficiency of what little information we had available. Only now, after centuries of accumulating information via institutionalized scientific inquiry, can we see how radically insufficient that information was.

Take geocentrism for instance. Lacking information regarding the celestial motion and relative location of the earth, our ancestors assumed it was both motionless and central, which is to say, positionally self-identical relative to itself and the cosmos. Geocentrism is the result of a basic perspectival illusion, a natural assumption to make given the information available and the cognitive capacities possessed. As strange as it may sound, it can be interpreted as a high-dimensional, cognitive manifestation of flicker fusion, the way the absence of information (differences making differences) results in the absence of differentiation, which is to say, identity.

Typically we identify ‘misidentifications’ with the misapplication of representations, as when, for example, children call whales fish. Believing whales are fish and believing the earth is the motionless centre of the universe would thus seem to be quite different kinds of mistakes. Both are ‘misrepresentations,’ mismatches between cognition and the world, but where the former mistake is categorical, the latter is empirical. The occult nature of this ‘matching’ makes it difficult to do much more than classify them together as mistakes, the one a false identification, the other a false theory.

Taking an explicitly informatic view, however, allows us to see both as versions of the mistake you’re making this very moment, presuming as you do the constancy of your illuminated computer screen (among other things). Plugging the brain into its informatic environment reveals the decisive role played by the availability of information, how thinking whales are fish and thinking the earth is the motionless centre of the universe both turn on the lack of information, the brain’s inability to access the systematic differences required to differentiate whales from fish or the earth’s position over time. Moreover, it demonstrates the extraordinarily granular nature of human cognition as traditionally conceived. It reveals, in effect, the possibility that our traditional, intentional understanding of cognition should itself be seen as an artifact of information privation.

Each of the above cases–flicker fusion, geocentrism, and misidentification–involves our brain’s ability to comprehend its environments given its cognitive resources and the information available. With respect to cognizing cognition, however, we need to consider the brain’s ability to cognize itself given, once again, its cognitive resources and the information available. Much of the philosophical tradition has attributed an exemplary status to self-knowledge, thereby assuming that the brain is in a far better position to cognize itself than its environments. But as we saw in the case of environmental cognition, the absence of information pertaining to the absence of information generates the illusion of sufficiency, the assumption that the information available is all the information there is. A number of factors, including the evolutionary youth of metacognition, the astronomical complexity of the brain, not to mention the growing mountain of scientific evidence indicating rampant metacognitive error, suggest that our traditional assumptions regarding the sufficiency of theoretical metacognition need to be set aside. It’s becoming increasingly likely that metacognitive intuitions, far from constituting some ‘plenum,’ are actually the product of severe informatic scarcity.

Nor should we be surprised: science is only just beginning to mine the informatic complexities of the human brain. Information pertaining to what we are as a matter of scientific fact is only now coming to light. Left to our own devices, we can only see so much of the sky. The idea of our ancient ancestors looking up and comprehending everything discovered by modern physics and cosmology is, well, nothing short of preposterous. They quite simply lacked the information. So why should we think peering at the sky within will prove any different than the sky above? Taking the informatic perspective thus raises the spectre of noocentrism, the possibility that our conception of ourselves as intentional is a kind of perspectival illusion pertaining to metacognition not unlike geocentrism in the case of environmental cognition.

Thus the Blind Brain Theory, the attempt to naturalistically explain intentional phenomena in terms of the kinds and amounts of information missing. Where Hagglund claims ‘compatibility’ with Darwinian naturalism, BBT exhibits continuity: it takes the mechanistic paradigm of the life sciences as its basis. To the extent that it can explain trace and differance, then, it can claim to have naturalized deconstruction.

According to BBT, the intentional structure of first-person experience–the very thing phenomenology takes itself to be describing–is an artifact of informatic neglect, a kind of cognitive illusion. So, for instance, when Hagglund (explaining Husserl’s account of time-consciousness) writes “[t]he notes that run off and die away can appear as a melody only through an intentional act that apprehends them as an interconnected sequence” (56) he is literally describing the way that experience appears to a metacognition trussed in various forms of neglect. As we shall see, where Derrida, via the quasi-transcendentals of trace and differance, can only argue the insufficiencies plaguing such intentional acts, BBT possesses the resources to naturalistically explain, not only the insufficiencies, but why metacognition attributes intentionality to temporal cognition at all, why the apparent paradoxes of time-consciousness arise, and why it is that trace and differance make ‘sense’ the way they do. ‘Brain blindness’ or informational lack, in other words, can not only explain many of the perplexities afflicting consciousness and the first-person, it can also explain–if only in a preliminary and impressionistic way–much of the philosophy turning on what seem to be salient intentional intuitions.

Philosophy becoming transcendentally self-conscious as it did with Hume and Kant can be likened to a kid waking up to the fact that he lives in a peculiar kind of box, one not only walled by neglect (which is to say, the absence of information–or nothing at all), but unified by it as well. Kant’s defining metacognitive insight came with Hume: Realizing the wholesale proximal insufficiency of experience, he understood that philosophy must be ‘critical.’ Still believing in reason, he hoped to redress that insufficiency via his narrow form of transcendental interpretation. He saw the informatic box, in other words, and he saw how everything within it was conditioned, but assuming the sufficiency of metacognition, he assumed the validity of his metacognitive ‘deductions.’ Thus the structure of the empirical, the conditioned, and the transcendental, the condition: the attempt to rationally recuperate the sufficiency of experience.

But the condition is, as a matter of empirical fact, neural. The speculative presumption that something resembling what we think we metacognize as soul, mind, or being-in-the-world arises at some yet-to-be naturalized ‘level of description’–noocentrism–is merely that, a speculative presumption that in this one special case (predictably, our case) science will redeem our intentional intuitions. BBT offers the contrary speculative presumption, that something resembling what we think we metacognize as soul, mind, or being-in-the-world will not arise at some yet-to-be naturalized ‘level of description’ because nothing resembles what we think we metacognize at any level. Cognition is fractionate, heuristic, and captive to the information available. The more scant or mismatched the information, the more error prone cognition becomes. And no cognitive system faces the informatic challenges confronting metacognition. The problem, simply put, is that we lack any ‘meta-metacognition,’ and thus any intuition of the radical insufficiency of the information available relative to the cognitive resources possessed. The kinds of low-dimensional distortions revealed are therefore taken as apodictic.

There are reasons why first-person experience appears the way it does, they just happen to be empirical rather than transcendental. Transcendental explanation, you could say, is an attempt to structurally regiment first-person experience in terms that take the illusion to be real. The kinds of tail-chasing analyses one finds in Husserl literally represent an attempt to dredge some kind of formal science out of what are best understood as metacognitive illusions. The same can be said for Kant. Although he deserves credit for making the apparent asymptotic structure of conscious experience explicit, he inevitably confused the pioneering status of his subsequent interpretations–the fact that they were, for the sake of sheer novelty, the ‘only game in town’–for a kind of synthetic deductive validity. Otherwise he was attempting to ‘explain’ what are largely metacognitive illusions.

According to BBT, ‘transcendental interpretation’ represents the attempt to rationalize what it is we think we see when we ‘reflect’ in terms (intentional) congenial to what it is we think we see. The problem isn’t simply that we see far too little, but that we are entirely blind to the very thing we need to see: the context of neurofunctional processes that explains the why and how of the information broadcast to or integrated within conscious experience. To say the neurofunctionality of conscious experience is occluded is to say metacognition accesses no information regarding the actual functions discharged by the information broadcast or integrated. Blind to what lies outside its informatic box, metacognition confuses what it sees for all there is (as Kahneman might say), and generates ‘transcendental interpretations’ accordingly. Reasoning backward with inadequate cognitive tools from inadequate information, it provides ever more interpretations to ‘hang in the air’ with the interpretations that have come before.

‘Transcendental,’ in other words, simply names those prescientific, medial interpretations that attempt to recuperate the apparent sufficiency of conscious experience as metacognized. BBT, on the other hand, is exclusively interested in medial interpretations of what is actually going on, regardless of speculative consequences. It is an attempt to systematically explain away conscious experience as metacognized–the first-person–in terms of informatic privation and heuristic misadventure.

This will inevitably strike some readers as ‘positivist,’ ‘scientistic,’ or ‘reductive,’ terms that have become scarcely more than dismissive pejoratives in certain philosophical circles, an excuse to avoid engaging what science has to say regarding their domain–the human. BBT, in other words, is bound to strike certain readers as chauvinistic, even imperial. But, if anything, BBT is bent upon dispelling views grounded in parochial sources of information–chauvinism. In fact, it is transcendental interpretation that restricts itself to nonscientific sources of information under the blanket assumption of metacognitive sufficiency, the faith that enough information of the right kind is available for actual cognition. Transcendental interpretation, in other words, remains wedded to what Kant called ‘tutelary natures.’ BBT, however, is under no such constraint; it considers both metacognitive and scientific information, understanding that the latter, on pain of supernaturalism, simply has to provide the baseline for reliable theoretical cognition (whatever that ultimately turns out to be). Thus the strange amalgam of scientific and philosophical concepts found here.

If reliable theoretical cognition requires information of the right kind and amount, then it behooves the philosopher, deconstructive or transcendental, to take account of the information their intentional rationales rely upon. If that information is primarily traditional and metacognitive–prescientific–then that philosopher needs some kind of sufficiency argument, some principled way of warranting the exclusion of scientific information. And this, I fear, has become all but impossible to do. If the sufficiency argument provided is speculative–that is, if it also relies on traditional claims and metacognitive intuitions–then it simply begs the question. If, on the other hand, it marshals information from the sciences, then it simply acknowledges the very insufficiency it is attempting to fend off.

The epoch of intentional philosophy is at an end. It will deny and declaim–it can do nothing else–but to little effect. Like all prescientific domains of discourse it can only linger and watch its credibility evaporate into New Age aether as the sciences of the brain accumulate ever more information and refine ever more instrumentally powerful interpretations of that information. It’s hard to argue against cures. Any explanatory paradigm that restores sight to the blind, returns mobility to the crippled, not to mention facilitates the compliance of the masses, will utterly dominate the commanding heights of cognition.

Far more than mere theoretical relevance is at stake here.

On BBT, all traditional and metacognitive accounts of the human are the product of extreme informatic poverty. Ironically enough, many have sought intentional asylum within that poverty in the form of a priori or pragmatic formalisms, confusing the lack of information for the lack of substantial commitment, and thus for immunity against whatever the sciences of the brain may have to say. But this just amounts to a different way of taking refuge in obscurity. What are ‘rules’? What are ‘inferences’? Unable to imagine how science could answer these questions, they presume either that science will never be able to answer them, or that it will answer them in a manner friendly to their metacognitive intuitions. Taking the history of science as its cue, BBT entertains no such hopes. It sees these arguments for what they happen to be: attempts to secure the sufficiency of low-dimensional, metacognitive information, to find gospel in a peephole glimpse.

The same might be said of deconstruction. Despite their purported radicality, trace and differance likewise belong to a low-dimensional conceptual apparatus stemming from a noocentric account of intentional sufficiency. ‘Mystic writing pad’ or no, Derrida remains a philosopher of experience as opposed to nature. As David Roden has noted, “while Derrida’s work deflates the epistemic primacy of the ‘first person,’ it exhibits a concern with the continuity of philosophical concepts that is quite foreign to the spirit of contemporary naturalism” (“The Subject”). The ‘advantage’ deconstruction enjoys, if it can be called such, lies in its relentless demonstration of the insufficiency plaguing all attempts to master meaning, including its own. But as we have seen above, it can only do such from the fringes of meaning, as a ‘quasi-transcendentally’ informed procedure of reading. Derrida is, strangely enough, like Hume in this regard, only one forewarned of the transcendental apologetics of Kant.

Careful readers will have already noted a number of striking parallels between the preceding account of BBT and the deconstructive paradigm. Cognition (or the collection of fractionate heuristic subsystems we confuse for such) only has recourse to whatever information is available, thus rendering sufficiency the perennial default. Even when cognition has recourse to supplementary information pertaining to the insufficiency of information, information is processed, which is to say, the resulting complex (which might be linguaformally expressed as, ‘Information x is insufficient for reliable cognition’) is taken as sufficient insofar as the system takes it up at all. Informatic insufficiency is parasitic on sufficiency, as it has to be, given the mechanistic nature of neural processing. For any circuit involving inputs and outputs, differences must be made. Sufficient or not, the system, if it is to function at all, must take it as such.

(I should pause to note a certain temptation at this juncture, one perhaps triggered by the use of the term ‘supplementary.’ One can very easily deconstruct the above set of claims the way one can deconstruct any set of theoretical claims, scientific or speculative. But where the deconstruction of speculative claims possesses or at least seems to possess clear speculative effects, the deconstruction of scientific claims does not, as a rule, possess any scientific effects. BBT, recall, is an empirical theory, and as such stands beyond the pale of decisive speculative judgment (if, indeed, there is such a thing).)

The cognition of informatic insufficiency always requires sufficiency. To ‘know’ that you are ‘wrong’ is to be right about being wrong. The positivity of conscious experience and cognition follows from the mechanical nature of brain function, the mundane fact that differences must be made. Now, whatever ‘consciousness’ happens to be as a natural phenomenon (apart from our hitherto fruitless metacognitive attempts to make sense of it), it pretty clearly involves the ‘broadcasting’ or ‘integration’ of information (systematic differences made) from across the brain. At any given instant, conscious experience and cognition access only an infinitesimal fraction of the information processed by the brain: conscious experience and cognition, in other words, possess any number of informatic limits. Conscious experience and cognition are informatically encapsulated at any given moment. It’s not just that huge amounts of information are simply not available to the conscious subsystems of the brain, it’s that information allowing the cognition of those subsystems for what they are isn’t available. The positivity of conscious experience and cognition turns on what might be called medial neglect, the structural inability to consciously experience or cognize the mechanisms behind conscious experience and cognition.

Medial neglect means the mechanics of the system are not available to the system. The importance of this observation cannot be overstated. The system cannot cognize itself the way it cognizes its environments, which is to say, causally, and so must cognize itself otherwise. What we call ‘intentionality’ is this otherwise. Most of the peculiarities of this ‘cognition otherwise’ stem from the structural inability of the system to track its own causal antecedents. The conscious subsystems of the brain cannot cognize the origins of any of their processes. Moreover, they cannot even cognize the fact that this information is missing. Medial neglect means conscious experience and cognition are constituted by mechanistic processes that structurally escape conscious experience and cognition. And this is tantamount to saying that consciousness is utterly blind to its own irreflexivity.

And as we saw above, in the absence of differences we experience/cognize identity.

On BBT, then, the ‘fundamental synthesis’ described by Hagglund is literally a kind of ‘flicker fusion,’ a metacognitive presumption of identity where there is none. It is a kind of mandatory illusion: illusory because it egregiously mistakes what is the case, and mandatory because, like the illusion of continuous motion in film, it involves basic structural capacities that cannot be circumvented and so ‘seen through.’ But where with film environmental cognition blurs the distinction between discrete frames into an irreflexive, sensible continuity, the ‘trick’ played upon metacognition is far more profound. The brain has evolved to survive and exploit environmental change, irreflexivity. First and foremost, human cognition is the evolutionary product of the need to track environmental irreflexivity with enough resolution and fidelity to identify and avoid threats and identify and exploit opportunities. You could say it is an ensemble of irreflexivities (mechanisms) parasitic upon the greater irreflexivity of its environment (or to extend Craver’s terms, the brain is a component of the ‘brain/environment’). Lacking the information required to cognize temporal difference, it perceives temporal continuity. Our every act of cognition is at once irrevocable and blind to itself as irrevocable. Because it is blind to itself, it cannot, temporally speaking, differentiate itself from itself. As a result, such acts seem to arise from some reflexive source. The absence of information, once again, means the absence of distinction, which means identity. The now, the hitherto perplexing and inexplicable fusion of distinct times, becomes the keel of subjectivity, something that appears (to metacognition at least) to be a solitary, reflexive exception in a universe entirely irreflexive otherwise.

This is the cognitive illusion that both Kant and Husserl attempted to conceptually regiment, Kant by positing the transcendental unity of apperception, and Husserl via the transcendental ego. This is also the cognitive illusion that stands at the basis of our understanding of persons, both ourselves and others.

When combined with sufficiency, this account of reflexivity provides us with an elegant way to naturalize presence. Sufficiency means that the positivity of conscious experience and cognition ‘fills the existential screen’: there is nothing but what is experienced and cognized at any given moment. The illusion of reflexivity can be seen as a temporalization of the illusion of sufficiency: lacking the information required to relativize sufficiency to any given moment, metacognition blurs it across all times. The ‘only game in town effect’ becomes an ‘only game in time effect’ for the mere want of metacognitive information–medial neglect. The target of metacognition, conscious experience and cognition, appears to be something self-sustaining, something immediately, exhaustively self-present, something utterly distinct from the merely natural, and something somehow related to the eternal.

And with the naturalization of presence comes the naturalization of the aporetic figure of philosophy that so obsessed Derrida for the entirety of his career. Sufficiency, the fact that conscious experience and cognition ‘fills the screen,’ means that the limits of conscious experience and cognition always outrun conscious experience and cognition. Sufficiency means the boundaries of consciousness are asymptotic, ‘limits with only one side.’ The margins of your visual attention provide a great example of this. The limits of seeing can never be seen: the visual information integrated into conscious experience and cognition simply trails into ‘oblivion.’ The limits of seeing are thus visually asymptotic, though the integration of vision into a variety of other systems allows those limits to be continually, effortlessly cognized. Such, however, is not the case when it comes to the conscious subsystems of the brain as a whole. They are, once again, encapsulated. Conscious experience and cognition only exists ‘for’ conscious experience and cognition ‘within’ conscious experience and cognition. To resort to the language of representation favoured by Derrida, the limits of representation only become available via representation.

And all this, once again, simply follows from the mechanistic nature of the human brain, the brute fact that the individual mechanisms engaged in informatically comporting our organism to itself and its (social and natural) environments, are engaged and so incapable of systematically tracking their own activities let alone the limitations besetting them. Sufficiency is asymptosis. Such tracking requires a subsequent reassignment of neurocomputational resources–it must always be deferred to a further moment that is likewise mechanically incapable of tracking its own activities. This post hoc tracking, meanwhile, literally has next to nothing that it can systematically comport itself to (or ‘track’). Thus each instant of functioning blots the instant previous, rendering medial neglect all but complete. Both the incalculably intricate and derived nature of each instant is lost, as is the passage between instants, save for what scant information is buffered or stored. And so are irreflexive repetitions whittled into anosognosiac originals.

Theoretical metacognition, or philosophical reflection, confronts the compelling intuition that it is originary, that it stands outside the irreflexive order of its environments, that it is in some sense undetermined or free. Precisely because it is mechanistic, it confuses itself for ‘spirit,’ for something other than nature. As it comes to appreciate (through the accumulation of questions (such as those posed by Hume)) the medial insufficiency of conscious experience as metacognized, it begins to posit medial prosthetics that dwell in the asymptotic murk, ‘conditions of possibility,’ formal rationalizations of conscious experience as metacognized. Asymptosis is conceived as transcendence in the Kantian sense (as autoaffection, apperceptive unity, so on), forms that appeal to philosophical intuition because of the way they seem to conserve the illusions compelled by informatic neglect. But since the assumption of metacognitive identity is an artifact of missing information, which is to say, cognitive incapacity, the accumulation of questions (which provide information regarding the absence of information) and the accumulation of information pertaining to irreflexivity (which, like external relationality, always requires more information to cognize), inevitably cast these transcendental rationalizations into doubt. Thus the strange inevitability of deconstruction (or negative dialectics, or the ‘philosophies of difference’ more generally), the convergence of philosophical imagination about the intuition of some obdurate, inescapable irreflexivity concealed at the very root of conscious experience and cognition.

Deconstruction can be seen as a ‘low resolution’ (strategic, provisional) recognition of the medial mechanicity that underwrites the metacognitive illusion of ‘meaning.’ Trace and differance are emissaries of irreflexivity, an expression of the neuromechanics of conscious experience and cognition given only the limited amount of information available to conscious experience and cognition. As mere glimmers of our mechanistic nature, however, they can only call attention to the insufficiencies that haunt the low-dimensional distortions of the soul. Rather than overthrow the illusions of meaning, they can at most call attention to the way it ‘wobbles,’ thus throwing a certain image of subjective semantic stability and centrality into question. Deconstruction, for all its claims to ‘radicalize,’ remains a profoundly noocentric philosophy, capable of conceiving the irreflexive only as the ‘hidden other’ of the reflexive. The claim to radicality, if anything, cements its status as a profoundly nooconservative mode of philosophical thought. Deconstruction becomes, as we can so clearly see in Hagglund, a form of intellectual hygiene. ‘Deconstructed’ intentional concepts begin to seem like immunized intentional concepts, ‘subjects’ and ‘norms’ and ‘meanings’ that are all the sturdier for referencing their ‘insufficiency’ in theoretical articulations that take them as sufficient all the same. Thus the oxymoronic doubling evinced by ‘deconstructive ethics’ or ‘deconstructive politics.’

The most pernicious hallucination, after all, is the hallucination that claims to have been seen through.

The present account, however, does not suffer happy endings, no matter how aleatory or conditional. According to BBT, nothing has ever been, nor ever will be, ‘represented.’ Certainly our brains mechanically recapitulate myriad structural features of their environments, but at no point do these recapitulations inherit the occult property of aboutness. With BBT, these phantasms that orthogonally double the world become mere mechanisms, environmentally continuous components that may or may not covary with their environments, just more ramshackle life, the product of over 3 billion years of blind guessing. We become lurching towers of coincidence, happenstance conserved in meat. Blind to neurofunctionality, the brain’s metacognitive systems have no choice but to characterize the relation between the environmental information accumulated and those environments in acausal, nonmechanical terms. Sufficiency assures that this metacognitive informatic poverty will seem a self-evident plenum. The swamp of causal complexity is drained. The fantastically complicated mechanistic interactions constituting the brain/environment vanish into the absolute oblivion of the unknown unknown, stranding metacognition with the binary cartoon of a ‘subject’ ‘intending’ some ‘object.’ Statistical gradations evaporate into the procrustean discipline of either/or.

This, if anything, is the image I want to leave you with, one where the traditional concepts of philosophy can be seen for the granular grotesqueries they are, the cartoonish products of a metacognition pinioned between informatic scarcity and heuristic incapacity. I want to leave you with, in effect, an entirely new way to conceive philosophy, one adequate to the new and far more terrifying ‘Enlightenment’ presently revolutionizing the world around us. Does anyone really think their particular, prescientific accounts of the soul will escape unscathed or emerge redeemed by what the sciences of the brain will reveal over the coming decades? Certainly one can argue points with BBT, a position whose conclusions are so dismal that I cannot myself fully embrace them. What one cannot argue against is the radical nature of our times, or with the fact that science has at long last colonized the soul, that it is, even now, doing what it always does when it breaches some traditional domain of discourse: replace our always simplistic and typically flattering assumptions with portraits of bottomless intricacy and breathtaking indifference. We are just beginning, as a culture, to awaken to the fact that we are machines. Throw words against this prospect if you must. The engineers and the institutions that own them will find you a most convenient distraction.


The Posthuman as Evolution 3.0

by rsbakker

Aphorism of the Day: Knowing that you know that I know that you know, we should perhaps, you know, spark a doob and like, call the whole thing off.

.

So for years now I’ve had this pet way of understanding evolution in terms of effect feedback (EF) mechanisms, structures whose functions produce effects that alter the original structure. Morphological effect feedback mechanisms started the show: DNA and reproductive mutation (and other mechanisms) allowed adaptive, informatic reorganization according to the environmental effectiveness of various morphological outputs. Life’s great invention, as they say, was death.

This original EF process was slow, and adaptive reorganization was transgenerational. At a certain point, however, morphological outputs became sophisticated enough to enable a secondary, intragenerational EF process, what might be called behavioural effect feedback. At this level, the central nervous system, rather than DNA, was the site of adaptive reorganization, producing behavioural outputs that are selected or extinguished according to their effectiveness in situ.

For whatever reason, I decided to plug the notion of the posthuman into this framework the other day. The idea was that the evolution from Morphological EF to Behavioural EF follows a predictable course, one that, given the proper analysis, could possibly tell us what to expect from the posthuman. The question I had in my head when I began this was whether we were groping our way to some entirely new EF platform, something that could effect adaptive, informatic reorganization beyond morphology and behaviour.

First, consider some of the key differences between the processes:

Morphological EF is transgenerational, whereas Behavioural EF is circumstantial – as I mentioned above. Adaptive informatic reorganization is therefore periodic and inflexible in the former case, and relatively continuous and flexible in the latter. In other words, morphology is circumstantially static, while behaviour is circumstantially plastic.

Morphological EF operates as a fundamental physiological generative (in the case of the brain) and performative (in the case of the body) constraint on Behavioural EF. Our brains limit the behaviours we can conceive, and our bodies limit the behaviours we can perform.

Morphologies and their generators (genetic codes) are functionally inseparable, while behaviours and their generators (brains) are functionally separable. Behaviours are disposable.

Defined in these terms, the posthuman is simply the point where neural adaptive reorganization generates behaviours (in this case, tool-making) such that morphological EF ceases to be a periodic and inflexible physiological generative and performative constraint on behavioural EF. Put differently, the posthuman is the point where morphology becomes circumstantially plastic. You could say tools, which allow us to circumvent morphological constraints on behaviour, have already accomplished this. Spades make for deeper ditches. Writing makes for bottomless memories. But tool-use is clearly a transitional step, a way to accessorize a morphology that itself remains circumstantially static. The posthuman is the point where we put our body on the lathe (with the rest of our tools).

In a strange, teleonomic sense, you could say that the process is one of effect feedback bootstrapping, where behaviour revolutionizes morphology, which revolutionizes behaviour, which revolutionizes morphology, and so on. We are not so much witnessing the collapse of morphology into behaviour as the acceleration of the circuit between the two approaching some kind of asymptotic limit that we cannot imagine. What happens when the mouth of behaviour, after digesting the tail and spine of morphology, finally consumes the head?
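The bootstrapping loop just described can be caricatured in a toy simulation. To be clear, everything in this sketch is invented for illustration (the numbers, the fitness target, the 0.5 feedback coefficient); it is a cartoon of the EF distinction, not an empirical model. Morphology mutates only between generations and caps what behaviour can do; behaviour is tuned continuously within a lifetime; the ‘posthuman’ case is simply the switch that lets behaviour feed back on morphology mid-lifetime:

```python
import random

def behavioural_ef(morphology, environment, steps=100, rng=None):
    """Behavioural EF: intragenerational adaptation. Behaviour is adjusted
    step by step within one 'lifetime', kept only when it works better in
    situ, and always clamped by morphology (the generative/performative
    constraint)."""
    rng = rng or random.Random(0)
    behaviour = 0.0
    for _ in range(steps):
        candidate = behaviour + rng.uniform(-0.1, 0.1)
        # morphology limits which behaviours can be performed
        candidate = max(-morphology, min(morphology, candidate))
        # select the variant only if it tracks the environment better
        if abs(environment - candidate) < abs(environment - behaviour):
            behaviour = candidate
    return behaviour

def morphological_ef(environment, generations=30, plastic=False, rng=None):
    """Morphological EF: transgenerational adaptation. Morphology changes
    only between generations -- unless plastic=True, the 'posthuman' case
    where behaviour (tool-making) re-engineers morphology mid-lifetime."""
    rng = rng or random.Random(1)
    morphology = 0.1
    for _ in range(generations):
        behaviour = behavioural_ef(morphology, environment, rng=rng)
        if plastic:
            # effect-feedback bootstrapping: behaviour revolutionizes
            # morphology within a single generation
            morphology += 0.5 * abs(behaviour)
        else:
            # periodic and inflexible: a small mutation per generation
            morphology += rng.uniform(0.0, 0.02)
    return morphology

# the plastic ('posthuman') lineage escapes its morphological constraint
# far faster than the purely transgenerational one
print(morphological_ef(1.0, plastic=True) > morphological_ef(1.0, plastic=False))
```

The point of the toy is only the asymmetry it exhibits: the purely transgenerational lineage crawls toward the environmental target at the mutation rate, while the plastic lineage compounds, each generation's behaviour widening the morphological envelope available to the next round of behaviour.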

What’s at stake, in other words, is nothing other than the fundamental EF structure of life itself. It makes my head spin, trying to fathom what might arise in its place.

Some more crazy thoughts falling out of this:

1) The posthuman is clearly an evolutionary event. We just need to switch to the register of information to see this. We’re accustomed to being told that dramatic evolutionary changes outrun our human frame of reference, which is just another way of saying that we generally think of evolution as something that doesn’t touch us. This was why, I think, I’ve been thinking the posthuman by analogy to the Enlightenment, which is to say, as primarily a cultural event distinguished by a certain breakdown in material constraints. No longer. Now I see it as an evolutionary event literally on par with the development of Morphological and Behavioural EF. As perhaps I should have all along, given that posthuman enthusiasts like Kurzweil go on and on about the death of death, which is to say, the obsolescence of a fundamental evolutionary invention.

2) The posthuman is not a human event. We may be the thin edge of the wedge, but every great transformation in evolution drags the whole biosphere in tow. The posthuman is arguably more profound than the development of multicellular life.

3) The posthuman, therefore, need not directly involve us. AI could be the primary vehicle.

4) Calling our descendents ‘transhuman’ makes even less sense than calling birds ‘transdinosaurs.’

5) It reveals posthuman optimism for the wishful thinking it is. If this transformation doesn’t warrant existential alarm, what on earth does?