Three Pound Brain

No bells, just whistling in the dark…

Writing After the Death of Meaning

by rsbakker

[Presented June 2nd, 2015, for the Posthuman Aesthetics Research Group at Aarhus University]

Abstract: For centuries now, science has been making the invisible visible, thus revolutionizing our understanding of and power over different traditional domains of knowledge. Nearly all the speculative phantoms have been exorcised from the world, ‘disenchanted,’ and now, at long last, the insatiable institution has begun making the human visible for what it is. Are we the last ancient delusion? Is the great, wheezing heap of humanism more an artifact of ignorance than insight? We have ample reason to think so, and as the cognitive sciences creep ever deeper into our biological convolutions, the ‘worst case scenario’ only looms darker on the horizon. To be a writer in this age is to stand astride this paradox, to trade in communicative modes at once anchored to our deepest notions of authenticity and in the process of being dismantled or, worse, simulated. If writing is a process of making visible, of communicating some recognizable humanity, how does it proceed in an age where everything is illuminated and inhuman? All revolutions require experimentation, but all too often experimentation devolves into closed circuits of socially inert production and consumption. The present revolution, I will argue, requires cultural tools we do not yet possess (or know how to use), and a sensibility that existing cultural elites can only regard as anathema. Writing in the 21st century requires abandoning our speculative past, and seeing ‘literature’ as praxis in a time of unprecedented crisis, as ‘cultural triage.’ Most importantly, writing after the death of meaning means communicating to what we in fact are, and not to the innumerable conceits of obsolescent tradition.

So, we all recognize the revolutionary potential of technology and the science that makes it possible. This is just to say that we all expect science will radically remake those traditional domains that fall within its bailiwick. Likewise, we all appreciate that the human is just such a domain. We all realize that some kind of revolution is brewing…

The only real question is one of how radically the human will be remade. Here, everyone differs, and in quite predictable ways. No matter what position people take, however, they are saying something about the cognitive status of traditional humanistic thought. Science makes myth of traditional ontological claims, relegates them to the history of ideas. So all things being equal we should suppose that science will make myth of traditional ontological claims regarding the human as well. Declaring that traditional ontological claims regarding the human will not suffer the fate of other traditional ontological claims more generally, amounts to declaring that all things are not equal when it comes to the human, that in this one domain at least, traditional modes of cognition actually tell us what is the case.

Let’s call this pole of argumentation humanistic exceptionalism. Any position that contends or assumes that science will not fundamentally revolutionize our understanding of the human supposes that something sets the human apart. Not surprisingly, given the underdetermined nature of the subject-matter, the institutionally entrenched nature of the humanities, and the human propensity to rationalize conceit and self-interest, the vast majority of theorists find themselves occupying this pole. There are, we now know, many, many ways to argue exceptionalism, and no way whatsoever to decisively arbitrate between any of them.

What all of them have in common, I think it’s fair to say, is the signature theoretical function they accord to meaning. Another feature they share is a common reliance on pejoratives to police the boundaries of their discourse. Any time you encounter the terms ‘scientism’ or ‘positivism’ or ‘reductionism’ deployed without any corresponding consideration of the case against traditional humanism, you are almost certainly reading an exceptionalist discourse. One of the great limitations of committing to status-quo underdetermined discourses, of course, is the infrequency with which adherents encounter the limits of their discourse, and thus run afoul of the same fluency and only-game-in-town effects that render all dogmatic pieties self-perpetuating.

My artistic and philosophical project can be fairly summarized, I think, as a sustained critique of humanistic exceptionalism, an attempt to reveal these positions as the latest (and therefore most difficult to recognize) attempts to intellectually rationalize what are ultimately run-of-the-mill conceits, specious ways to set humanity—or select portions of it at least—apart from nature.

I occupy the lonely pole of argumentation, the one that says humans are not ontologically special in any way, and that accordingly, we should expect the scientific revolution of the human to be as profound as the scientific revolution of any other domain. My whole career is premised on arguing the worst case scenario, the future where humanity finds itself every bit as disenchanted—every bit as debunked—as the cosmos.

I understand why my pole of the debate is so lonely. One of the virtues of my position, I think anyway, lies in its ability to explain its own counter-intuitiveness.

Think about it. What does it mean to say meaning is dead? Surely this is metaphorical hyperbole, or worse yet, irresponsible alarmism. What could my own claims mean otherwise?

‘Meaning,’ on my account, will die two deaths, one theoretical or philosophical, the other practical or functional. Where the first death amounts to a profound cultural upheaval on a par with, say, Darwin’s theory of evolution, the second death amounts to a profound biological upheaval, a transformation of cognitive habitat more profound than any humanity has ever experienced.

‘Theoretical meaning’ simply refers to the endless theories of intentionality humanity has heaped on the question of the human. Pretty much the sum of traditional philosophical thought on the nature of humanity. And this form of meaning I think is pretty clearly dead. People forget that every single cognitive scientific discovery amounts to a feature of human nature that human nature is prone to neglect. We are, as a matter of empirical fact, fundamentally blind to what we are and what we do. Like traditional theoretical claims belonging to other domains, all traditional theoretical claims regarding the human neglect the information driving scientific interpretations. The question is one of what this naturally neglected information—or ‘NNI’—means.

The issue NNI poses for the traditional humanities is existential. If one grants that the sum of cognitive scientific discovery is relevant to all senses of the human, you could safely say the traditional humanities are already dwelling in a twilight of denial. The traditionalist’s strategy, of course, is to subdivide the domain, to adduce arguments and examples that seem to circumscribe the relevance of NNI. The problem with this strategy, however, is that it completely misconstrues the challenge that NNI poses. The traditional humanities, as cognitive disciplines, fall under the purview of cognitive sciences. One can concede that various aspects of humanity need not account for NNI, yet still insist that all our theoretical cognition of those aspects does…

And quite obviously so.

The question, ‘To what degree should we trust ‘reflection upon experience’?’ is a scientific question. Just for example, what kind of metacognitive capacities would be required to abstract ‘conditions of possibility’ from experience? Likewise, what kind of metacognitive capacities would be required to generate veridical descriptions of phenomenal experience? Answers to these kinds of questions bear powerfully on the viability of traditional semantic modes of theorizing the human. On the worst case scenario, the answers to these and other related questions are going to systematically discredit all forms of ‘philosophical reflection’ that fail to take account of NNI.

NNI, in other words, means that philosophical meaning is dead.

‘Practical meaning’ refers to the everyday functionality of our intentional idioms, the ways we use terms like ‘means’ to solve a wide variety of practical, communicative problems. This form of meaning lives on, and will continue to do so, only with ever-diminishing degrees of efficacy. Our everyday intentional idioms function effortlessly and reliably in a wide variety of socio-communicative contexts despite systematically neglecting everything cognitive science has revealed. They provide solutions despite the scarcity of data.

They are heuristic, part of a cognitive system that relies on certain environmental invariants to solve what would otherwise be intractable problems. They possess adaptive ecologies. We quite simply could not cope if we were to rely on NNI, say, to navigate social environments. Luckily, we don’t have to, at least when it comes to a wide variety of social problems. So long as human brains possess the same structure and capacities, the brain can quite literally ignore the brain when solving problems involving other brains. It can leap to conclusions absent any natural information regarding what actually happens to be going on.

But, to riff on Uncle Ben, with great problem-solving economy comes great problem-making potential. Heuristics are ecological; they require that different environmental features remain invariant. Some insects, most famously moths, use ‘transverse orientation,’ flying at a fixed angle to the moon to navigate. Porch lights famously miscue this heuristic mechanism, causing the insect to chase the angle into the light. The transformation of environments, in other words, has cognitive consequences, depending on the kind of shortcut at issue. Heuristic efficiency means dynamic vulnerability.
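The moth's fixed-angle rule, and the way a nearby light breaks it, can be sketched as a toy simulation (a hypothetical illustration, not anything from the original post; the 60° angle and unit step size are arbitrary choices). With a light source effectively at infinity, holding a constant bearing angle produces a straight flight path; apply the identical rule to a porch light a few metres away and the moth spirals inward.

```python
import math

def fly(moth, light, angle_deg, steps=500, step=1.0):
    """Simulate transverse orientation: at every step the moth re-aims
    so that its heading holds a fixed angle to the light source."""
    x, y = moth
    lx, ly = light
    path = [(x, y)]
    for _ in range(steps):
        bearing = math.atan2(ly - y, lx - x)         # direction to the light
        heading = bearing + math.radians(angle_deg)  # hold the fixed angle
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        path.append((x, y))
    return path

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Distant 'moon': bearing barely changes, so the path is nearly straight.
moon_path = fly((0.0, 0.0), (0.0, 1e9), 60.0)

# Nearby 'porch light': the same rule pulls the moth into a tightening spiral.
porch_path = fly((0.0, 0.0), (0.0, 100.0), 60.0)
```

The same heuristic, unchanged, succeeds or fails depending entirely on whether the environmental invariant it presumes (a light source at optical infinity) actually holds.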

And this means not only that heuristics can be short-circuited, they can also be hacked. Think of the once omnipresent ‘bug zapper.’ Or consider reed warblers, which provide one of the most dramatic examples of heuristic vulnerability nature has to offer. The system they use to recognize eggs and offspring is so low resolution (and therefore economical) that cuckoos regularly parasitize their nests, leaving what are, to human eyes, obviously oversized eggs and (brood-killing) chicks that the warbler dutifully nurses to adulthood.

All cognitive systems, insofar as they are bounded, possess what might be called a Crash Space describing all the possible ways they are prone to break down (as in the case of porch lights and moths), as well as an overlapping Cheat Space describing all the possible ways they can be exploited by competitors (as in the case of reed warblers and cuckoos, or moths and bug-zappers).

The death of practical meaning simply refers to the growing incapacity of intentional idioms to reliably solve various social problems in radically transformed sociocognitive habitats. Even as we speak, our environments are becoming more ‘intelligent,’ more prone to cue intentional intuitions in circumstances that quite obviously do not warrant them. We will, very shortly, be surrounded by countless ‘pseudo-agents,’ systems devoted to hacking our behaviour—exploiting the Cheat Space corresponding to our heuristic limits—via NNI. Combined with intelligent technologies, NNI has transformed consumer hacking into a vast research programme. Our social environments are transforming, our native communicative habitat is being destroyed, stranding us with tools that will increasingly let us down.

Where NNI itself delegitimizes traditional theoretical accounts of meaning (by revealing the limits of reflection), it renders practical problem-solving via intentional idioms (practical meaning) progressively more ineffective by enabling the industrial exploitation of Cheat Space. Meaning is dead, both as a second-order research programme and, more alarmingly, as a first-order practical problem-solver. This—this is the world that the writer, the producer of meaning, now finds themselves writing in as well as writing to. What does it mean to produce ‘content’ in such a world? What does it mean to write after the death of meaning?

This is about as open as a question can be. It reveals just how radical this particular juncture in human thought is about to become. Everything is new here, folks. The slate is wiped clean.

[I used the following possibilities to organize the subsequent discussion]

Post-Posterity Writing

The Artist can no longer rely on posterity to redeem ingroup excesses. He or she must either reach out, or risk irrelevance and preposterous hypocrisy. Post-semantic writing is post-posterity writing, the production of narratives for the present rather than some indeterminate tomorrow.

High Dimensional Writing

The Artist can no longer pretend to be immaterial. Nor can they pretend to be something material magically interfacing with something immaterial. They need to see the apparent lack of dimensionality pertaining to all things ‘semantic’ as the product of cognitive incapacity, not ontological exceptionality. They need to understand that thoughts are made of meat. Cognition and communication are biological processes, open to empirical investigation and high dimensional explanations.

Cheat Space Writing

The Artist must exploit Cheat Spaces as much as reveal Cheat Spaces. NNI is not simply an industrial and commercial resource; it is also an aesthetic one.

Cultural Triage

The Artist must recognize that it is already too late, that the processes involved cannot be stopped, let alone reversed. Extremism is the enemy here, the attempt to institute, either via coercive simplification (a la radical Islam, for instance) or via technical reduction (a la totalized surveillance, for instance), Orwellian forms of cognitive hygiene.

More Disney than Disney World: Semiotics as Theoretical Make-believe (II)

by rsbakker

III: The Gilded Stage

We are one species among 8.7 million, organisms embedded in environments that will select us the way they have our ancestors for 3.8 billion years running. Though we are (as a matter of empirical fact) continuous with our environments, the information driving our environmental behaviour is highly selective. The selectivity of our environmental sensitivities means that we are encapsulated, both in terms of the information available to our brain, and in terms of the information available for consciousness. Encapsulation simply follows from the finite, bounded nature of cognition. Human cognition is the product of ancestral human environments, a collection of good enough fixes for whatever problems those environments regularly posed. Given the biological cost of cognition, we should expect that our brains have evolved to derive as much information as possible from whatever signals are available, to continually jump to reproductively advantageous conclusions. We should expect to be insensitive to the vast majority of information in our environments, to neglect everything save information that had managed to get our ancestors born.

As it turns out, shrewd guesswork carried the cognitive day. The correlate of encapsulated information access, in other words, is heuristic cognitive processing, a tendency to always see more than there really is.

So consider the streetscape from above once again:

[Image: forced-perspective streetscape, viewed head-on]

This looks like a streetscape only because the information provided generally cues the existence of hidden dimensions, which in this case simply do not exist. Since the cuing is always automatic and implicit, you just are looking down a street. Change your angle of access and the illusion of hidden dimensions—which is to say, reality—abruptly evaporates. The impossible New York skyline is revealed as counterfeit.

[Image: the same streetscape viewed from an oblique angle]

Let’s call a stage any environment that reliably cues the cognition of alternate environments. On this definition, a stage could be the apparatus of a trapdoor spider, say, or a nest parasitized by a cuckoo, or a painting, or an epic poem, or yes, Disney World—any environment that reliably triggers the cognition of some environment other than the environment actually confronting some organism.

As the inclusion of the spider and the cuckoo should suggest, a stage is a biological phenomenon, the result of some organism cognizing one environment as another environment. Stages, in other words, are not semantic. It is simply the case that beetles sensing environments absent spiders will blunder into trapdoor spiders. It’s simply the case that some birds, sensing chicks, will feed those chicks, even if one of them happens to be a cuckoo. It is simply the case that various organisms exploit the cognitive insensitivities of various other organisms. One need not ascribe anything so arcane as ‘false beliefs’ to birds and beetles to make sense of their exploitation. All they need do is function in a way typically cued by one family of (often happy) environments in a different (often disastrous) environment.

Stages are rife throughout the natural world simply because biological cognition is so expensive. All cognition can be exploited because all cognition is bounded, dependent on taking innumerable factors for granted. Probabilistic guesses have to be made always and everywhere, such are the exigencies of survival and reproduction. Competing species need only happen upon ways to trigger those guesses in environments reproductively advantageous to them, and selection will pace out a new niche, a position in what might be called manipulation space.

The difficulty with qualifying a stage as a biological phenomenon, however, is that I included intentional artifacts such as narratives, paintings, and amusement parks as examples of stages above. The problem with this is that no one knows how to reconcile the biological with the intentional, how to fit meaning into the machinery of life.

And yet, as easy as it is to anthropomorphize the cuckoo’s ‘treachery’ or the trapdoor spider’s ‘cunning’—to infuse our biological examples with meaning—it seems equally easy to ‘zombify’ narrative or painting or Disney World. Hearing the Iliad, for instance, is a prodigious example of staging, insofar as it involves the serial cognition of alternate environments via auditory cues embedded in an actual, but largely neglected, environment. One can easily look at the famed cave paintings of Chauvet, say, as a manipulation of visual cues that automatically triggers the cognition of absent things, in this case, horses:

[Image: the Horses panel, Chauvet Cave]

But if narrative and painting are stages so far as ‘cognizing alternate environments’ goes, the differences between things like the Iliad or Chauvet and things like trapdoor spiders and cuckoos are nothing less than astonishing. For one, the narrative and pictorial cuing of alternative environments is only partial; the ‘alternate environment’ is entertained as opposed to experienced. For another, the staging involved in the former is communicative, whereas the staging involved in the latter is not. Narratives and paintings mean things, they possess ‘symbolic significance,’ or ‘representational content,’ whereas the predatory and parasitic stages you find in the natural world do not. And since meaning resists biological explanation, this strongly suggests that communicative staging resists biological explanation.

But let’s press on, daring theorists that we are, and see how far our ‘zombie stage’ can take us. The fact is, the ‘manipulation space’ intrinsic to bounded cognition affords opportunities as well as threats. In the case of Chauvet, for instance, you can almost feel the wonder of those first artists discovering the relations between technique and visual effect, ways to trick the eye into seeing what was not there there. Various patterns of visual information cue cognitive machinery adapted to solve environments absent those environments. Flat surfaces become windows.

Let’s divvy things up differently, look at cognition and metacognition in terms of multiple channels of information availability versus cognitive capacity. On this account, staging need not be complete: as with Chauvet, the cognition of alternate environments can be partial, localized within the present environment. And as with Chauvet, this embedded staging can be instrumentalized, exploited for various kinds of effects. Just how the cave paintings at Chauvet were used will always be a matter of archaeological speculation, but this in itself tells us something important about the kind of stage we’re now talking about: namely, its specificity. We share the same basic cognitive mechanisms as the original creators and consumers of the Horses, for instance, but we share nothing of their individual histories. This means the stage we step onto encountering them is bound to differ, perhaps radically, from the stage they stepped onto encountering them in the Upper Paleolithic. Since no individuals share precisely the same history, this means that all embedded stages are unique in some respect.

The potential evolutionary value of embedded stages, the kind of ‘cognitive double-vision’ peculiar to humans, seems relatively clear. If you can draw a horse you can show a fellow hunter what to look for, what direction to approach it, where to strike with a spear, how to carve the joints for efficient transportation, and so on. Embedding, in other words, allows organisms to communicate cognitive relationships to actual environments by cuing the cognition of that environment absent that environment. Embedding also allows organisms to communicate cognitive relationships to nonexistent environments as well. If you can draw a cave bear, you can just as easily deceive as teach a potential competitor. And lastly, embedding allows organisms to game their own cognitive systems. By experimenting with patterns of visual information, they can trigger a wide variety of different responses, triggering wonder, lust, fear, amusement, and so on. The cave paintings at Chauvet include what is perhaps the oldest example of pictorial ‘porn’ (in this case, a vulva formed by a bull overlapping a lion) for a reason.

[Image: Chauvet Cave painting of overlapping bull and lion figures]

Humans, you could say, are the staging animal, the animal capable of reorganizing and coordinating their cognitive comportments via the manipulation of available information into cues, those patterns prone to trigger various heuristic systems ‘out of school.’ Research into episodic memory reveals an intimate relation between the constructive (as opposed to veridical) nature of episodic memory and the ability to imagine future environments. Apparently the brain does not so much record events as it ransacks them, extracting information strategic to solving future environments. Nothing demonstrates the profound degree to which the brain is invested in strategic staging so clearly as the default, or task-negative, network. Whenever we find ourselves disengaged from some ongoing task, our brains, far from slowing down, switch modes and begin processing alternate, typically social, environments. We ‘daydream,’ or ‘ruminate,’ or ‘fantasize,’ activities almost as metabolically expensive as performing focussed tasks. The resting brain is a staging brain—a story-telling brain. It has literally evolved to cue and manipulate its own cognitive systems, to ‘entertain’ alternate environments, laying down priors in the absence of genuine experience to better manage surprise.

Language looms large over all this, of course, as the staging device par excellence. Language allows us to ‘paint a picture,’ or cue various cognitive systems, at any time. Via language, multiple humans can coordinate their behaviours to provide a single solution; they can engage their environments at ever more strategic joints, intervene in ways that reliably generate advantageous outcomes. Via language, environmental comportments can be compared, tested as embedded stages, which is to say, on the biological cheap. And the list goes on. The upshot is that language, like cave paintings, puts human cognition at the disposal of human cognition.

And—here’s the thing—while remaining utterly blind to the structure and dynamics of human cognition.

The reason for this is simple: the biological complexity required to cognize environments is simply too great to be cognized as environmental. We see the ash and pigment smeared across the stone, we experience (the illusion of) horses, and we have no access whatsoever to the machinery in between. Or to phrase it in zombie terms, humans access environmental information, ash and pigment, which cues cognitive comportments to different environmental information, horses, in the absence of any cognitive comportment to this process. In fact, all we see are horses, effortlessly and automatically; it actually requires effort to see the ash and pigment! The activated environment crowds the actual environment from the focus to the fringe. The machinery that makes all this possible doesn’t so much as dimple the margin. We neglect it. And accordingly, what inklings we have strike us as all there is.

The question of signification is as old as philosophy: how the hell do nonexistent horses leap from patterns of light or sound? Until recently, all attempts to answer this question relied on observations regarding environmental cues, the resulting experience, and the environment cued. The sign, the soul, and the signified anchored our every speculative analysis simply because, short of baffling instances of neuropathology, the machinery responsible never showed its hand.

Our cognitive comportment to signification, in other words, looked like:

[Image: forced-perspective streetscape, viewed head-on]

Which is to say, a stage.

Because we’re quite literally ‘hardwired’ into this position, we have no way of intuiting the radically impoverished (because specialized) nature of the information made available. We cannot trudge on the perpendicular to see what the stage looks like from different angles—we cannot alter our existing cognitive comportments. Thus, what might be called the semiotic stage strikes us as the environment, or anything but a stage. So profound is the illusion that the typical indicators of informatic insufficiency, the inability to leverage systematically effective behaviour, the inability to command consensus, are habitually overlooked by everyone save the ‘folk’ (ironically enough). Sign, soul, and signified could only take us so far. Despite millennia of philosophical and psychological speculation, despite all the myriad regimentations of syntax and semantics, language remains a mystery. Controversy reigns—which is to say, we as yet lack any decisive scientific account of language.

But then science has only begun the long trudge on the perpendicular. The project of accessing and interpreting the vast amounts of information neglected by the semiotic stage is just getting underway.

Since all the various competing semiotic theories are based on functions posited absent any substantial reference to the information neglected, the temptation is to assume that those functions operate autonomously, somehow ‘supervene’ upon the higher dimensional story coming out of cognitive neuroscience. This has a number of happy dialectical consequences beyond simply proofing domains against cognitive scientific encroachments. Theoretical constraints can even be mapped backward, with the assumption that neuroscience will vindicate semiotic functions, or that semiotic functions actually help clarify neuroscience. Far from accepting any cognitive scientific constraints, semioticians can assert that at least one of their multiple stabs in the dark pierces the mystery of language in the heart, and is thus implicitly presupposed in all communicative acts. Heady stuff.

Semiotics, in other words, would have you believe that either this

[Image: forced-perspective streetscape, viewed head-on]

is New York City as we know it, and will be vindicated by the long cognitive neuroscientific trudge on the perpendicular, or that it’s a special kind of New York City, one possessing no perpendicular to trudge—not unlike, surprise-surprise, assumptions regarding the first-person or intentionality in general.

On this account, the functions posited are sometimes predictive, sometimes not, and even when they are predictive (as opposed to merely philosophical), they are clearly heuristic, low-dimensional ways of tracking extremely complicated systems. As such, there’s no reason to think them inexplicably—magically—‘autonomous,’ and good reason to suppose why it might seem that way. Sign, soul, and signified, the blinkered channels that have traditionally informed our understanding of language, appear inviolable precisely because they are blinkered—since we cognize via those channels, the limits of those channels cannot be cognized: the invisibility of the perpendicular becomes its impossibility.

These are precisely the kinds of errors we should expect speaking animals to make in the infancy of their linguistic self-understanding. You might even say that humans were doomed to run afoul of ‘theoretical hyperrealities’ like semiotics, discursive Disney Worlds…

Except that in Disney World, of course, the stages are advertised as stages, not inescapable or fundamental environments. Aside from policy level stuff, I have no idea how Disney World or Disney corporation systematically contributes to the subversion of social justice, and neither, I would submit, does any semiotician living. But I do think I know how to fit Disney into a far larger, and far more disturbing set of trends that have seized society more generally. To see this, we have to leave semiotics behind…

More Disney than Disney World: Semiotics as Theoretical Make-believe

by rsbakker

[Image: Disney World streetscape]


Ask a humanities scholar their opinion of Disney and they will almost certainly give you some version of Louis Marin’s famous “degenerate utopia.”

And perhaps they should. Far from a harmless amusement park, Disney World is a vast commercial enterprise, one possessing, as all corporations must, a predatory market agenda. Disney also happens to be in the meaning business, selling numerous forms of access to their proprietary content, to their worlds. Disney (much like myself) is in the alternate reality game. Given their commercial imperatives, their alternate realities primarily appeal to children, who, branded at so young an age, continue to fetishize their products well into adulthood. This generational turnover, combined with the acquisition of more and more properties, assures Disney’s growing cultural dominance. And their messaging is obviously, even painfully, ideological, both escapist and socially conservative, designed to systematically neglect all forms of impersonal conflict.

I think we can all agree on this much. But the humanities scholar typically has something more in mind, a proclivity to interpret Disney and its constituents in semiotic terms, as a ‘veil of signs,’ a consciousness constructing apparatus designed to conceal and legitimize existing power inequities. For them, Disney is not simply apologetic as opposed to critical, it also plays the more sinister role of engendering and reinforcing hyperreality, the seamless integration of simulation and reality into disempowering perspectives on the world.

So as Baudrillard claims in Simulacra and Simulations:

The Disneyland imaginary is neither true nor false: it is a deterrence machine set up in order to rejuvenate in reverse the fiction of the real. Whence the debility, the infantile degeneration of this imaginary. It is meant to be an infantile world, in order to make us believe that the adults are elsewhere, in the ‘real’ world, and to conceal the fact that the real childishness is everywhere, particularly among those adults who go there to act the child in order to foster illusions of their real childishness.

Baudrillard sees the lesson as an associative one, a matter of training. The more we lard reality with our representations, Baudrillard believes, the greater the violence done. So for him the great sin of Disneyland lay not so much in reinforcing ideological derangements via simulation, but in completing the illusion of an ideologically deranged world. It is the lie within the lie, he would have us believe, that makes the second lie so difficult to see through. The sin here is innocence, the kind of belief that falls out of cognitive incapacity. Why do kids believe in magic? Arguably, because they don’t know any better. By providing adults a venue for their children to believe, Disney has also provided them evidence of their own adulthood. Seeing through Disney’s simulations generates the sense of seeing through all illusions, and therefore, seeing the real.

Disney, in other words, facilitates ‘hyperreality’—a semiotic form of cognitive closure—by rendering consumers blind to their blindness. Disney, on the semiotic account, is an ideological neglect machine. Its primary social function is to provide cognitive anaesthesia to the masses, to keep them as docile and distracted as possible. Let’s call this the ‘Disney function,’ or Df. For humanities scholars, as a rule, Df amounts to the production of hyperreality, the politically pernicious conflation of simulation and reality.

In what follows, I hope to demonstrate what might seem a preposterous figure/field inversion. What I want to argue is that the semiotician has Df all wrong—Disney is actually a far more complicated beast—and that the production of hyperreality, if anything, belongs to his or her own interpretative practice. My claim, in other words, is that the ‘politically pernicious conflation of simulation and reality’ far better describes the social function of semiotics than it does Disney.

Semiotics, I want to suggest, has managed to gull intellectuals into actively alienating the very culture they would reform, leading to the degeneration of social criticism into various forms of moral entertainment, a way for jargon-defined ingroups to transform interpretative expertise into demonstrations of manifest moral superiority. Piety, in effect. Semiotics, the study of signs in life, allows the humanities scholar to sit in judgment not just of books, but of text,* which is to say, the entire world of meaning. It constitutes what might be called an ideological Disney World, only one that, unlike the real Disney World, cannot be distinguished from the real.

I know from experience the kind of incredulity these kinds of claims provoke from the semiotically minded. The illusion, as I know first-hand, is that complete. So let me invoke, for the benefit of those smirking down at these words, the same critical thinking mantra you train into your students, and remind you that all institutions are self-regarding, that all institutions cultivate congratulatory myths, and that the notion of some institution set apart, some specialized cabal possessing practices inoculated against the universal human assumption of moral superiority, is implausible through and through. Or at least worthy of suspicion.

You are almost certainly deluded in some respect. What follows merely illustrates how. Nothing magical protects you from running afoul of your cognitive shortcomings the same as the rest of humanity. As such, it really could be the case that you are the more egregious sorcerer, and that your world-view is the real ‘magic kingdom.’ If this idea truly is as preposterous as it feels, then you should have little difficulty understanding it on its own terms, and dismantling it accordingly.



Sign and signified, simulation and simulated, appearance and reality: these dichotomies provide the implicit conceptual keel for all ideologically motivated semiotic readings of culture. This instantly transforms Disney, a global industrial enterprise devoted to the production of alternate realities, into a paradigmatic case. The Walt Disney Corporation, as fairly every child in the world knows, is in the simulation business. Of course, this alone does not make Disney ‘bad.’ As an expert interpreter of signs and simulations, the semiotician has no problem with deviations from reality in general, only those deviations prone to facilitate particular vested interests. This is the sense in which the semiotic project is continuous with the Enlightenment project more generally. It presumes that knowledge sets us free. Semioticians hold that some appearances—typically those canonized as ‘art’—actually provide knowledge of the real, whereas other appearances serve only to obscure the real, and so disempower those who run afoul of them.

The sin of the Walt Disney Corporation, then, isn’t that it sells simulations, it’s that it sells disempowering simulations. The problem that Disney poses the semiotician, however, is that it sells simulations as simulations, not simulations as reality. The problem, in other words, is that Disney complicates their foundational dichotomy, and in ways that are not immediately clear.

You see microcosms of this complication everywhere you go in Disney World, especially where construction or any other ‘illusion dispelling’ activities are involved. Sights such as this:

[Photo: Southwest Orange, April 15, 2015]

where pre-existing views are laminated across tarps meant to conceal some machination that Disney would rather not have you see, struck me as particularly bizarre. Who is being fooled here? My five-year-old even asked why they would bother painting trees rather than planting them. Who knows, I told her. Maybe they were planting trees. Maybe they were building trees such as this:

[Photo: Southwest Orange, April 19, 2015]

Everywhere you go you stumble across premeditated visual obstructions, or the famous, omnipresent gates labelled ‘CAST MEMBERS ONLY.’ Everywhere you go, in other words, you are confronted with obvious evidence of staging, or what might be called premeditated information environments. As any magician knows, the only way to astound the audience is to meticulously control the information they do and do not have available. So long as absolute control remains technically infeasible, they often fudge, relying on the audience’s desire to be astounded to grease the wheels of their machinations.

One finds Disney’s commitment to the staging credo tacked here and there across the very walls raised to enforce it:

[Photo: Southwest Orange, April 22, 2015]

Walt Disney was committed to the notion of environmental immersion, with the construction of ‘stages’ that were good enough, given various technical and economic limitations, to kindle wonder in children and generosity in their parents. Almost nobody is fooled outright, least of all the children. But most everyone is fooled enough. And this is the only thing that matters, when any showman tallies their receipts at the end of the day: staging sufficiency, not perfection. The visibility of artifice will be forgiven, even revelled in, so long as the trick manages to carry the day…

No one knows this better than the cartoonist.

The ‘Disney imaginary,’ as Baudrillard calls it, is first and foremost a money-making machine. For parents of limited means, the mechanical regularity with which Disney has you reaching for your wallet is proof positive that you are plugged into some kind of vast economic machine. And making money, it turns out, doesn’t require believing, it requires believing enough—which is to say, make-believe. Disney World can revel in its artificiality because artificiality, far from threatening the primary function of the system, actually facilitates it. Children want cartoons; they genuinely prefer low-dimensional distortions of reality over reality. Disney is where cartoons become flesh and blood, where high-dimensional replicas of low-dimensional constructs are staged as the higher-dimensional truth of those constructs. You stand in line to have your picture taken with a phoney Tinkerbell that you say is real to play this extraordinary game of make-believe with your children.

To the extent that make-believe is celebrated, the illusion is celebrated as benign deception. You walk into streets like this:

[Photo: Southwest Orange, April 21, 2015]

that become this:

[Photo: Southwest Orange, April 21, 2015]

as you trudge from the perpendicular. The staged nature of the stage is itself staged within the stage as something staged. This is the structure of the Indiana Jones Stunt Spectacular, for instance, where the audience is actually transformed into a performer on a stage staged as a stage (a movie shoot). At every turn, in fact, families are confronted with this continual underdetermination of the boundaries between ‘real’ and not ‘real.’ We watched a cartoon Crush (the surfer turtle from Finding Nemo) do an audience interaction comedy routine (we nearly pissed ourselves). We had a bug jump out of the screen and spray us with acid (water) beneath that big ass tree above (we laughed and screamed). We were skunked twice. The list goes on and on.

All these ‘attractions’ both celebrate and exploit the narrative instinct to believe, the willingness to overlook all the discrepancies between the fantastic and the real. No one is drugged and plugged into the Disney Matrix against their will; people pay, people who generally make far less than tenured academics, to play make-believe with their children.

So what are we to make of this peculiar articulation of simulations and realities? What does it tell us about Df?

The semiotic pessimist, like Baudrillard, would say that Disney is subverting your ability to reliably distinguish the real from the not real, rendering you a willing consumer of a fictional reality filled with fictional wars. Umberto Eco, on the other hand, suggests the problem is one of conditioning consumer desire. By celebrating the unreality of the real, Disney is telling “us that faked nature corresponds much more to our daydream demands” (Travels in Hyperreality, 44). Disney, on his account, whets the wrong appetite. For both, Disney is both instrumental to and symptomatic of our ideological captivity.

The optimist, on the other hand, would say they’re illuminating the contingency of the real (a.k.a. the ‘power of imagination’), training the young to never quite believe their eyes. On this view, Disney is both instrumental to and symptomatic of our semantic creativity (even as it ruthlessly polices its own intellectual properties). According to the apocryphal quote often attributed to Walt Disney, “If you can dream it, you can do it.”

This is the interpretative antinomy that hounds all semiotic readings of the ‘Disney function.’ The problem, put simply, is that interpretations falling out of the semiotic focus on sign and signified, simulation and simulated, cannot decisively resolve whether self-conscious simulation a la Disney serves, on balance, more to subvert or to conserve prevailing social inequities.

All such high altitude interpretation of social phenomena is bound to be underdetermined, of course, simply because the systems involved are far, far too complicated. Ironically, the theorist has to make do with cartoons, which is to say skewed idealizations of the phenomena involved, and simply hope that something of the offending dynamic shines through. But what I would like to suggest is that semiotic cartoons are particularly problematic in this regard, particularly apt to systematically distort the phenomena they claim to explicate, while—quite unlike Disney’s representations—concealing their cartoonishness.

To understand how and why this is the case, we need to consider the kinds of information the ‘semiotic stage’ is prone to neglect…


Updated Updates…

by rsbakker

My Posthuman Aesthetics Research Group talk has been pushed back to June 2nd. I blame it on administrative dyslexia and bad feet, which is to say… me. So, apologies all, and a heartfelt thanks to Johannes Poulsen and comrades for hitting the reset button.


by rsbakker

Regarding the vanishing American e-books, my agent tells me that Overlook has recently switched distributors, and that the kerfuffle will be sorted out shortly. If you decide to pass this along, please take the opportunity to shame those who illegally download. I’m hanging on by my fingernails, here, and yet the majority of hits I get whenever I do my weekly vanity Google are for links to illegal downloads of my books. I increasingly meet fools who seem to think they’re ‘sticking it to the man’ by illegally downloading, when in fact, what they’re doing is driving commercially borderline artists–that is, those artists dedicated to sticking it to the man–to the food bank.

As for pub dates, still no word from either Overlook (who will also be handling the Canadian edition) or Orbit. Sorry guys.

Also, I’ll be in Denmark to give a seminar entitled, “Writing After the Death of Meaning,” for the Posthuman Aesthetics Research Group (a seriously cool handle!) at Aarhus University on the thirteenth of this month. I realized writing this that I had simply assumed it wasn’t open to the public, but when I reviewed my correspondence, I couldn’t discover any reason for assuming this short of its billing as a ‘seminar.’ I’ve emailed my host asking for clarification, just in case any of you happen to be twiddling your thumbs in Denmark next Wednesday.

Le Cirque de le Fou

by rsbakker


There’s nothing better than a blog to confront you with the urge to police appearances. Given the focus on hypocrisy at Three Pound Brain, I restrict myself to blocking only those comments that seem engineered to provoke fear. But as a commenter on other blogs, I’ve had numerous comments barred on the basis of what was pretty clearly argumentative merit. I remember on Only Requires Hate, I asked Benjanun Sriduangkaew what criteria she used to distinguish spurious charges of misogyny from serious ones, a comment that never saw the light of day. I’ve also seen questions I had answered rewritten in a way that made my answers look ridiculous. I’ve even had the experience of entire debates suddenly vanishing into the aether!

Clowns don’t like having their make-up pointed out to them–at least not by a clown as big as me! This seems to be particularly the case among those invested in the academic humanities. At least, these are the forums least inclined to let my questions past moderation.

This, combined with the problems arising from the vicissitudes of the web, convinced me long ago to use Word documents to create a record I could go back to if I needed to.

So, for your benefit and mine, here’s a transcript of how the comment thread to Shaun Duke’s response to “Hugos Weaving” (which proved to be a record-breaking post) should read:


BAKKER: So you agree that genre both reaches out and connects. But you trust that ‘literature’ does as well, even though you have no evidence of this. Like Beale, you have a pretty optimistic impression of yourself and your impact and the institutions you identify with. You find the bureaucracies problematic (like Beale), but you have no doubt the value system is sound (again like Beale). You accost your audiences with a wide variety of interpretative tactics (like Beale), and even though they all serve your personal political agenda (again, like Beale), you think that diversity counts for something (again, like Beale). You think your own pedagogic activity in no way contributes to your society’s social ills (like Beale), that you are doing your bit to make the world a better place (again, like Beale).

So what is the difference between you and Beale? Pragmatically, at least, you both look quite similar. What makes the ‘critical thinking’ you teach truly critical, as opposed to his faux critical thinking? Where and how does your institution criticize and revise its own values? Does it take care to hire genuine critics such as myself, or does it write them off (the way all institutions do) as outgroup bozos, as one of ‘them’?

More importantly, what science do you and your colleagues use to back up your account of ‘critical thinking’? Or are you all just winging it?

Your department doesn’t sound much different from mine, 20 years back, except that genre is perhaps accorded a more prominent role (you have to get those butts in seats, now, for funding). The only difference I can see is that you genuinely believe in it, take genuine pride in belonging to such a distinguished and enlightened order… the way any ingroup soldier should. But if you and your institution are so successful, how do you explain the phenomenon of conservative creep? Even conservative commentators are astounded at how the Great Recession actually seems to have served right wing interests.


DUKE: This is the point where we part company. I am happy to have a discussion with you about my perspectives of academia, even if you disagree. I’m even happy to defend what I do and its value. But I will not participate in a discussion with someone who makes a disingenuous (and fallacious) comparison between myself and someone like Beale. The comparison, however rhetorical, is offensive and, frankly, unnecessarily rude.

Have a good day.



BAKKER: Perfect! This is what the science shows us: ‘critical’ almost always means ‘critical of the other.’ Researchers have found this dynamic in babies, believe it or not. We can call ourselves ‘critical thinkers,’ but really this is just cover for using the exact same socio-cognitive toolbox as those we impugn. Group identification, as you’ve shown us once again, is primary among those tools. By pointing out the parallels between you and Beale, I identified you with him, and this triggers some very basic intuitions, those tasked with policing group boundaries and individual identities. You feel ‘disgusted,’ or ‘indignant.’

Again, like Beale.

Don’t you see Shaun? The point isn’t to bait or troll you. The point is to show you the universality of the moral cognitive mechanisms at work in all such confrontations between groups of humans. Beale isn’t some odious, alien invader, he is our most tragic, lamentable SELF. Bigotry is a bullet we can only dodge by BITING. Of course you’re a bigot, as am I. Of course you write off others, other views, without understanding them in the least. Of course you essentialize, naturalize. Of course you spend your days passing judgement for the entertainment of others and yourself. Of course you are anything but a ‘critical thinker.’

You’re human. Nothing magical distinguishes you from Beale.


Shaun does not want to be an ingroup clown. No one reading this wants to be an ingroup clown. It is troubling, to say the least, that the role deliberative cognition plays in moral problem-solving is almost entirely strategic. But it is a fact, one that explains the endless mire surrounding ethical issues. Pretending will not make it otherwise.

If Shaun knew anything scientific about critical thinking, he would have recognized what he was doing, he would have acknowledged the numerous ways groupishness necessarily drives his discourse. But he doesn’t. Since teaching critical thinking stands high among his group’s mythic values, interlocutors such as myself put him in a jam. If he doesn’t actually know anything about critical thinking, then odds are he’s simply in the indoctrination business (just as his outgroup competitors claim). The longer he engages someone just as clownish, but a little more in the scientific know, the more apparent this becomes. The easiest way to prevent contradiction is to shut down contrary voices. The best way to shut down contrary voices is to claim moral indignation.

Demonizing Beale is the easy road. The uncritical, self-congratulatory one. You kick him off your porch, tell him to throw his own party. Then you spend the afternoon laughing him off with your friends, those little orgies of pious self-congratulation that we all know so well. You smile, teeth gleaming, convinced that justice has been done and the party saved. Meanwhile the bass booms ever louder across the street. More and more cars line up.

But that’s okay, because life is easier among good-looking friends who find you good-looking as well.

Hugos Weaving

by rsbakker

Red Skull

So the whole idea behind Three Pound Brain, way back when, was to open a waystation between ‘incompatible empires,’ to create a forum where ingroup complacencies are called out and challenged, where our native tendency to believe flattering bullshit can be called to account. To this end, I instigated two very different blog wars, one against an extreme ‘right’ figure in the fantasy community, Theodore Beale, another against an extreme ‘left’ figure, Benjanun Sriduangkaew. All along the idea was to expose these individuals, to show, at least for those who cared to follow, how humans were judging machines, prone to rationalize even the most preposterous and odious conceits. Humans are hardwired to run afoul of pious delusion. The science is only becoming more definitive in this regard, I assure you. We are, each and every one of us, walking, talking yardsticks. Unfortunately, we also have a tendency to affix spearheads to our rules, to confuse our sense of exceptionality and entitlement with the depravity and criminality of others—and to make them suffer.

When it comes to moral reasoning, humans are incompetent clowns. And in an age where high-school students are reengineering bacteria for science fairs, this does not bode well for the future. We need to get over ourselves—and now. Blind moral certainty is no longer a luxury our species can afford.

Now we all watch the news. We all appreciate the perils of moral certainty in some sense, the need to be wary of those who believe too hard. We’ve all seen the ‘Mad Fanatic’ get his or her ‘just deserts’ in innumerable different forms. The problem, however, is that the Mad Fanatic is always the other guy, while we merely enjoy the ‘strength of our convictions.’ Short of clinical depression at least, we’re always—magically you might say—the obvious ‘Hero.’

And, of course, this is a crock of shit. In study after study, experiment after experiment, researchers find that, outside special circumstances, moral argumentation and explanation are strategic—with us being none the wiser! (I highly recommend Joshua Greene’s Moral Tribes or Jonathan Haidt’s The Righteous Mind for a roundup of the research). It may feel like divine dispensation, but dollars to donuts it’s nothing more than confabulation. We are programmed to advance our interests as truth; we’d have no need of Judge Judy otherwise!

It is the most obvious invisible thing. But how do you show people this? How do you get humans to see themselves as the moral fool, as the one automatically—one might even say, mechanically—prone to rationalize their own moral interests, unto madness in some cases? The strategy I employ in my fantasy novels is to implicate the reader, to tweak their moral pieties, and then to jam them the best I can. My fantasy novels are all about the perils of moral outrage, the tragedy of willing the suffering of others in the name of some moral verity, and yet I regularly receive hate mail from morally outraged readers who think I deserve to suffer—fear and shame, in most cases, but sometimes death—for having written whatever it is they think I’ve written.

The blog wars were a demonstration of a different sort. The idea, basically, was to show how the fascistic impulse, like fantasy, appeals to a variety of inborn cognitive conceits. Far from a historical anomaly, fascism is an expression of our common humanity. We are all fascists, in our way, allergic to complexity, suspicious of difference, willing to sacrifice strangers on the altar of self-serving abstractions. We all want to master our natural and social environments. Public school is filled with little Hitlers—and so is the web.

And this, I wanted to show, is the rub. Before the web, we either kept our self-aggrandizing, essentializing instincts to ourselves or risked exposing them to the contradiction of our neighbours. Now, search engines assure that we never need run critical gauntlets absent ready-made rationalizations. Now we can indulge our cognitive shortcomings, endlessly justify our fears and hatreds and resentments. Now we can believe with the grain of our stone-age selves. The argumentative advantage of the fascist is not so different from the narrative advantage of the fantasist: fascism, like fantasy, cues cognitive heuristics that once proved invaluable to our ancestors. To varying degrees, our brains are prone to interpret the world through a fascistic lens. The web dispenses fascistic talking points and canards and ad hominems for free—whatever we need to keep our clown costumes intact, all the while thunderously declaring ourselves angels. Left. Right. It really doesn’t matter. Humans are bigots, prone to strip away complexity and nuance—the very things required to solve modern social problems—to better indulge our sense of moral superiority.

For me, Theodore Beale (aka, Vox Day) and Benjanun Sriduangkaew (aka, acrackedmoon) demonstrated a moral version of the Dunning-Kruger effect, how the bigger the clown, the more inclined they are to think themselves angels. My strategy with Beale was simply to show the buffoonery that lay at the heart of his noxious set of views. And he eventually obliged, explaining why, despite the way his claims epitomize bias, he could nevertheless declare himself the winner of the magical belief lottery:

Oh, I don’t know. Out of nearly 7 billion people, I’m fortunate to be in the top 1% in the planet with regards to health, wealth, looks, brains, athleticism, and nationality. My wife is slender, beautiful, lovable, loyal, fertile, and funny. I meet good people who seem to enjoy my company everywhere I go.

He. Just. Is. Superior.

A king clown, you could say, lucky, by grace of God.

Benjanun Sriduangkaew, on the other hand, posed more of a challenge, since she was, when all was said and done, a troll in addition to a clown. In hindsight, however, I actually regard my blog war with her as the far more successful one simply because she was so successful. My schtick, remember, is to show people how they are the Mad Fanatic in some measure, large or small. Even though Sriduangkaew’s tactics consisted of little more than name-calling, even though her condemnations were based on reading the first six pages of my first book, a very large number of ‘progressive’ individuals were only too happy to join in, and to viscerally demonstrate the way moral outrage cares nothing for reasons or casualties. What’s a false positive when traitors are in our midst? All that mattered was that I was one of them according to so-and-so. I would point out over and over how they were simply making my argument for me, demonstrating how moral groupthink deteriorates into punishing strangers, and feeling self-righteous afterward. I would receive tens of thousands of hits on my posts, and less than a dozen clicks on the links I provided citing the relevant research. It was nothing short of phantasmagorical. I was, in some pathetic, cultural backwoods way, the target of a witch-hunt.

(The only thing I regret is that several of my friends became entangled, some jumping ship out of fear (sending me ‘please relent’ letters), others, like Peter Watts, for the sin of calling the insanity insanity.)

It’s worth noting in passing that some Three Pound Brain regulars actually tried to get Beale and Sriduangkaew together. Beale, after all, actually held the views she so viciously attributed to me, Morgan, and others. He was the real deal—openly racist and misogynistic—and his blog had more followers than all of her targets combined. Sriduangkaew, on the other hand, was about as close to Beale’s man-hating feminist caricature as any feminist could be. But… nothing. Like competing predators on the savannah, they circled on opposite sides of the herd, smelling one another, certainly, but never letting their gaze wander from their true prey. It was as if, despite the wildly divergent content of their views, they recognized they were the same.

So here we stand a couple of years after the fray. Sriduangkaew, as it turns out, was every bit as troubled as she sounded, and caused others far, far more grief than she ever caused me. Beale, on the other hand, has been kind enough to demonstrate yet another one of my points with his recent attempt to suborn the Hugos. Stories of individuals gaming the Hugos are notorious, so in a sense the only thing that makes Beale’s gerrymandering remarkable is the extremity of his views. How? people want to know. How could someone so ridiculously bigoted come to possess any influence in our ‘enlightened’ day and age?

Here we come to the final, and perhaps most problematic moral clown in this sad and comedic tale: the Humanities Academic.

I’m guessing that a good number of you reading this credit some English professor with transforming you into a ‘critical thinker.’ Too bad there’s no such thing. This is what makes the Humanities Academic a particularly pernicious Mad Fanatic: they convince clowns—that is, humans like you and me—that we need not be clowns. They convince cohort after cohort of young, optimistic souls that buying into a different set of flattering conceits amounts to washing the make-up off, thereby transcending the untutored ‘masses’ (or what more honest generations called the rabble). And this is what makes their particular circus act so pernicious: they frame assumptive moral superiority—ingroup elitism—as the result of hard won openness, and then proceed to judge accordingly.

So consider what Philip Sandifer, “a PhD in English with no small amount of training in postmodernism” thinks of Beale’s Hugo shenanigans:

To be frank, it means that traditional sci-fi/fantasy fandom does not have any legitimacy right now. Period. A community that can be this effectively controlled by someone who thinks black people are subhuman and who has called for acid attacks on feminists is not one whose awards have any sort of cultural validity. That sort of thing doesn’t happen to functional communities. And the fact that it has just happened to the oldest and most venerable award in the sci-fi/fantasy community makes it unambiguously clear that traditional sci-fi/fantasy fandom is not fit for purpose.

Simply put, this is past the point where phrases like “bad apples” can still be applied. As long as supporters of Theodore Beale hold sufficient influence in traditional fandom to have this sort of impact, traditional fandom is a fatally poisoned well. The fact that a majority of voices in fandom are disgusted by it doesn’t matter. The damage has already been done at the point where the list of nominees is 68% controlled by fascists.

The problem, Sandifer argues, is institutional. Beale’s antics demonstrate that the institution of fandom is all but dead. The implication is that the science fiction and fantasy community ought to be ashamed, that it needs to gird its loins, clean up its act.

Many of you, I’m sure, find Sandifer’s point almost painfully obvious. Perhaps you’re thinking those rumours about Bakker being a closet this or that must be true. I am just another clown, after all. But catch that moral reflex, if you can, because if you give in, you will be unable—as a matter of empirical fact—to consider the issue rationally.

There’s a far less clownish (ingroupish) way to look at this imbroglio.

Let’s say, for a moment, that readership is more important than ‘fandom’ by far. Let’s say, for a moment, that the Hugos are no more or less meaningful than any other ingroup award, just another mechanism that a certain bunch of clowns uses to confer prestige on those members who best exemplify their self-regarding values—a poor man’s Oscars, say.

And let’s suppose that the real problem facing the arts community lies in the impact of technology on cultural and political groupishness, on the way the internet and preference-parsing algorithms continue to ratchet buyers and sellers into ever more intricately tuned relationships. Let’s suppose, just for instance, that so-called literary works no longer reach dissenting audiences, and so only serve to reinforce the values of readers…

That precious few of us are being challenged anymore—at least not by writing.

The communicative habitat of the human being is changing more radically than at any time in history, period. The old modes of literary dissemination are dead or dying, and with them all the simplistic assumptions of our literary past. If writing that matters is writing that challenges, the writing that matters most has to be writing that avoids the ‘preference funnel,’ writing that falls into the hands of those who can be outraged. The only writing that matters, in other words, is writing that manages to span significant ingroup boundaries.

If this is the case, then Beale has merely shown us that science fiction and fantasy actually matter, that as a writer, your voice can still reach people who can (and likely will) be offended… as well as swayed, unsettled, or any of the things Humanities clowns claim writing should do.

Think about it. Why bother writing stories with progressive values for progressives only, that is, unless moral entertainment is largely what you’re interested in? You gotta admit, this is pretty much the sum of what passes for ‘literary’ nowadays.

Everyone’s crooked is someone else’s straight—that’s the dilemma. Since all moral interpretations are fundamentally underdetermined, there is no rational or evidential means to compel moral consensus. Pretty much anything can be argued when it comes to questions of value. There will always be Beales and Sriduangkaews, individuals adept at rationalizing our bigotries—always. And guess what? The internet has made them as accessible as fucking Wal-Mart. This is what makes engaging them so important. Of course Beale needs to be exposed—but not for the benefit of people who already despise his values. Such ‘exposure’ amounts to nothing more than clapping one another on the back. He needs to be exposed in the eyes of his own constituents, actual or potential. The fact that the paths leading to bigotry run downhill makes the project of building stairs all the more crucial.

‘Legitimacy,’ Sandifer says. Legitimacy for whom? For the likeminded—who else? But that, my well-educated friend, is the sound-proofed legitimacy of the Booker, or the National Book Awards—which is to say, the legitimacy of the irrelevant, the socially inert. The last thing this accelerating world needs is more ingroup ejaculate. The fact that Beale managed to pull this little coup is proof positive that science fiction and fantasy matter, that we dwell in a rare corner of culture where the battle of ideas is for… fucking… real.

And you feel ashamed.

Reason, Bondage, Discipline

by rsbakker

We can understand all things by her; but what she is we cannot apprehend.

–Robert Burton, Anatomy of Melancholy, 1652


So I was rereading Ray Brassier’s account of Churchland and eliminativism in his watershed Nihil Unbound: Enlightenment and Extinction the other day and I thought it worth a short post given the similarities between his argument and Ben’s. I’ve already considered his attempt to rescue subjectivity from the neurobiological dismantling of the self in “Brassier’s Divided Soul.” And in “The Eliminativistic Implicit II: Brandom in the Pool of Shiloam,” I dissected the central motivating argument for his brand of normativism (the claim that the inability of natural cognition to substitute for intentional cognition means that only intentional cognition can theoretically solve intentional cognition), showing how it turns on metacognitive neglect and thus can only generate underdetermined claims. Here I want to consider Brassier’s problematic attempt to domesticate the challenge posed by scientific reason, and to provision traditional philosophy with a more robust sop.

In Nihil Unbound, Brassier casts Churchland’s eliminativism as the high water mark of disenchantment, but reads his appeal to pragmatic theoretical virtues as a concession to the necessity of a deflationary normative metaphysics. He argues (a la Sellars) that even though scientific theories possess explanatory priority over manifest claims, manifest claims nevertheless possess conceptual parity. The manifest self is the repository of requisite ‘conceptual resources,’ what anchors the ‘rational infrastructure’ that makes us intelligible to one another as participants in the game of giving and asking for reasons—what allows, in other words, science to be a self-correcting exercise.

What makes this approach so attractive is the promise of providing transcendental constraint absent ontological tears. Norms, reasons, inferences, and so on, can be understood as pragmatic functions, things that humans do, as opposed to something belonging to the catalogue of nature. This has the happy consequence of delimiting a supra-natural domain of knowledge ideally suited to the kinds of skills philosophers already possess. Pragmatic functions are real insofar as we take them to be real, but exist nowhere else, and so cannot possibly be the object of scientific study. They are ‘appearances merely,’ albeit appearances that make systematic, and therefore cognizable, differences in the real world.

Churchland’s eliminativism, then, provides Brassier with an exemplar of scientific rationality and the threat it poses to our prescientific self-understanding that also exemplifies the systematic dependence of scientific rationality on pragmatic functions that cannot be disenchanted on pain of scuttling the intelligibility of science. What I want to show is how in the course of first defending and then critiquing Churchland, Brassier systematically misconstrues the challenge eliminativism poses to all philosophical accounts of meaning. Then I want to discuss how his ‘thin transcendentalism’ actually requires this misconstrual to get off the ground.

The fact that Brassier treats Churchland’s eliminativism as exemplifying scientific disenchantment means that he thinks the project is coherent as far as it goes, and therefore denies the typical tu quoque arguments used to dismiss eliminativism more generally. Intentionalists, he rightly points out, simply beg the question when accusing eliminativists of ‘using beliefs to deny the reality of beliefs.’

“But the intelligibility of [eliminative materialism] does not in fact depend upon the reality of ‘belief’ and ‘meaning’ thus construed. For it is precisely the claim that ‘beliefs’ provide the necessary form of cognitive content, and that propositional ‘meaning’ is thus the necessary medium for semantic content, that the eliminativist denies.” (15)

The question is, What are beliefs? The idea that the eliminativist must somehow ‘presuppose’ one of the countless, underdetermined intentionalist accounts of belief to be able to intelligibly engage in ‘belief talk’ amounts to claiming that eliminativism has to be wrong because intentionalism is right. The intentionalist, in other words, is simply begging the question.

The real problem that Churchland faces is the problem that all ‘scientistic eliminativism’ faces: theoretical mutism. Cognition is about getting things right, so any account of cognition lacking the resources to explain its manifest normative dimension is going to seem obviously incomplete. And indeed, this is the primary reason eliminative materialism remains a fringe position in psychology and philosophy of mind today: it quite simply cannot account for what, pretheoretically, seems to be the most salient feature of cognition.

The dilemma faced by eliminativism, then, is dialectical, not logical. Theory-mongering in cognitive science is generally abductive, a contest of ‘best explanations’ given the intuitions and scientific evidence available. Insofar as eliminativism has no account of things like the normativity of cognition, it is doomed to remain marginal, simply because it has no horse in the race. As Kriegel says in Sources of Intentionality, eliminativism “does very poorly on the task of getting the pretheoretically desirable extension right” (199), fancy philosopher talk for ‘it throws the baby out with the bathwater.’

But this isn’t quite the conclusion Brassier comes to. The first big clue comes in the suggestion that Churchland avoids the tu quoque because “the dispute between [eliminative materialism] and [folk psychology] concerns the nature of representations, not their existence” (16). Now although it is the case that possessing an alternative theory makes it easier to recognize the question-begging nature of the tu quoque, the tu quoque is question-begging regardless. Churchland need only be skeptical to deny rather than affirm the myriad, underdetermined interpretations of belief one finds in intentional philosophy. He no more need specify any alternative theory to use the word ‘belief’ than my five-year-old daughter does. He need only assert that the countless intentionalist interpretations are wrong, and that the true nature of belief will become clear once cognitive science matures. It just so happens that Churchland has a provisional neuroscientific account of representation.

As an eliminativist, having a theoretical horse in the race effectively blocks the intuition that you must be riding one of the myriad intentional horses on the track, but the intuition is faulty all the same. Having a theory of meaning is a dialectical advantage, not a logical necessity. And yet nowhere does Brassier frame the problem in these terms. At no point does he distinguish the logical and dialectical aspects of Churchland’s situation. On the contrary, he clearly thinks that Churchland’s neurocomputational alternative is the only thing rescuing his view. In other words, he conflates the dialectical advantage of possessing an alternate theory of meaning with logical necessity.

And as we quickly discover, this oversight is instrumental to his larger argument. Brassier, it turns out, is actually a fan of the tu quoque—and a rather big one at that. Rather than recognizing that Churchland’s problem is abductive, he frames it more abstrusely as a “latent tension between his commitment to scientific realism on the one hand, and his adherence to a metaphysical naturalism on the other” (18). As I mentioned above, Churchland finds himself in a genuine dialectical bind insofar as accounts of cognition that cannot explain ‘getting things right’ (or other apparent intentional properties of cognition) seem to get the ‘pretheoretically desirable extension’ wrong. This argumentative predicament is very real. Pretheoretically, at least, ‘getting things right’ seems to be the very essence of cognition, so the dialectical problem posed is about as serious as can be. So long as intentional phenomena as they appear remain part of the pretheoretically desirable extension of cognitive science, then Churchland is going to have difficulty convincing others of his view.

Brassier, however, needs the problem to be more than merely dialectical. He needs some way of transforming the dialectically deleterious inability to explain correctness into warrant for a certain theory of correctness—namely, some form of pragmatic functionalism. He needs, in other words, the tu quoque. He needs to show that Churchland, whether he knows it or not, requires the conceptual resources of the manifest image as a condition of understanding science as an intelligible enterprise. The way to show this requirement, Brassier thinks, is to show—you guessed it—the inability of Churchland’s neurocomputational account of representation to explain correctness. His inability to explain correctness, the assumption is, means he has no choice but to utilize the conceptual resources of the manifest image.

But as we’ve seen, the tu quoque begs the question against the eliminativist regardless of their ability to adduce alternative explanations for the phenomena at issue. Possessing an alternative simply makes the tu quoque easier to dismiss. Churchland is entirely within his rights to say, “Well, Ray, although I appreciate the exotic interpretation of theoretical virtue you’ve given, it makes no testable predictions, and it shares numerous family resemblances to countless other such chronically underdetermined theories, so I think I’m better off waiting to see what the science has to say.”

It really is as easy as that. Only the normativist is appalled, because only they are impressed by their intuitions, the conviction that some kind of intentionalist account is the only game in town.

So ultimately, when Brassier argues that “[t]he trouble with Churchland’s naturalism is not so much that it is metaphysical, but that it is an impoverished metaphysics, inadequate to the task of grounding the relation between representation and reality” (25) he’s mistaking a dialectical issue for an inferential and ontological one, conflating a disadvantage in actual argumentative contexts (where any explanation is preferred to no explanation) with something much grander and far more controversial. He thinks that lacking a comprehensive theory of meaning automatically commits Churchland to something resembling his theory of meaning, a deflationary normative metaphysics, namely his own brand of pragmatic functionalism.

For the naturalist, lacking answers to certain questions can mean many different things. Perhaps the question is misguided. Perhaps we simply lack the information required. Perhaps we have the information, but lack the proper interpretation. Maybe the problem is metaphysical—who the hell knows? When listing these possibilities, ‘Perhaps the phenomenon is supra-natural,’ is going to find itself somewhere near, ‘Maybe ghosts are real,’ or any other possibility that amounts to telling science to fuck off and go home! A priori claims on what science can and cannot cognize have a horrible track record, period. As Anthony Chemero wryly notes, “nearly everyone working in cognitive science is working on an approach that someone else has shown to be hopeless, usually by an argument that is more or less purely philosophical” (Radical Embodied Cognitive Science, 3).

Intentional cognition is heuristic cognition, a way to cognize systems without cognizing the operations of those systems. What Brassier calls ‘conceptual parity’ simply pertains to the fact that intentional cognition possesses its own adaptive ecologies. It’s a ‘get along’ system, not a ‘get it right’ system, which is why, as a rule, we resort to it in ‘get along’ situations. The sciences enjoy ‘explanatory priority’ because they cognize systems via cognizing the operations of those systems: they solve on the basis of information regarding what is going on. They constitute a ‘get it right’ system. The question that Brassier and other normativists need to answer is why, if intentional cognition is the product of a system that systematically ignores what’s going on, we should think it could provide reliable theoretical cognition regarding what’s going on. How can a get along system get itself right? The answer quite plainly seems to be that it can’t, that the conundrums and perpetual disputation that characterize all attempts to solve intentional cognition via intentional cognition are exactly what we should expect.

Maybe the millennial discord is just a coincidence. Maybe it isn’t a matter of jamming the stick to find gears that don’t exist. Either way, the weary traveller is entitled to know how many more centuries are required, and, if these issues will never find decisive resolution, why they should continue the journey. After all, science has just thrown down the walls of the soul. Billions are being spent to transform the tsunami of data into better instruments of control. Perhaps tilting yet one more time at problems that have defied formulation, let alone solution, for thousands of years is what humanity needs…

Perhaps the time has come to consider worst case scenarios–for real.

Which brings us to the moral: You can’t concede that science monopolizes reliable theoretical cognition and then swear up and down that some chronically underdetermined speculative account somehow makes that reliability possible, regardless of what that reliability says! The apparent conceptual parity between manifest and scientific images is something only the science can explain. This allows us to see just how conservative Brassier’s position is. Far from pursuing the “conceptual ramifications entailed by a metaphysical radicalization of eliminativism” (31), Brassier is actually arguing for the philosophical status quo. Far from following reason no matter where it leads, he is, like so many philosophers before him, playing another version of the ‘domain boundary game,’ marshalling what amounts to a last-ditch effort to rescue intentional philosophy from the depredations of science. Or as he himself might put it, devising another sop.

As he writes,

“At this particular historical juncture, philosophy should resist the temptation to install itself within one of the rival images… Rather, it should exploit the mobility that is one of the rare advantages of abstraction in order to shuttle back and forth between images, establishing conditions of transposition, rather than synthesis, between the speculative anomalies thrown up within the order of phenomenal manifestation, and the metaphysical quandaries generated by the sciences’ challenge to the manifest order.” (231)

Isn’t this just another old, flattering trope? Philosophy as fundamental broker, the medium that allows the dead to speak to the living, and the living to speak to the dead? As I’ve been arguing for quite some time, the facts on the ground simply do not support anything so sunny. Science will determine the relation between the manifest and the scientific images, the fate of ‘conceptual parity,’ because science actually has explanatory priority. The dead decide, simply because nothing has ever been alive, at least not the way our ancestors dreamed.

Bleaker than Bleak (by Paul J. Ennis)

by rsbakker

Bleak theory accepts that it itself is almost entirely wrong. It does so precisely because it accepts that humans are almost always wrong about how it goes with the world: so what are the chances of this theory being right? In this paradoxical, confused sense it is a theory of human fallibility, of the inability of humans to see themselves for what they are, even when, as per contemporary neuroscience, we kind of know (have you not yet heard the “good” news that you are not what you think you are?). We kind of know because we are beginning to see ourselves from the third-person perspective. Subjectivity is devolving into objectivity, and objectivity entails seeing things clearly, even if not transparently. That opacity, always there in the subject-object distinction, is collapsing, and the consequences are bleak. The second reality-appearance “appeared” as a crack we cracked. It has been going on ever since. Consider the insanity of the entire post-Kantian tradition and the in-itself – is it not just an expression of what it feels like when you recognise that the “transparent cage” (Sartre) of looking directly at the world is a hallucination, a real one, all the same?

We cannot outpace this very blindspot that renders us a self or a subject. We are deluded about our beliefs or intentions (a given, so to speak), but more significantly we are deluded that somehow we can ‘recursively’ leap ‘over our own shoulders’ and see not just the trick, as Bakker might put it, but something substantial. Rather than just a model or a process withholding information from “you” yourself. Your own brain lies to you. It hides noise (‘data-reduction’) so that you do not collapse into a schizophrenia of buzzing information. This much Bergson, Deleuze, and Meillassoux have suggested is a most horrifying possibility. If all the data of the world flowed in you would be at one with matter, but what would you hear? Do you even want to countenance what that might involve? Hell is all around you. Your brain is just trying its best to stop you being lit on fire.

Everything is pretty patterns (Ladyman and Ross) and you are too. The problem with patterns is that sometimes they clash. If the brain has been hacked together it’s bound to be buggy as hell. Look at your computer. One subpersonal process goes askew and you need it fixed. The technician tries a few things, maybe it works, or maybe it does not. Maybe, as in severe cases of schizophrenia or depression, you just have a crappy system. I’ve said before that consciousness is the holocaust of happiness, meant sincerely, not lightly, and by this I mean that if the conditions or constraints that created a self never came together, in just the way it has for us, there would never have been any conscious suffering. Consciousness is the final correlate of all human suffering. You can blame almost anything else, but had “we” (is it really “us”) never believed we should be stable, integrated selves none of the bugs that followed would have appeared. Our world would have been a beautiful, empty, unthinking collection of material patterns: perhaps even a heaven of unthinking noise?

Chaos, as I am sure you have heard, is a ladder, but so too is evolution. Lifted up from the dregs of biology into cultural evolution, we came to see what nothing else could see. Some foolishly believed this was a gift. Civilisations were realised, when in reality each was built on war. Philosophers know how to dance around this problem: we can think our way, collectively, toward a more rational, constrained future. Except collective intelligence most often works best when deployed toward destructive ends: where do you find the most creative minds? The war-room. ‘War, everywhere I look…’ (Tormentor). To make it explicit, so to speak: if you want new masters, as Lacan said, you will find them. Look into the dead eyes of those who desire freedom and there rests fear. Fear that they will build a palace of reason only for the stability so hard fought for to collapse under the weight of the chronic irrationalism of the baser human aspect, untameable, unpredictable, and unknowable. History books are the evidence you stack up to adduce this, but at least today we have learnt enough to include the accelerated process of decline in our calculations. We no longer fight our enemies. We kiss them on the mouth and ask if we can join them in the decadent decline in advance.

I know I should not speak like this. What a waste to spend your time reasoning about the impossibility of one day sticking the hook in and indexing some little part of reality that, Tetris-like, delivers temporary respite. Only, of course, here come more bricks. As I feel, always in my very bones, what I know is coming, the far-off end (it is never close enough), bleak theory morphs into ever bleaker theory, sometimes just bleak, once bleaker than black, but now bleaker than bleak. Rust Cohle, in True Detective, at one point lets his interrogators know: ‘I know who I am. And after all these years, there’s a victory in that.’ It is the most paradoxical of victories. The “pyrrhic” victory of traditional philosophy, found in thinkers as diverse as Husserl and Meillassoux, where one gains a foothold on the world after a long struggle. The question bleak theory asks, adrift the perennial tradition, is whether knowing who we are will result in precisely the inverse of the oldest goal of philosophical self-knowledge: we cannot understand ourselves except as that entity which cannot truly know itself. Know thyself? Perhaps all along it has been the wrong question.

The tradition of philosophy always hinges on a subtle revision of position and orientation. This is the generative process whereby, for instance, the ambiguity of postmodern philosophy culminates in a counter-revolution of rational normativity. This is our contemporary example, but it is found everywhere. Heidegger ontologising phenomenology. Hegel gobbling up the Kantian noumena. Today there is possibly another: one that, again to evoke Rust Cohle, means to ‘start asking the right fucking questions.’ Not about what we are, but what we are not: “transcendental egos,” “subjects,” or “selves.” Perhaps not even “agents,” but I leave that problem for other minds to debate. I know what I am, a ‘disinterested onlooker’ (Husserl), but deluded that I am unconcerned.

True madness lies ahead for our species. Normativity, humanism, anti-reductionism, anything not bathed in the acid of neuroscience are all contributing to a sharpening of the knives. Building dams to keep the coming dissolution at bay, they will render the shattering of the illusion that much harsher, harder. We are not going to Mars. We are going to go out of our minds.


[ Dr. Paul J. Ennis is a Research Fellow in the School of Business, Trinity College Dublin. He is the author of Continental Realism (Zero Books, 2011), co-editor with Peter Gratton of the Meillassoux Dictionary (Edinburgh University Press, 2014) and co-editor with Tziovanis Georgakis of Heidegger in the Twenty-First Century (Springer, 2015). A version of bleak theory, ‘Bleak,’ first appeared in the DVD booklet for A Spell to Ward off the Darkness (Soda Pictures, 2014).]


Are Minds like Witches? The Catastrophe of Scientific Progress (by Ben Cain)

by rsbakker



As scientific knowledge has advanced over the centuries, informed people have come to learn that many traditional beliefs are woefully erroneous. There are no witches, ghosts, or disease-causing demons, for example. But are cognitive scientists currently on the verge of showing also that belief in the ordinarily-defined human self is likewise due to a colossal misunderstanding, that there are no such things as meaning, purpose, consciousness, or personal self-control? Will the assumption of personhood itself one day prove as ridiculous as the presumption that some audacious individuals can make a pact with the devil?

Progress and a World of Mechanisms

According to this radical interpretation of contemporary science, everything is natural and nature consists of causal relationships between material aggregates that form systems or mechanisms. The universe is thus like an enormous machine except that it has no intelligent designer or engineer. Atoms evolve into molecules, stars into planets, and at least one planet has evolved life on its surface. But living things are really just material objects with no special properties. The only efficacious or real property in nature, very generally speaking, is causality, and thus the real question is always just what something can do, given its material structure, initial conditions, and the laws of nature. As one of the villains of The Matrix Reloaded declares, “We are slaves to causality.” Thus, instead of there being people or conscious, autonomous minds who use symbols to think about things and to achieve their goals, there are only mechanisms, which is to say forces acting on complex assemblies of material components, causing the system to behave in one way rather than another. Just as the sun acts on the Earth’s water cycle, causing oceans to evaporate and thus forming clouds that eventually rain and return the water via snowmelt runoff and groundwater flow to the oceans, the environment acts on an animal’s senses, which send signals to its brain whereupon the brain outputs a more or less naturally selected response, depending on whether the genes exercise direct or indirect control over their host. Systems interacting with systems, as dictated by natural laws and probabilities—that’s all there is, according to this interpretation of science.

How, then, do myths form that get the facts so utterly wrong? Myths in the pejorative sense form as a result of natural illusions. Omniscience isn’t given to lowly mammals. To compensate for their being thrown into the world without due preparation, as a result of the world’s dreadful godlessness, some creatures may develop the survival strategy of being excessively curious, which drives them often to err on the side not of caution but of creativity. We track not just the patterns that lead us to food or shelter, but myriad other structures on the off-chance that they’re useful. And as we evolve more intelligence than wisdom, we creatively interpret these patterns, filling the blanks in our experience with placeholder notions that indicate both our underlying ignorance and our presumptuousness. In the case of witches, for example, we mistake some hapless individual’s introversion and foreignness for some evil complicity in suffering that’s actually due merely to bad luck and to nature’s heartlessness. Given enough bumbling and sanctimony, that lack of information about a shy foreigner results in the burning of a primate for allegedly being a witch. A suitably grotesque absurdity for our monstrously undead universe.

And in the corresponding case of personhood itself, the lack of information about the brain causes our inquisitive species to reify its ignorance, to mistake the void found by introspection for spirit or mind which our allegedly wise philosophers then often interpret as being all that’s ultimately real. That is, we try to control ourselves along with our outer environment, to enhance our fitness to carry our genes, but because our brain didn’t evolve to reveal its mechanisms to itself, the brain outputs nonsense to satisfy its curiosity, and so the masses mislead themselves with fairytales about the supernatural property of personhood, misinterpreting the lack of inner access as being miraculous direct acquaintance with oneself by something called self-consciousness. We mislead ourselves into concluding that the self is more than the brain that can’t understand its operations without scientific experimentation. Instead, we’re seduced into dogmatizing that our blindness to our neural self is actually magical access to a higher, virtually immaterial self.

Personhood and the Natural Reality of Illusions

So much for the progressive interpretation of science. I believe, however, that this interpretation is unsustainable. The serpent’s jaws come round again to close on the serpent’s own tail, and so we’re presented with yet another way to go spectacularly wrong; that is, the radical, progressive naturalist joins the deluded supernaturalist in an extravagant leap of logic. To see this, realize that the above picture of nature can be no picture at all. To speak of a picture, a model, a theory, or a worldview, or even of thinking or speaking in general, as these words are commonly defined is, of course, forbidden to the austere naturalist. There are no symbols in this interpretation which is no interpretation; there are only phases in the evolution of material systems, objects caught between opposing forces that change according to ceteris paribus laws which are not really laws. Roughly speaking—and remember that there’s no such thing as speaking—there’s only causality in nature. There are no intentional or normative properties, no reference, purpose, or goodness or badness.

In the unenlightened mode of affecting material systems, this “means” that if you interpret scientific progress as entailing that there are no witches, demons, or people in general, in the sense that the symbols for these entities are vacuous, whereas other symbols enjoy meaningful status such as the science-friendly words, “matter,” “force,” “law,” “mechanism,” “evolution,” and so forth, you’ve fallen into the same trap that ensnares the premodern ignoramus who fails to be humbled by her grievous knowledge deficit. All symbols are equally bogus, that is, supernatural, according to the foregoing radical naturalism. Thus, this radical must divest herself not just of the premodern symbols, but of the scientific ones as well—assuming, that is, she’s bent on understanding these symbols in terms of the naïve notion of personhood which, by hypothesis, is presently being made obsolete by science. So for example, if I say, “Science has shown that there are no witches, and the commonsense notion of the mind is likewise empty,” the radical naturalist is hardly free to interpret this as saying that premodern symbols are laughable whereas modern scientific ones are respectable. In fact, strictly speaking, she fails to be a thoroughgoing eliminativist as soon as she assumes that I’ve thereby said anything at all. All speaking is illusion, for the radical naturalist; there are only forces acting on material systems, causing those systems to behave, to exercise their material capacities, whereupon the local effects might feed back into a larger system, leading to cycles of average collective behaviour. There is no way of magically capturing that mechanistic reality in symbolic form; instead, there’s just the illusion of doing so.

How, then, should scientific progress be understood, given that there are no such things as scientific theories, progress, or understanding, as these things are commonly defined? In short, what’s the uncommon, enlightened way of understanding science (which is actually no sort of understanding)? What’s the essence of postmodern, scientific mysticism, as we might think of it? In other words, what will the posthuman be doing once her vision is unclouded by illusions of personhood and so is filled with mechanisms as such? The answer must be put in terms, once again, of causality. Scientific enlightenment is a matter (literally) of being able to exercise greater control over certain systems than is available to those who lack scientific tools. In short, assuming we define ourselves as a species in terms of the illusions of a supernatural self, the posthuman who embraces radical naturalism and manages to clear her head of the cognitive vices that generate those illusions will be something of a pragmatist. She’ll think in terms of impersonal systems acting and reacting to each other and being forced into this or that state, and she’ll appreciate how she in turn is driven by her biochemical makeup and evolutionary history to survive by overpowering and reshaping her environment, aided by this or that trait or tool.

Radical, eliminativistic naturalism thus implies some version of pragmatism. The version not implied would be one that defines usefulness in terms of the satisfaction of personal desires. (And, of course, there would really be some form of causality instead of any logical implication.) But the point is that for the eliminativist, an illusion-free individual would think purely in terms of causality and of materialistic advantage based on a thorough knowledge of the instrumental value of systems. She’d be pushed into this combative stance by her awareness that she’s an animal that’s evolved with that survivalist bias, and so her scientific understanding wouldn’t be neutral or passive, but supplemented by a more or less self-interested evaluation of systems. She’d think in terms of mechanisms, yes, but also of their instrumental value to her or to something with which she’s identified, although she wouldn’t assume that anyone’s survival, including hers, is objectively good.

For example, the radical naturalist might think of systems as posing problems to be solved. The posthuman, then, would be busy solving problems, using her knowledge to make the environment more conducive to her. She wouldn’t think of her knowledge as consisting of theories made up of symbols; instead, she’d see her brain and its artificial extensions as systems that enable her to interact successfully with other systems. The success in question would be entirely instrumental, a matter of engineering with no presumption that the work has any ultimate value. There could be no approval or disapproval, because there would be no selves to make such judgments, apart from any persistence of a deluded herd of primates. The re-engineered system would merely work as designed, and the posthuman would thereby survive and be poised to meet new challenges. This would truly be work for work’s sake.

What, then, should the enlightened pragmatist say about the dearth of witches? Can she sustain the sort of positivistic progressivism with which I began this article? Would she attempt to impact her environment by making sounds that are naively interpreted as meaning that science has shown there are no witches? No, she would “say” only that the neural configuration leading to behaviour associated with the semantic illusion that certain symbols correspond to witchy phenomena has causes and effects A and B, whereas the neural configuration leading to so-called enlightened, modern behaviour, often associated with the semantic illusion that certain other symbols correspond to the furious buying and selling of material goods and services and to equally tangible, presently-conventional behaviour, has causes and effects C and D. Again, if everything must be perceived in terms of causality, the neural states causing certain primates to be burned as witches should be construed solely in terms of their causes and effects. In short, the premodern, allegedly savage illusion of witchcraft loses its sting of embarrassment, because that illusion evidently had causal power and thus a degree of reality. Cognitive illusions aren’t nothing at all; they’re effects of vices like arrogance, self-righteousness, impertinence, irrationality, and so forth, and they help to shape the real world. There’s no enlightened basis for any normative condemnation of such an illusion. All that matters is the pragmatic, instrumental judgment of something’s effectiveness at solving a problem.

Yes, if there’s no such thing as the meaning of a symbol, there are no witches, in that there’s no relation of non-correspondence between “witch” and creatures that would fit the description. Alas, this shouldn’t comfort the radical naturalist since there can likewise be no negative semantic relation between “symbol” and symbols to make sense of that statement about the nonexistence of witches. If naturalism forces us to give up entirely on the idea of intentionality, we mustn’t interpret the question of something’s nonexistence as being about a symbol’s failure to pick out something (since there would be no such thing as a symbol in the first place). And if we say there are no symbols, just as there are no witches or ghosts or emergent and autonomous minds, we likewise mustn’t think this is due merely to any semantic failure.

What, then, must nonexistence be, according to radical naturalism? It must be just relative powerlessness. To say that there are no witches “means” that the neural states involved in behaviour construed in terms of witchcraft are relatively powerless to systematically or reliably impact their environment. Note that this needn’t imply that the belief in witches is absolutely powerless. After all, religious institutions have subdued their flocks for millennia based on the ideology of demons, witches and the like, and so the pragmatist mustn’t pretend she can afford to “say” that witches have a purely negative ontological status. Again, just because there aren’t really any witches doesn’t mean there’s no erroneous belief in witchcraft, and that belief itself can have causal power. The belief might even conceivably lead to a self-fulfilling prophecy, in which case something like witchcraft will someday come into being. At any rate, the belief in witches opens up problems to be solved by engineering (whether to side with the oppressive Church or to overthrow it, etc.), and that would be the enlightened posthuman’s only concern with respect to witches.

Indeed, a radical naturalist who understands the cataclysmic implications of scientific progress has no epistemic basis whatsoever for belittling the causal role of a so-called illusion like witchcraft. Again, some neural states have causes and effects A and B while others have causes and effects C and D—and that’s it as far as objective reality is concerned. On top of this, at best, there’s pragmatic instrumentalism, which raises the question merely of the usefulness of the belief in witches. Is that belief entirely useless? Obviously not, as Western history attests. Is the belief in witches immoral or beneath our dignity as secular humanists? The question should be utterly irrelevant, since morality and dignity are themselves illusions, given radical naturalism; moreover, the “human” in “humanist” must be virtually empty. What an enlightened person could say with integrity is just that the belief in witches benefits some primates more than others, by helping to establish a dominance hierarchy.

The same goes for the nonexistence of minds, personhood, consciousness, semantic meaning, or purpose. If these things are illusions, so what? Illusions can have causal power, and the radical naturalist must distinguish between causal relations solely by assigning them their instrumental value, noting that some effects help some primates to survive by solving certain problems, while hindering others. Illusions are thus real enough for the truly radical naturalist. In particular, if the brain tries to discover its mechanisms through introspection and naturally comes up empty, that need not be the end of the natural process. The cognitive blind spot delivers an illusion of mentality or of immaterial spirituality, which in turn causes primates to act as if there were such things as cultures consisting of meaningful symbols, moral values and the like. We’d be misled into creating something that nevertheless exists as our creation. Just as the whole universe might have popped into existence from nothing, according to quantum mechanics, cognitive science might entail that personhood develops from the introspective experience of an inner emptiness. In fact, we’re not empty, because our heads are full of brain matter. But the tool of introspection can be usefully misapplied, as it evidently causes the whole panoply of culture-dependent behaviours.

What is it, then, to call personhood a mere illusion? What’s the difference between illusion and reality, for the radical naturalist, given that both can have causal power in the domain of material systems? If we say that illusions depend on ignorance of certain mechanisms, this turns all mechanisms into illusions and deprives us of so-called reality, assuming none of us is omniscient. As long as we select which mechanisms and processes to attend to in our animalistic dealings with the environment, we all live in bubble worlds based on that subjectivity, which thus has quasi-transcendental status. To illustrate, notice that when the comedian Bill Maher mocks the Fox News viewer for living in the Fox Bubble and for being ignorant of the “real world,” Maher forgets that he too lives in a culture, albeit in a liberal rather than a conservative one, and that he doesn’t conceive of everything with the discipline of strict impersonality or objectivity, as though he were the posthuman mystic.

What seems to be happening here is that the radical naturalist is liable to identify with a science-centered culture, and thus she’s quick to downgrade the experience of those who prefer the humanities, including philosophy, religion, and art. From the science-centered perspective, we’re fundamentally animals caught in systems of causality, but we nevertheless go on to create cultures in our bumbling way, blissfully ignorant of certain mechanistic realities and driven by cognitive vices and biases as we allow ourselves to be mesmerized by the “illusion” of a transcendent, immaterial self. But there’s actually no basis here for any value judgment one way or the other. From a barebones scientific “perspective,” the institution of science is as illusory as witchcraft. All that’s real are configurations of material elements that evolve in orderly ways—and witchcraft and personhood are free to share in that reality as illusions. Judging by the fact that the idea of witches has evidently caused some people to be treated accordingly and that the idea of the personal self has caused us to create a host of artificial, cultural worlds within the indifferent natural one, there appears to be more than enough reality to go around.
