Three Pound Brain

No bells, just whistling in the dark…

The Lesser Sound and Fury

by rsbakker

So a storm blew through last Tuesday night, a real storm, the kind we haven’t seen in a couple of years at least. I was just finishing up a disastrous night of NHL 13 (because NHL 14 is a rip-off) on PS3 (because my PS4 is a paperweight) with my buds out back in the garage. Frank fled. Ken strolled. ‘Good night, Motherfucker.’ ‘Goodnight.’ The sky was alive, strobes glimpsed through the Dark Lord’s laundry, thunder rattling the teeth of the world, booming across houses lined up like molars. I sat on the front porch to watch, squinting more for the booze than for the wind. There had been talk of tornados, but I wasn’t buying it, having lived in Tennessee. No, just a storm. We just don’t get the parking lot heat they need to germinate, let alone to feed. The air lacked the required energy.

The rain fell like gravel. Straight down. Euclidean rain, I thought.

But there was nothing linear about the lightning. The first strike ripped fabric too fundamental to be seen. The second had me out of my stupor as much as out of my seat, blinking for the instantaneous execution of night and shadow. Everything revealed God’s way: too quick to be grasped by eyes so small as these.

I stood, another animal floating in solution. I laughed the laugh of monkeys too stupid to cower. I thought of ancient fools.

The rain fell like gravel, massing across all the terrestrial surfaces, hard enough to shatter into sand, hanging like dust across ankles in summer fields. Then it faded, trailed into silence with analogue perfection, and I found myself standing in a glazed nocturnal world, everything turgid… shivering for the high-altitude chill.

I locked up the house, crawled into bed. I lay in bed listening to the passage of thunder… the far side of some cataclysmic charge. I watched white splash across the skylights.

And then came the blitz.

BOOM!

Something—an artillery shell pilfered from some World War I magazine from the sounds of it—exploded just a few blocks over. The house shook everyone awake.

BOOM!

Closer than the last—even nature believes in the strategic value of carpet bombing.

We huddled together, our small family of three, grinning for terror and wonder. I spoke brave words I can no longer remember.

BOOM!

Loud enough to crack wood, to swear in front of little children.

The next morning I awoke to the smell of a five-year-old farting. It seemed a miracle that everything was intact and sodden—no hoary old trees torn from their sockets, no branches hanging, necks broken, from powerlines. It seemed miraculous that a beast so vast could stomp through our neighbourhood with nary a casualty. Not a shrub. Not one drowned squirrel.

Only my fucking modem, and a week to remember what it was like, back before all this lesser sound and fury.

In the Shadow of Summer…

by rsbakker

Just an update, given that the inevitable summer disruption of my habitual routines seems to have already commenced.

First, SFF World aficionados were kind enough to include The Prince of Nothing in their top twenty all time best fantasy series.

Second, I gave a version of a science fiction novella entitled White Rain Country to my agent, who loves it. Hopefully, I’ll have some publication news soon.

I also submitted the short story solicited by the esteemed Midwest Studies in Philosophy for their special issue on philosophy and science fiction. Apparently there are some worries about the ‘uncooked’ nature of the content, so some kind of revisions might be necessary.

And lastly, things keep dragging on with my publishers regarding The Unholy Consult. My delay turning the manuscript in and the quick turnover of editorial staff in the industry mean that no one was up to speed on the series–but six months on from submission, we still have no word. My fear (not my agent’s) is that they might be re-evaluating their commitment to the series–the way all publishers are reviewing their commitments to their midlist authors. I know for a fact that other publishers are interested in snapping the series up, so there’s no need to organize a wake, but who knows what kind of delay would result. Perhaps shooting them emails explaining why they should believe this series will continue growing might help? I dunno.

The market only grows more and more crowded, and still there’s nothing quite like The Second Apocalypse. Distinction is key in this day and age…

I hope!

A Bestiary of Future Literatures (Reprise)

by rsbakker

[Apropos my presentation in Denmark, I thought it worthwhile reposting this little gem from the TPB vault…]

With the collapse of mainstream literary fiction as a commercially viable genre in 2036 and its subsequent replacement with Algorithmic Sentimentalism, so-called ‘human literature’ became an entirely state- and corporate-funded activity. Freed from market considerations, writers could concentrate on accumulating the ingroup prestige required to secure so-called ‘non-reciprocal’ sponsors. In the wake of the new sciences, this precipitated an explosion of ‘genres,’ some self-consciously consolatory, others bent on exploring life in the wake of the so-called ‘Semantic Apocalypse,’ the scientific discrediting of meaning and morality that remains the most troubling consequence of the ongoing (and potentially never-ending) Information Enlightenment.

Amar Stevens, in his seminal Muse: The Exorcism of the Human, famously declared this the age of ‘Post-semanticism,’ where, as he puts it, “writers write with the knowledge that they write nothing” (7). He maps post-semantic literature according to its ‘meaning stance,’ the attitude it takes to the experience of meaning both in the text and the greater world, dividing it into four rough categories: 1) Nostalgic Prosemanticism, which he describes as “a paean to a world that never was” (38); 2) Revisionary Prosemanticism, which attempts “to forge new meaning, via forms of quasi-Nietzschean affirmation, out of the sciences of the soul” (122); 3) Melancholy Antisemanticism, which “embraces the death of meaning as an irredeemable loss” (243); and 4) Neonihilism, which he sees as “the gleeful production of novel semantic illusions via the findings of cognitive neuroscience” (381).

Stevens ends Muse with his famous declaration of the ‘death of literature’:

“For the sum of human history, storytelling, or ‘literature,’ has framed our identity, ordered our lives, and graced our pursuits with the veneer of transcendence. It seemed to be the baseline, the very ‘sea-level’ of what it meant to be human. But now that science has drained the black waters, we can see we have been stranded on lonely peaks all along, and that the wholesome family of meaning was little more than an assemblage of unrelated strangers. We were as ignorant of literature as you are ignorant of the monstrous complexities concealed by these words. Until now, all literature was confabulation, lies that we believed. Until now, we could enthral one another in good conscience. At last we can see there was never any such thing as ‘literature,’ realize that it was of a piece with the trick of perspective we once called the soul” (498)

Algorithmic Sentimentalism: Freely disseminated computer-generated fiction based on the neuronarrative feedback work of Dr. Hilary Kohl, designed to maximize the possibilities of product placement while engendering the ‘mean peak narrative response,’ or story-telling pleasure. Following the work of neurolinguist Pavol Berman, whose ‘Whole Syntax Theory’ is credited with transforming linguistics into a properly natural science, Kohl developed the imaging techniques that allowed her to isolate what she called Subexperiential Narrative Grammar (SNG), and so, like Berman before her, provided narratology with its scientific basis. “Once we were able to isolate the relevant activation architecture, the grammar and its permutations became clear as a road map,” she explained in a 2035 OWN interview. “Then it was simply a matter of imaging people while they read the story-telling greats, and deriving the algorithms needed to generate new heart-touching and gut-wrenching novels.”

In 2033, she founded the publishing startup, Muse, releasing algorithmically produced novels for free and generating revenue through the sale of corporate product placements. Initial skepticism was swept away in 2034, when Imp, the story of a small boy surviving the tribulations of ‘social sanctioning’ in a Haitian public school, won the Pulitzer, the National Book Award, and was short-listed for the Man Booker. In 2040, Muse purchased Bertelsmann to become the largest publisher in the world.

In a recent Salon interview, Kohl claimed to have developed what she called Submorphic Adaptation Algorithms that could “eventually replace all literature, even the so-called avant-garde fringe.” In a rebuttal piece that appeared in the New York Times, she outraged academics by claiming “Shakespeare only seems deep because we can’t see past the skin of what is really going on, and what has been all along.”

Mundane Fantasy (aka, the ‘mundane fantastic’ or ‘nostalgic realism’ in academic circles): According to Stevens, the primary nostalgic prosemantic genre, the “vestigial remnant of what was once the monumental edifice of mainstream literary fiction” (39).

Technorealism: According to Stevens, the primary revisionary prosemantic genre, where the traditional narrative form remains as “something to be gamed and/or problematized” (Muse, 45) in the context of “imploding social realities” (46).

Neuroexperimentalism: Movement founded by Gregor Shin, which uses data-mining to isolate so-called ‘orthogonalities,’ a form of lexical and sentential ‘combinetrics’ that generate utterly novel semantic effects.

Impersonalism: A major literary school (commonly referred to as ‘It Lit’) centred around the work of Michel Grant (who famously claims to be the illegitimate son of the late Michel Houellebecq, even though DNA evidence has proved otherwise), which has divided into at least two distinct movements: Hard Impersonalism, where no intentional concepts are used whatsoever, and Soft Impersonalism, where only the so-called ‘Intentionalities of the Self’ are eschewed.

New Absurdism: A growing, melancholy antisemantic movement inspired by the writing of Tuck Gingrich, noted for what Stevens calls “its hysterical anti-realism.” Mira Gladwell calls it the ‘meta-meta’ – or ‘meta-squared’ – for the way it continually takes itself as its object of reference. In “One for One for One,” a small position piece published in The New Yorker, the famously reclusive Gingrich seems to argue (the text is notoriously opaque) that “meta-inclusionary satire” constitutes a form of communication that algorithmic generation can never properly duplicate. To date, neither Muse nor Penguin-Narratel has managed to disprove his claim. A related genre called Anthroplasticism has recently found an enthusiastic audience in literary science departments across eastern China and, ironically enough, the southern USA.

Extinctionism: The so-called ‘cryptic school,’ thought by many to consist of algorithmic artifacts because of both the volume and the anonymity of the texts available. However, Sheila Siddique, the acclaimed author of Without, has recently claimed connection to the school, stating that the anonymity of authorship is crucial to the authenticity of the genre, which eschews all notions of agency.

Writing After the Death of Meaning

by rsbakker

[Presented June 2nd, 2015, for the Posthuman Aesthetics Research Group at Aarhus University]

Abstract: For centuries now, science has been making the invisible visible, thus revolutionizing our understanding of and power over different traditional domains of knowledge. Fairly all the speculative phantoms have been exorcised from the world, ‘disenchanted,’ and now, at long last, the insatiable institution has begun making the human visible for what it is. Are we the last ancient delusion? Is the great, wheezing heap of humanism more an artifact of ignorance than insight? We have ample reason to think so, and as the cognitive sciences creep ever deeper into our biological convolutions, the ‘worst case scenario’ only looms darker on the horizon. To be a writer in this age is to stand astride this paradox, to trade in communicative modes at once anchored to our deepest notions of authenticity and in the process of being dismantled or, worse, simulated. If writing is a process of making visible, communicating some recognizable humanity, how does it proceed in an age where everything is illuminated and inhuman? All revolutions require experimentation, but all too often experimentation devolves into closed circuits of socially inert production and consumption. The present revolution, I will argue, requires cultural tools we do not yet possess (or know how to use), and a sensibility that existing cultural elites can only regard as anathema. Writing in the 21st century requires abandoning our speculative past, and seeing ‘literature’ as praxis in a time of unprecedented crisis, as ‘cultural triage.’ Most importantly, writing after the death of meaning means communicating to what we in fact are, and not to the innumerable conceits of obsolescent tradition.

So, we all recognize the revolutionary potential of technology and the science that makes it possible. This is just to say that we all expect science will radically remake those traditional domains that fall within its bailiwick. Likewise, we all appreciate that the human is just such a domain. We all realize that some kind of revolution is brewing…

The only real question is one of how radically the human will be remade. Here, everyone differs, and in quite predictable ways. No matter what position people take, however, they are saying something about the cognitive status of traditional humanistic thought. Science makes myth of traditional ontological claims, relegates them to the history of ideas. So all things being equal we should suppose that science will make myth of traditional ontological claims regarding the human as well. Declaring that traditional ontological claims regarding the human will not suffer the fate of other traditional ontological claims more generally amounts to declaring that all things are not equal when it comes to the human, that in this one domain at least, traditional modes of cognition actually tell us what is the case.

Let’s call this pole of argumentation humanistic exceptionalism. Any position that contends or assumes that science will not fundamentally revolutionize our understanding of the human supposes that something sets the human apart. Not surprisingly, given the underdetermined nature of the subject-matter, the institutionally entrenched nature of the humanities, and the human propensity to rationalize conceit and self-interests, the vast majority of theorists find themselves occupying this pole. There are, we now know, many, many ways to argue exceptionalism, and no way whatsoever to decisively arbitrate between any of them.

What all of them have in common, I think it’s fair to say, is the signature theoretical function they accord to meaning. Another feature they share is a common reliance on pejoratives to police the boundaries of their discourse. Any time you encounter the terms ‘scientism’ or ‘positivism’ or ‘reductionism’ deployed without any corresponding consideration of the case against traditional humanism, you are almost certainly reading an exceptionalist discourse. One of the great limitations of committing to status-quo underdetermined discourses, of course, is the infrequency with which adherents encounter the limits of their discourse, and thus run afoul the same fluency and ‘only game in town’ effects that render all dogmatic pieties self-perpetuating.

My artistic and philosophical project can be fairly summarized, I think, as a sustained critique of humanistic exceptionalism, an attempt to reveal these positions as the latest (and therefore most difficult to recognize) attempts to intellectually rationalize what are ultimately run-of-the-mill conceits, specious ways to set humanity—or select portions of it at least—apart from nature.

I occupy the lonely pole of argumentation, the one that says humans are not ontologically special in any way, and that accordingly, we should expect the scientific revolution of the human to be as profound as the scientific revolution of any other domain. My whole career is premised on arguing the worst case scenario, the future where humanity finds itself every bit as disenchanted—every bit as debunked—as the cosmos.

I understand why my pole of the debate is so lonely. One of the virtues of my position, I think anyway, lies in its ability to explain its own counter-intuitiveness.

Think about it. What does it mean to say meaning is dead? Surely this is metaphorical hyperbole, or worse yet, irresponsible alarmism. What could my own claims mean otherwise?

‘Meaning,’ on my account, will die two deaths, one theoretical or philosophical, the other practical or functional. Where the first death amounts to a profound cultural upheaval on a par with, say, Darwin’s theory of evolution, the second death amounts to a profound biological upheaval, a transformation of cognitive habitat more profound than any humanity has ever experienced.

‘Theoretical meaning’ simply refers to the endless theories of intentionality humanity has heaped on the question of the human. Pretty much the sum of traditional philosophical thought on the nature of humanity. And this form of meaning I think is pretty clearly dead. People forget that every single cognitive scientific discovery amounts to a feature of human nature that human nature is prone to neglect. We are, as a matter of empirical fact, fundamentally blind to what we are and what we do. Like traditional theoretical claims belonging to other domains, all traditional theoretical claims regarding the human neglect the information driving scientific interpretations. The question is one of what this naturally neglected information—or ‘NNI’—means.

The issue NNI poses for the traditional humanities is existential. If one grants that the sum of cognitive scientific discovery is relevant to all senses of the human, one could safely say the traditional humanities are already dwelling in a twilight of denial. The traditionalist’s strategy, of course, is to subdivide the domain, to adduce arguments and examples that seem to circumscribe the relevance of NNI. The problem with this strategy, however, is that it completely misconstrues the challenge that NNI poses. The traditional humanities, as cognitive disciplines, fall under the purview of the cognitive sciences. One can concede that various aspects of humanity need not account for NNI, yet still insist that all our theoretical cognition of those aspects does…

And quite obviously so.

The question, ‘To what degree should we trust ‘reflection upon experience’?’ is a scientific question. Just for example, what kind of metacognitive capacities would be required to abstract ‘conditions of possibility’ from experience? Likewise, what kind of metacognitive capacities would be required to generate veridical descriptions of phenomenal experience? Answers to these kinds of questions bear powerfully on the viability of traditional semantic modes of theorizing the human. On the worst case scenario, the answers to these and other related questions are going to systematically discredit all forms of ‘philosophical reflection’ that fail to take account of NNI.

NNI, in other words, means that philosophical meaning is dead.

‘Practical meaning’ refers to the everyday functionality of our intentional idioms, the ways we use terms like ‘means’ to solve a wide variety of practical, communicative problems. This form of meaning lives on, and will continue to do so, only with ever-diminishing degrees of efficacy. Our everyday intentional idioms function effortlessly and reliably in a wide variety of socio-communicative contexts despite systematically neglecting everything cognitive science has revealed. They provide solutions despite the scarcity of data.

They are heuristic, part of a cognitive system that relies on certain environmental invariants to solve what would otherwise be intractable problems. They possess adaptive ecologies. We quite simply could not cope if we were to rely on NNI, say, to navigate social environments. Luckily, we don’t have to, at least when it comes to a wide variety of social problems. So long as human brains possess the same structure and capacities, the brain can quite literally ignore the brain when solving problems involving other brains. It can leap to conclusions absent any natural information regarding what actually happens to be going on.

But, to riff on Uncle Ben, with great problem-solving economy comes great problem-making potential. Heuristics are ecological; they require that different environmental features remain invariant. Some insects, most famously moths, use ‘transverse orientation,’ flying at a fixed angle to the moon to navigate. Porch lights famously miscue this heuristic mechanism, causing the insect to chase the angle into the light. The transformation of environments, in other words, has cognitive consequences, depending on the kind of shortcut at issue. Heuristic efficiency means dynamic vulnerability.
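The rule is simple enough to simulate. Here is a minimal toy sketch in Python (invented numbers, not a model of any actual moth): the same fixed-angle rule that yields a straight flight path under a light at effectively infinite distance curls into a death spiral around a nearby porch light.

```python
import math

def fly(start, light, angle_deg, step=1.0, n_steps=300):
    """Transverse orientation: at each step, keep the light source at a
    fixed angle relative to the direction of travel, then step forward."""
    x, y = start
    path = [(x, y)]
    for _ in range(n_steps):
        bearing = math.atan2(light[1] - y, light[0] - x)  # direction to the light
        heading = bearing - math.radians(angle_deg)       # hold the fixed angle
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        path.append((x, y))
    return path

# The moon, effectively at infinity: the bearing never changes,
# so the rule produces a straight, navigable flight path.
moon_path = fly((0.0, 0.0), (1e12, 1e12), angle_deg=60)

# A porch light a hundred paces off: the bearing shifts with every step,
# and the very same rule spirals the moth in, until it circles the bulb
# at roughly a single wingbeat's distance.
porch_path = fly((0.0, 0.0), (50.0, 80.0), angle_deg=60)
print("final distance to porch light:",
      round(math.hypot(50.0 - porch_path[-1][0], 80.0 - porch_path[-1][1]), 2))
```

Note that the heuristic itself never changes; only the environment does. Efficiency and vulnerability are two sides of the same rule.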

And this means not only that heuristics can be short-circuited, but also that they can be hacked. Think of the once omnipresent ‘bug zapper.’ Or consider reed warblers, which provide one of the most dramatic examples of heuristic vulnerability nature has to offer. The system they use to recognize eggs and offspring is so low resolution (and therefore economical) that cuckoos regularly parasitize their nests, leaving what are, to human eyes, obviously oversized eggs and (brood-killing) chicks that the warbler dutifully nurses to adulthood.

All cognitive systems, insofar as they are bounded, possess what might be called a Crash Space describing all the possible ways they are prone to break down (as in the case of porch lights and moths), as well as an overlapping Cheat Space describing all the possible ways they can be exploited by competitors (as in the case of reed warblers and cuckoos, or moths and bug-zappers).
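The distinction can be put in the same toy terms. This is a sketch only, with an invented cue and threshold: a recognizer cheap enough to be useful is, by that very cheapness, open to supernormal stimuli.

```python
# A toy 'chick recognizer' in the spirit of the reed warbler example.
# The cue is one-dimensional and the threshold invented for illustration;
# real warbler cognition is vastly more complicated.
def looks_like_my_chick(stimulus):
    # Low-resolution heuristic: a wide-enough gape means 'feed it'.
    return stimulus["gape_size"] >= 1.0

warbler_chick = {"species": "reed warbler", "gape_size": 1.0}
cuckoo_chick = {"species": "cuckoo", "gape_size": 2.5}  # supernormal stimulus

for chick in (warbler_chick, cuckoo_chick):
    if looks_like_my_chick(chick):
        # The cuckoo gets fed too: Cheat Space is just Crash Space
        # exploited by a competitor.
        print("feeding:", chick["species"])
```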

The death of practical meaning simply refers to the growing incapacity of intentional idioms to reliably solve various social problems in radically transformed sociocognitive habitats. Even as we speak, our environments are becoming more ‘intelligent,’ more prone to cue intentional intuitions in circumstances that quite obviously do not warrant them. We will, very shortly, be surrounded by countless ‘pseudo-agents,’ systems devoted to hacking our behaviour—exploiting the Cheat Space corresponding to our heuristic limits—via NNI. Combined with intelligent technologies, NNI has transformed consumer hacking into a vast research programme. Our social environments are transforming, our native communicative habitat is being destroyed, stranding us with tools that will increasingly let us down.

Where NNI itself delegitimizes traditional theoretical accounts of meaning (by revealing the limits of reflection), it renders practical problem-solving via intentional idioms (practical meaning) progressively more ineffective by enabling the industrial exploitation of Cheat Space. Meaning is dead, both as a second-order research programme and, more alarmingly, as a first-order practical problem-solver. This—this is the world that the writer, the producer of meaning, now finds themselves writing in as well as writing to. What does it mean to produce ‘content’ in such a world? What does it mean to write after the death of meaning?

This is about as open as a question can be. It reveals just how radical this particular juncture in human thought is about to become. Everything is new here, folks. The slate is wiped clean.

[I used the following possibilities to organize the subsequent discussion]

Post-Posterity Writing

The Artist can no longer rely on posterity to redeem ingroup excesses. He or she must either reach out, or risk irrelevance and preposterous hypocrisy. Post-semantic writing is post-posterity writing, the production of narratives for the present rather than some indeterminate tomorrow.

High Dimensional Writing

The Artist can no longer pretend to be immaterial. Nor can they pretend to be something material magically interfacing with something immaterial. They need to see the apparent lack of dimensionality pertaining to all things ‘semantic’ as the product of cognitive incapacity, not ontological exceptionality. They need to understand that thoughts are made of meat. Cognition and communication are biological processes, open to empirical investigation and high dimensional explanations.

Cheat Space Writing

The Artist must exploit Cheat Spaces as much as reveal Cheat Spaces. NNI is not simply an industrial and commercial resource; it is also an aesthetic one.

Cultural Triage

The Artist must recognize that it is already too late, that the processes involved cannot be stopped, let alone reversed.  Extremism is the enemy here, the attempt to institute, either via coercive simplification (a la radical Islam, for instance) or via technical reduction (a la totalized surveillance, for instance), Orwellian forms of cognitive hygiene.

More Disney than Disney World: Semiotics as Theoretical Make-believe (II)

by rsbakker

III: The Gilded Stage

We are one species among 8.7 million, organisms embedded in environments that will select us the way they have our ancestors for 3.8 billion years running. Though we are (as a matter of empirical fact) continuous with our environments, the information driving our environmental behaviour is highly selective. The selectivity of our environmental sensitivities means that we are encapsulated, both in terms of the information available to our brain, and in terms of the information available for consciousness. Encapsulation simply follows from the finite, bounded nature of cognition. Human cognition is the product of ancestral human environments, a collection of good enough fixes for whatever problems those environments regularly posed. Given the biological cost of cognition, we should expect that our brains have evolved to derive as much information as possible from whatever signals are available, to continually jump to reproductively advantageous conclusions. We should expect to be insensitive to the vast majority of information in our environments, to neglect everything save information that had managed to get our ancestors born.

As it turns out, shrewd guesswork carried the cognitive day. The correlate of encapsulated information access, in other words, is heuristic cognitive processing, a tendency to always see more than there really is.

So consider the streetscape from above once again:

[Image: Southwest Orange-20150421-00452]

This looks like a streetscape only because the information provided generally cues the existence of hidden dimensions, which in this case simply do not exist. Since the cuing is always automatic and implicit, you just are looking down a street. Change your angle of access and the illusion of hidden dimensions—which is to say, reality—abruptly evaporates. The impossible New York skyline is revealed as counterfeit.

[Image: Southwest Orange-20150421-00453]

Let’s call a stage any environment that reliably cues the cognition of alternate environments. On this definition, a stage could be the apparatus of a trapdoor spider, say, or a nest parasitized by a cuckoo, or a painting, or an epic poem, or yes, Disney World—any environment that reliably triggers the cognition of some environment other than the environment actually confronting some organism.

As the inclusion of the spider and the cuckoo should suggest, a stage is a biological phenomenon, the result of some organism cognizing one environment as another environment. Stages, in other words, are not semantic. It is simply the case that beetles sensing environments absent spiders will blunder into trapdoor spiders. It’s simply the case that some birds, sensing chicks, will feed those chicks, even if one of them happens to be a cuckoo. It is simply the case that various organisms exploit the cognitive insensitivities of various other organisms. One need not ascribe anything so arcane as ‘false beliefs’ to birds and beetles to make sense of their exploitation. All they need do is function in a way typically cued by one family of (often happy) environments in a different (often disastrous) environment.

Stages are rife throughout the natural world simply because biological cognition is so expensive. All cognition can be exploited because all cognition is bounded, dependent on taking innumerable factors for granted. Probabilistic guesses have to be made always and everywhere, such are the exigencies of survival and reproduction. Competing species need only happen upon ways to trigger those guesses in environments reproductively advantageous to them, and selection will pace out a new niche, a position in what might be called manipulation space.

The difficulty with qualifying a stage as a biological phenomenon, however, is that I included intentional artifacts such as narratives, paintings, and amusement parks as examples of stages above. The problem with this is that no one knows how to reconcile the biological with the intentional, how to fit meaning into the machinery of life.

And yet, as easy as it is to anthropomorphize the cuckoo’s ‘treachery’ or the trapdoor spider’s ‘cunning’—to infuse our biological examples with meaning—it seems equally easy to ‘zombify’ narrative or painting or Disney World. Hearing the Iliad, for instance, is a prodigious example of staging, insofar as it involves the serial cognition of alternate environments via auditory cues embedded in an actual, but largely neglected, environment. One can easily look at the famed cave paintings of Chauvet, say, as a manipulation of visual cues that automatically triggers the cognition of absent things, in this case, horses:

[Image: Chauvet horses]

But if narrative and painting are stages so far as ‘cognizing alternate environments’ goes, the differences between things like the Iliad or Chauvet and things like trapdoor spiders and cuckoos are nothing less than astonishing. For one, the narrative and pictorial cuing of alternative environments is only partial; the ‘alternate environment’ is entertained as opposed to experienced. For another, the staging involved in the former is communicative, whereas the staging involved in the latter is not. Narratives and paintings mean things, they possess ‘symbolic significance,’ or ‘representational content,’ whereas the predatory and parasitic stages you find in the natural world do not. And since meaning resists biological explanation, this strongly suggests that communicative staging resists biological explanation.

But let’s press on, daring theorists that we are, and see how far our ‘zombie stage’ can take us. The fact is, the ‘manipulation space’ intrinsic to bounded cognition affords opportunities as well as threats. In the case of Chauvet, for instance, you can almost feel the wonder of those first artists discovering the relations between technique and visual effect, ways to trick the eye into seeing what was not there. Various patterns of visual information cue cognitive machinery adapted to solve environments absent those environments. Flat surfaces become windows.

Let’s divvy things up differently, look at cognition and metacognition in terms of multiple channels of information availability versus cognitive capacity. On this account, staging need not be complete: as with Chauvet, the cognition of alternate environments can be partial, localized within the present environment. And as with Chauvet, this embedded staging can be instrumentalized, exploited for various kinds of effects. Just how the cave paintings at Chauvet were used will always be a matter of archaeological speculation, but this in itself tells us something important about the kind of stage we’re now talking about: namely, their specificity. We share the same basic cognitive mechanisms as the original creators and consumers of the Horses, for instance, but we share nothing of their individual histories. This means the stage we step onto encountering them is bound to differ, perhaps radically, from the stage they stepped onto encountering them in the Upper Paleolithic. Since no individuals share precisely the same history, this means that all embedded stages are unique in some respect.

The potential evolutionary value of embedded stages, the kind of ‘cognitive double-vision’ peculiar to humans, seems relatively clear. If you can draw a horse you can show a fellow hunter what to look for, what direction to approach it, where to strike with a spear, how to carve the joints for efficient transportation, and so on. Embedding, in other words, allows organisms to communicate cognitive relationships to actual environments by cuing the cognition of that environment absent that environment. Embedding also allows organisms to communicate cognitive relationships to nonexistent environments as well. If you can draw a cave bear, you can just as easily deceive as teach a potential competitor. And lastly, embedding allows organisms to game their own cognitive systems. By experimenting with patterns of visual information, they can trigger a wide variety of different responses, triggering wonder, lust, fear, amusement, and so on. The cave paintings at Chauvet include what is perhaps the oldest example of pictorial ‘porn’ (in this case, a vulva formed by a bull overlapping a lion) for a reason.

[Image: Chauvet vulva]

Humans, you could say, are the staging animal, the animal capable of reorganizing and coordinating their cognitive comportments via the manipulation of available information into cues, those patterns prone to trigger various heuristic systems ‘out of school.’ Research into episodic memory reveals an intimate relation between the constructive (as opposed to veridical) nature of episodic memory and the ability to imagine future environments. Apparently the brain does not so much record events as it ransacks them, extracting information strategic to solving future environments. Nothing demonstrates the profound degree to which the brain is invested in strategic staging so much as the default or task-negative network. Whenever we find ourselves disengaged from some ongoing task, our brains, far from slowing down, switch modes and begin processing alternate, typically social, environments. We ‘daydream,’ or ‘ruminate,’ or ‘fantasize,’ activities almost as metabolically expensive as performing focussed tasks. The resting brain is a staging brain—a story-telling brain. It has literally evolved to cue and manipulate its own cognitive systems, to ‘entertain’ alternate environments, laying down priors in the absence of genuine experience to better manage surprise.

Language looms large over all this, of course, as the staging device par excellence. Language allows us to ‘paint a picture,’ or cue various cognitive systems, at any time. Via language, multiple humans can coordinate their behaviours to provide a single solution; they can engage their environments at ever more strategic joints, intervene in ways that reliably generate advantageous outcomes. Via language, environmental comportments can be compared, tested as embedded stages, which is to say, on the biological cheap. And the list goes on. The upshot is that language, like cave paintings, puts human cognition at the disposal of human cognition.

And—here’s the thing—while remaining utterly blind to the structure and dynamics of human cognition.

The reason for this is simple: the biological complexity required to cognize environments is simply too great to be cognized as environmental. We see the ash and pigment smeared across the stone, we experience (the illusion of) horses, and we have no access whatsoever to the machinery in between. Or to phrase it in zombie terms, humans access environmental information, ash and pigment, which cues cognitive comportments to different environmental information, horses, in the absence of any cognitive comportment to this process. In fact, all we see are horses, effortlessly and automatically; it actually requires effort to see the ash and pigment! The activated environment crowds the actual environment from the focus to the fringe. The machinery that makes all this possible doesn’t so much as dimple the margin. We neglect it. And accordingly, what inklings we have strike us as all there is.

The question of signification is as old as philosophy: how the hell do nonexistent horses leap from patterns of light or sound? Until recently, all attempts to answer this question relied on observations regarding environmental cues, the resulting experience, and the environment cued. The sign, the soul, and the signified anchored our every speculative analysis simply because, short of baffling instances of neuropathology, the machinery responsible never showed its hand.

Our cognitive comportment to signification, in other words, looked like:

[Image: Southwest Orange-20150421-00452]

Which is to say, a stage.

Because we’re quite literally ‘hardwired’ into this position, we have no way of intuiting the radically impoverished (because specialized) nature of the information made available. We cannot trudge on the perpendicular to see what the stage looks like from different angles—we cannot alter our existing cognitive comportments. Thus, what might be called the semiotic stage strikes us as the environment, or anything but a stage. So profound is the illusion that the typical indicators of informatic insufficiency, the inability to leverage systematically effective behaviour, the inability to command consensus, are habitually overlooked by everyone save the ‘folk’ (ironically enough). Sign, soul, and signified could only take us so far. Despite millennia of philosophical and psychological speculation, despite all the myriad regimentations of syntax and semantics, language remains a mystery. Controversy reigns—which is to say, we as yet lack any decisive scientific account of language.

But then science has only begun the long trudge on the perpendicular. The project of accessing and interpreting the vast amounts of information neglected by the semiotic stage is just getting underway.

Since all the various competing semiotic theories are based on functions posited absent any substantial reference to the information neglected, the temptation is to assume that those functions operate autonomously, somehow ‘supervene’ upon the higher dimensional story coming out of cognitive neuroscience. This has a number of happy dialectical consequences beyond simply proofing domains against cognitive scientific encroachments. Theoretical constraints can even be mapped backward, with the assumption that neuroscience will vindicate semiotic functions, or that semiotic functions actually help clarify neuroscience. Far from accepting any cognitive scientific constraints, they can assert that at least one of their multiple stabs in the dark pierces the mystery of language in the heart, and is thus implicitly presupposed in all communicative acts. Heady stuff.

Semiotics, in other words, would have you believe that either this

[Image: Southwest Orange-20150421-00452]

is New York City as we know it, and will be vindicated by the long cognitive neuroscientific trudge on the perpendicular, or that it’s a special kind of New York City, one possessing no perpendicular to trudge—not unlike, surprise-surprise, assumptions regarding the first-person or intentionality in general.

On this account, the functions posited are sometimes predictive, sometimes not, and even when they are predictive (as opposed to merely philosophical), they are clearly heuristic, low-dimensional ways of tracking extremely complicated systems. As such, there’s no reason to think them inexplicably—magically—‘autonomous,’ and good reason to suppose why it might seem that way. Sign, soul, and signified, the blinkered channels that have traditionally informed our understanding of language, appear inviolable precisely because they are blinkered—since we cognize via those channels, the limits of those channels cannot be cognized: the invisibility of the perpendicular becomes its impossibility.

These are precisely the kinds of errors we should expect speaking animals to make in the infancy of their linguistic self-understanding. You might even say that humans were doomed to run afoul ‘theoretical hyperrealities’ like semiotics, discursive Disney Worlds…

Except that in Disney World, of course, the stages are advertised as stages, not inescapable or fundamental environments. Aside from policy-level stuff, I have no idea how Disney World or the Disney corporation systematically contributes to the subversion of social justice, and neither, I would submit, does any semiotician living. But I do think I know how to fit Disney into a far larger, and far more disturbing set of trends that have seized society more generally. To see this, we have to leave semiotics behind…

More Disney than Disney World: Semiotics as Theoretical Make-believe

by rsbakker

[Image: Southwest Orange-20150415-00408]

I: SORCERERS OF THE MAGIC KINGDOM (a.k.a. THE SEMIOTICIAN)

Ask a humanities scholar their opinion of Disney and they will almost certainly give you some version of Louis Marin’s famous “degenerate utopia.”

And perhaps they should. Far from a harmless amusement park, Disney World is a vast commercial enterprise, one possessing, as all corporations must, a predatory market agenda. Disney also happens to be in the meaning business, selling numerous forms of access to their proprietary content, to their worlds. Disney (much like myself) is in the alternate reality game. Given their commercial imperatives, their alternate realities primarily appeal to children, who, branded at so young an age, continue to fetishize their products well into adulthood. This generational turnover, combined with the acquisition of more and more properties, assures Disney’s growing cultural dominance. And their messaging is obviously, even painfully, ideological, both escapist and socially conservative, designed to systematically neglect all forms of impersonal conflict.

I think we can all agree on this much. But the humanities scholar typically has something more in mind, a proclivity to interpret Disney and its constituents in semiotic terms, as a ‘veil of signs,’ a consciousness constructing apparatus designed to conceal and legitimize existing power inequities. For them, Disney is not simply apologetic as opposed to critical, it also plays the more sinister role of engendering and reinforcing hyperreality, the seamless integration of simulation and reality into disempowering perspectives on the world.

So as Baudrillard claims in Simulacra and Simulations:

The Disneyland imaginary is neither true nor false: it is a deterrence machine set up in order to rejuvenate in reverse the fiction of the real. Whence the debility, the infantile degeneration of this imaginary. It is meant to be an infantile world, in order to make us believe that the adults are elsewhere, in the ‘real’ world, and to conceal the fact that the real childishness is everywhere, particularly among those adults who go there to act the child in order to foster illusions of their real childishness.

Baudrillard sees the lesson as an associative one, a matter of training. The more we lard reality with our representations, Baudrillard believes, the greater the violence done. So for him the great sin of Disneyland lay not so much in reinforcing ideological derangements via simulation as in completing the illusion of an ideologically deranged world. It is the lie within the lie, he would have us believe, that makes the second lie so difficult to see through. The sin here is innocence, the kind of belief that falls out of cognitive incapacity. Why do kids believe in magic? Arguably, because they don’t know any better. By providing adults a venue for their children to believe, Disney has also provided them evidence of their own adulthood. Seeing through Disney’s simulations generates the sense of seeing through all illusions, and therefore, seeing the real.

Disney, in other words, facilitates ‘hyperreality’—a semiotic form of cognitive closure—by rendering consumers blind to their blindness. Disney, on the semiotic account, is an ideological neglect machine. Its primary social function is to provide cognitive anaesthesia to the masses, to keep them as docile and distracted as possible. Let’s call this the ‘Disney function,’ or Df. For humanities scholars, as a rule, Df amounts to the production of hyperreality, the politically pernicious conflation of simulation and reality.

In what follows, I hope to demonstrate what might seem a preposterous figure/field inversion. What I want to argue is that the semiotician has Df all wrong—Disney is actually a far more complicated beast—and that the production of hyperreality, if anything, belongs to his or her own interpretative practice. My claim, in other words, is that the ‘politically pernicious conflation of simulation and reality’ far better describes the social function of semiotics than it does Disney.

Semiotics, I want to suggest, has managed to gull intellectuals into actively alienating the very culture they would reform, leading to the degeneration of social criticism into various forms of moral entertainment, a way for jargon-defined ingroups to transform interpretative expertise into demonstrations of manifest moral superiority. Piety, in effect. Semiotics, the study of signs in life, allows the humanities scholar to sit in judgment not just of books, but of text,* which is to say, the entire world of meaning. It constitutes what might be called an ideological Disney World, only one that, unlike the real Disney World, cannot be distinguished from the real.

I know from experience the kind of incredulity these kinds of claims provoke from the semiotically minded. The illusion, as I know first-hand, is that complete. So let me invoke, for the benefit of those smirking down at these words, the same critical thinking mantra you train into your students, and remind you that all institutions are self-regarding, that all institutions cultivate congratulatory myths, and that the notion of some institution set apart, some specialized cabal possessing practices inoculated against the universal human assumption of moral superiority, is implausible through and through. Or at least worth suspicion.

You are almost certainly deluded in some respect. What follows merely illustrates how. Nothing magical protects you from running afoul your cognitive shortcomings the same as the rest of humanity. As such, it really could be the case that you are the more egregious sorcerer, and that your world-view is the real ‘magic kingdom.’ If this idea truly is as preposterous as it feels, then you should have little difficulty understanding it on its own terms, and dismantling it accordingly.

.

II: INVESTIGATING THE CRIME SCENE

Sign and signified, simulation and simulated, appearance and reality: these dichotomies provide the implicit conceptual keel for all ideologically motivated semiotic readings of culture. This instantly transforms Disney, a global industrial enterprise devoted to the production of alternate realities, into a paradigmatic case. The Walt Disney Corporation, as fairly every child in the world knows, is in the simulation business. Of course, this alone does not make Disney ‘bad.’ As an expert interpreter of signs and simulations, the semiotician has no problem with deviations from reality in general, only those deviations prone to facilitate particular vested interests. This is the sense in which the semiotic project is continuous with the Enlightenment project more generally. It presumes that knowledge sets us free. Semioticians hold that some appearances—typically those canonized as ‘art’—actually provide knowledge of the real, whereas other appearances serve only to obscure the real, and so disempower those who run afoul them.

The sin of the Walt Disney Corporation, then, isn’t that it sells simulations, it’s that it sells disempowering simulations. The problem that Disney poses the semiotician, however, is that it sells simulations as simulations, not simulations as reality. The problem, in other words, is that Disney complicates their foundational dichotomy, and in ways that are not immediately clear.

You see microcosms of this complication everywhere you go in Disney World, especially where construction or any other ‘illusion dispelling’ activities are involved. Sights such as this:

[Image: Southwest Orange-20150415-00412]

where pre-existing views are laminated across tarps meant to conceal some machination that Disney would rather not have you see, struck me as particularly bizarre. Who is being fooled here? My five-year-old even asked why they would bother painting trees rather than planting them. Who knows, I told her. Maybe they were planting trees. Maybe they were building trees such as this:

[Image: Southwest Orange-20150419-00433]

Everywhere you go you stumble across premeditated visual obstructions, or the famous, omnipresent gates labelled ‘CAST MEMBERS ONLY.’ Everywhere you go, in other words, you are confronted with obvious evidence of staging, or what might be called premeditated information environments. As any magician knows, the only way to astound the audience is to meticulously control the information they do and do not have available. So long as absolute control remains technically infeasible, they often fudge, relying on the audience’s desire to be astounded to grease the wheels of their machinations.

One finds Disney’s commitment to the staging credo tacked here and there across the very walls raised to enforce it:

[Image: Southwest Orange-20150422-00458]

Walt Disney was committed to the notion of environmental immersion, with the construction of ‘stages’ that were good enough, given various technical and economic limitations, to kindle wonder in children and generosity in their parents. Almost nobody is fooled outright, least of all the children. But most everyone is fooled enough. And this is the only thing that matters, when any showman tallies their receipts at the end of the day: staging sufficiency, not perfection. The visibility of artifice will be forgiven, even revelled in, so long as the trick manages to carry the day…

No one knows this better than the cartoonist.

The ‘Disney imaginary,’ as Baudrillard calls it, is first and foremost a money-making machine. For parents of limited means, the mechanical regularity with which Disney has you reaching for your wallet is proof positive that you are plugged into some kind of vast economic machine. And making money, it turns out, doesn’t require believing, it requires believing enough—which is to say, make-believe. Disney World can revel in its artificiality because artificiality, far from threatening the primary function of the system, actually facilitates it. Children want cartoons; they genuinely prefer low-dimensional distortions of reality over reality. Disney is where cartoons become flesh and blood, where high-dimensional replicas of low-dimensional constructs are staged as the higher-dimensional truth of those constructs. You stand in line to have your picture taken with a phoney Tinkerbell that you say is real to play this extraordinary game of make-believe with your children.

To the extent that make-believe is celebrated, the illusion is celebrated as benign deception. You walk into streets like this:

[Image: Southwest Orange-20150421-00452]

that become this:

[Image: Southwest Orange-20150421-00453]

as you trudge from the perpendicular. The staged nature of the stage is itself staged within the stage as something staged. This is the structure of the Indiana Jones Stunt Spectacular, for instance, where the audience is actually transformed into a performer on a stage staged as a stage (a movie shoot). At every turn, in fact, families are confronted with this continual underdetermination of the boundaries between ‘real’ and not ‘real.’ We watched a cartoon Crush (the surfer turtle from Finding Nemo) do an audience interaction comedy routine (we nearly pissed ourselves). We had a bug jump out of the screen and spray us with acid (water) beneath that big ass tree above (we laughed and screamed). We were skunked twice. The list goes on and on.

All these ‘attractions’ both celebrate and exploit the narrative instinct to believe, the willingness to overlook all the discrepancies between the fantastic and the real. No one is drugged and plugged into the Disney Matrix against their will; people pay, people who generally make far less than tenured academics, to play make-believe with their children.

So what are we to make of this peculiar articulation of simulations and realities? What does it tell us about Df?

The semiotic pessimist, like Baudrillard, would say that Disney is subverting your ability to reliably distinguish the real from the not real, rendering you a willing consumer of a fictional reality filled with fictional wars. Umberto Eco, on the other hand, suggests the problem is one of conditioning consumer desire. By celebrating the unreality of the real, Disney is telling “us that faked nature corresponds much more to our daydream demands” (Travels in Hyperreality, 44). Disney, on his account, whets the wrong appetite. For both, Disney is both instrumental to and symptomatic of our ideological captivity.

The optimist, on the other hand, would say they’re illuminating the contingency of the real (a.k.a. the ‘power of imagination’), training the young to never quite believe their eyes. On this view, Disney is both instrumental to and symptomatic of our semantic creativity (even as it ruthlessly polices its own intellectual properties). According to the apocryphal quote often attributed to Walt Disney, “If you can dream it, you can do it.”

This is the interpretative antinomy that hounds all semiotic readings of the ‘Disney function.’ The problem, put simply, is that interpretations falling out of the semiotic focus on sign and signified, simulation and simulated, cannot decisively resolve whether self-conscious simulation a la Disney serves, in balance, more to subvert or to conserve prevailing social inequities.

All such high altitude interpretation of social phenomena is bound to be underdetermined, of course, simply because the systems involved are far, far too complicated. Ironically, the theorist has to make do with cartoons, which is to say skewed idealizations of the phenomena involved, and simply hope that something of the offending dynamic shines through. But what I would like to suggest is that semiotic cartoons are particularly problematic in this regard, particularly apt to systematically distort the phenomena they claim to explicate, while—quite unlike Disney’s representations—concealing their cartoonishness.

To understand how and why this is the case, we need to consider the kinds of information the ‘semiotic stage’ is prone to neglect…


Updated Updates…

by rsbakker

My Posthuman Aesthetics Research Group talk has been pushed back to June 2nd. I blame it on administrative dyslexia and bad feet, which is to say… me. So, apologies all, and a heartfelt thanks to Johannes Poulsen and comrades for hitting the reset button.

Updates…

by rsbakker

Regarding the vanishing American e-books, my agent tells me that Overlook has recently switched distributors, and that the kerfuffle will be sorted out shortly. If you decide to pass this along, please take the opportunity to shame those who illegally download. I’m hanging on by my fingernails, here, and yet the majority of hits I get whenever I do my weekly vanity Google are for links to illegal downloads of my books. I increasingly meet fools who seem to think they’re ‘sticking it to the man’ by illegally downloading, when in fact, what they’re doing is driving commercially borderline artists–that is, those artists dedicated to sticking it to the man–to the food bank.

As for pub dates, still no word from either Overlook (who will also be handling the Canadian edition) or Orbit. Sorry guys.

Also, I’ll be in Denmark to give a seminar entitled “Writing After the Death of Meaning” for the Posthuman Aesthetics Research Group (a seriously cool handle!) at Aarhus University on the thirteenth of this month. I realized writing this that I had simply assumed it wasn’t open to the public, but when I reviewed my correspondence, I couldn’t discover any reason for assuming this short of its billing as a ‘seminar.’ I’ve emailed my host asking for clarification, just in case any of you happen to be twiddling your thumbs in Denmark next Wednesday.

Le Cirque de le Fou

by rsbakker

[Image: Crusty]

There’s nothing better than a blog to confront you with the urge to police appearances. Given the focus on hypocrisy at Three Pound Brain, I restrict myself to blocking only those comments that seem engineered to provoke fear. But as a commenter on other blogs, I’ve had numerous comments barred on the basis of what was pretty clearly their argumentative merit. I remember on Requires Only That You Hate, I asked Benjanun Sriduangkaew what criteria she used to distinguish spurious charges of misogyny from serious ones, a comment that never saw the light of day. I’ve also seen questions I had answered rewritten in a way that made my answers look ridiculous. I’ve even had the experience of entire debates suddenly vanishing into the aether!

Clowns don’t like having their make-up pointed out to them–at least not by a clown as big as me! This seems to be particularly the case among those invested in the academic humanities: theirs are the forums least inclined to let my questions past moderation.

This, combined with the problems arising from the vicissitudes of the web, convinced me long ago to keep Word documents as a record I could return to if needed.

So, for your benefit and mine, here’s a transcript of how the comment thread to Shaun Duke’s response to “Hugos Weaving” (which proved to be a record-breaking post) should read:

.

BAKKER: So you agree that genre both reaches out and connects. But you trust that ‘literature’ does as well, even though you have no evidence of this. Like Beale, you have a pretty optimistic impression of yourself and your impact and the institutions you identify with. You find the bureaucracies problematic (like Beale), but you have no doubt the value system is sound (again like Beale). You accost your audiences with a wide variety of interpretative tactics (like Beale), and even though they all serve your personal political agenda (again, like Beale), you think that diversity counts for something (again, like Beale). You think your own pedagogic activity in no way contributes to your society’s social ills (like Beale), that you are doing your bit to make the world a better place (again, like Beale).

So what is the difference between you and Beale? Pragmatically, at least, you both look quite similar. What makes the ‘critical thinking’ you teach truly critical, as opposed to his faux critical thinking? Where and how does your institution criticize and revise its own values? Does it take care to hire genuine critics such as myself, or does it write them off (the way all institutions do) as outgroup bozos, as one of ‘them’?

More importantly, what science do you and your colleagues use to back up your account of ‘critical thinking’? Or are you all just winging it?

Your department doesn’t sound much different from mine, 20 years back, except that genre is perhaps accorded a more prominent role (you have to get those butts in seats, now, for funding). The only difference I can see is that you genuinely believe in it, take genuine pride in belonging to such a distinguished and enlightened order… the way any ingroup soldier should. But if you and your institution are so successful, how do you explain the phenomenon of conservative creep? Even conservative commentators are astounded at how the Great Recession actually seems to have served right-wing interests.

.

DUKE: This is the point where we part company. I am happy to have a discussion with you about my perspectives on academia, even if you disagree. I’m even happy to defend what I do and its value. But I will not participate in a discussion with someone who makes a disingenuous (and fallacious) comparison between myself and someone like Beale. The comparison, however rhetorical, is offensive and, frankly, unnecessarily rude.

Have a good day.

.

[excised]

BAKKER: Perfect! This is what the science shows us: ‘critical’ almost always means ‘critical of the other.’ Researchers have found this dynamic in babies, believe it or not. We can call ourselves ‘critical thinkers,’ but really this is just cover for using the exact same socio-cognitive toolbox as those we impugn. Group identification, as you’ve shown us once again, is primary among those tools. By pointing out the parallels between you and Beale, I identified you with him, and this triggers some very basic intuitions, those tasked with policing group boundaries and individual identities. You feel ‘disgusted,’ or ‘indignant.’

Again, like Beale.

Don’t you see, Shaun? The point isn’t to bait or troll you. The point is to show you the universality of the moral cognitive mechanisms at work in all such confrontations between groups of humans. Beale isn’t some odious, alien invader, he is our most tragic, lamentable SELF. Bigotry is a bullet we can only dodge by BITING. Of course you’re a bigot, as am I. Of course you write off others, other views, without understanding them in the least. Of course you essentialize, naturalize. Of course you spend your days passing judgement for the entertainment of others and yourself. Of course you are anything but a ‘critical thinker.’

You’re human. Nothing magical distinguishes you from Beale.

.

Shaun does not want to be an ingroup clown. No one reading this wants to be an ingroup clown. It is troubling, to say the least, that the role deliberative cognition plays in moral problem-solving is almost entirely strategic. But it is a fact, one that explains the endless mire surrounding ethical issues. Pretending will not make it otherwise.

If Shaun knew anything scientific about critical thinking, he would have recognized what he was doing; he would have acknowledged the numerous ways groupishness necessarily drives his discourse. But he doesn’t. Since teaching critical thinking stands high among his group’s mythic values, interlocutors such as myself put him in a jam. If he doesn’t actually know anything about critical thinking, then odds are he’s simply in the indoctrination business (just as his outgroup competitors claim). The longer he engages someone just as clownish, but a little more in the scientific know, the more apparent this becomes. The easiest way to prevent contradiction is to shut down contrary voices. The best way to shut down contrary voices is to claim moral indignation.

Demonizing Beale is the easy road. The uncritical, self-congratulatory one. You kick him off your porch, tell him to throw his own party. Then you spend the afternoon laughing him off with your friends, those little orgies of pious self-congratulation that we all know so well. You smile, teeth gleaming, convinced that justice has been done and the party saved. Meanwhile the bass booms ever louder across the street. More and more cars line up.

But that’s okay, because life is easier among good-looking friends who find you good-looking as well.

Hugos Weaving

by rsbakker

[Image: Red Skull]

So the whole idea behind Three Pound Brain, way back when, was to open a waystation between ‘incompatible empires,’ to create a forum where ingroup complacencies are called out and challenged, where our native tendency to believe flattering bullshit can be called to account. To this end, I instigated two very different blog wars, one against an extreme ‘right’ figure in the fantasy community, Theodore Beale, another against an extreme ‘left’ figure, Benjanun Sriduangkaew. All along the idea was to expose these individuals, to show, at least for those who cared to follow, how humans are judging machines, prone to rationalize even the most preposterous and odious conceits. Humans are hardwired to run afoul of pious delusion. The science is only becoming more definitive in this regard, I assure you. We are, each and every one of us, walking, talking yardsticks. Unfortunately, we also have a tendency to affix spearheads to our rules, to confuse our sense of exceptionality and entitlement with the depravity and criminality of others—and to make them suffer.

When it comes to moral reasoning, humans are incompetent clowns. And in an age where high-school students are reengineering bacteria for science fairs, this does not bode well for the future. We need to get over ourselves—and now. Blind moral certainty is no longer a luxury our species can afford.

Now we all watch the news. We all appreciate the perils of moral certainty in some sense, the need to be wary of those who believe too hard. We’ve all seen the ‘Mad Fanatic’ get his or her ‘just deserts’ in innumerable different forms. The problem, however, is that the Mad Fanatic is always the other guy, while we merely enjoy the ‘strength of our convictions.’ Short of clinical depression, at least, we’re always—magically you might say—the obvious ‘Hero.’

And, of course, this is a crock of shit. In study after study, experiment after experiment, researchers find that, outside special circumstances, moral argumentation and explanation are strategic—with us being none the wiser! (I highly recommend Joshua Greene’s Moral Tribes or Jonathan Haidt’s The Righteous Mind for a roundup of the research). It may feel like divine dispensation, but dollars to donuts it’s nothing more than confabulation. We are programmed to advance our interests as truth; we’d have no need of Judge Judy otherwise!

It is the most obvious invisible thing. But how do you show people this? How do you get humans to see themselves as the moral fool, as the one automatically—one might even say mechanically—prone to rationalize their own moral interests, unto madness in some cases? The strategy I employ in my fantasy novels is to implicate the reader, to tweak their moral pieties, and then to jam them the best I can. My fantasy novels are all about the perils of moral outrage, the tragedy of willing the suffering of others in the name of some moral verity, and yet I regularly receive hate mail from morally outraged readers who think I deserve to suffer—fear and shame, in most cases, but sometimes death—for having written whatever it is they think I’ve written.

The blog wars were a demonstration of a different sort. The idea, basically, was to show how the fascistic impulse, like fantasy, appeals to a variety of inborn cognitive conceits. Far from a historical anomaly, fascism is an expression of our common humanity. We are all fascists, in our way, allergic to complexity, suspicious of difference, willing to sacrifice strangers on the altar of self-serving abstractions. We all want to master our natural and social environments. Public school is filled with little Hitlers—and so is the web.

And this, I wanted to show, is the rub. Before the web, we either kept our self-aggrandizing, essentializing instincts to ourselves or risked exposing them to the contradiction of our neighbours. Now, search engines ensure that we never need run critical gauntlets absent ready-made rationalizations. Now we can indulge our cognitive shortcomings, endlessly justify our fears and hatreds and resentments. Now we can believe with the grain of our stone-age selves. The argumentative advantage of the fascist is not so different from the narrative advantage of the fantasist: fascism, like fantasy, cues cognitive heuristics that once proved invaluable to our ancestors. To varying degrees, our brains are prone to interpret the world through a fascistic lens. The web dispenses fascistic talking points and canards and ad hominems for free—whatever we need to keep our clown costumes intact, all the while thunderously declaring ourselves angels. Left. Right. It really doesn’t matter. Humans are bigots, prone to strip away complexity and nuance—the very things required to solve modern social problems—to better indulge our sense of moral superiority.

For me, Theodore Beale (aka Vox Day) and Benjanun Sriduangkaew (aka acrackedmoon) demonstrated a moral version of the Dunning-Kruger effect: the bigger the clown, the more inclined they are to think themselves angels. My strategy with Beale was simply to show the buffoonery that lay at the heart of his noxious set of views. And he eventually obliged, explaining why, despite the way his claims epitomize bias, he could nevertheless declare himself the winner of the magical belief lottery:

Oh, I don’t know. Out of nearly 7 billion people, I’m fortunate to be in the top 1% in the planet with regards to health, wealth, looks, brains, athleticism, and nationality. My wife is slender, beautiful, lovable, loyal, fertile, and funny. I meet good people who seem to enjoy my company everywhere I go.

He. Just. Is. Superior.

A king clown, you could say, lucky, by grace of God.

Benjanun Sriduangkaew, on the other hand, posed more of a challenge, since she was, when all was said and done, a troll in addition to a clown. In hindsight, however, I actually regard my blog war with her as the far more successful one, simply because she was so successful. My schtick, remember, is to show people how they are the Mad Fanatic in some measure, large or small. Even though Sriduangkaew’s tactics consisted of little more than name-calling, even though her condemnations were based on reading the first six pages of my first book, a very large number of ‘progressive’ individuals were only too happy to join in, and to viscerally demonstrate the way moral outrage cares nothing for reasons or casualties. What’s a false positive when traitors are in our midst? All that mattered was that I was one of them according to so-and-so. I would point out over and over how they were simply making my argument for me, demonstrating how moral groupthink deteriorates into punishing strangers and feeling self-righteous afterward. I would receive tens of thousands of hits on my posts, and fewer than a dozen clicks on the links I provided citing the relevant research. It was nothing short of phantasmagorical. I was, in some pathetic, cultural backwoods way, the target of a witch-hunt.

(The only thing I regret is that several of my friends became entangled, some jumping ship out of fear (sending me ‘please relent’ letters), others, like Peter Watts, for the sin of calling the insanity insanity.)

It’s worth noting in passing that some Three Pound Brain regulars actually tried to get Beale and Sriduangkaew together. Beale, after all, actually held the views she so viciously attributed to me, Morgan, and others. He was the real deal—openly racist and misogynistic—and his blog had more followers than all of her targets combined. Sriduangkaew, on the other hand, was about as close to Beale’s man-hating feminist caricature as any feminist could be. But… nothing. Like competing predators on the savannah, they circled on opposite sides of the herd, smelling one another, certainly, but never letting their gaze wander from their true prey. It was as if, despite the wildly divergent content of their views, they recognized they were the same.

So here we stand, a couple of years after the fray. Sriduangkaew, as it turns out, was every bit as troubled as she sounded, and caused others far, far more grief than she ever caused me. Beale, on the other hand, has been kind enough to demonstrate yet another one of my points with his recent attempt to suborn the Hugos. Stories of individuals gaming the Hugos are notorious, so in a sense the only thing that makes Beale’s gerrymandering remarkable is the extremity of his views. How? people want to know. How could someone so ridiculously bigoted come to possess any influence in our ‘enlightened’ day and age?

Here we come to the final, and perhaps most problematic moral clown in this sad and comedic tale: the Humanities Academic.

I’m guessing that a good number of you reading this credit some English professor with transforming you into a ‘critical thinker.’ Too bad there’s no such thing. This is what makes the Humanities Academic a particularly pernicious Mad Fanatic: they convince clowns—that is, humans like you and me—that we need not be clowns. They convince cohort after cohort of young, optimistic souls that buying into a different set of flattering conceits amounts to washing the make-up off, thereby transcending the untutored ‘masses’ (or what more honest generations called the rabble). And this is what makes their particular circus act so insidious: they frame assumptive moral superiority—ingroup elitism—as the result of hard-won openness, and then proceed to judge accordingly.

So consider what Philip Sandifer, “a PhD in English with no small amount of training in postmodernism,” thinks of Beale’s Hugo shenanigans:

To be frank, it means that traditional sci-fi/fantasy fandom does not have any legitimacy right now. Period. A community that can be this effectively controlled by someone who thinks black people are subhuman and who has called for acid attacks on feminists is not one whose awards have any sort of cultural validity. That sort of thing doesn’t happen to functional communities. And the fact that it has just happened to the oldest and most venerable award in the sci-fi/fantasy community makes it unambiguously clear that traditional sci-fi/fantasy fandom is not fit for purpose.

Simply put, this is past the point where phrases like “bad apples” can still be applied. As long as supporters of Theodore Beale hold sufficient influence in traditional fandom to have this sort of impact, traditional fandom is a fatally poisoned well. The fact that a majority of voices in fandom are disgusted by it doesn’t matter. The damage has already been done at the point where the list of nominees is 68% controlled by fascists.

The problem, Sandifer argues, is institutional. Beale’s antics demonstrate that the institution of fandom is all but dead. The implication is that the science fiction and fantasy community ought to be ashamed, that it needs to gird its loins, clean up its act.

Many of you, I’m sure, find Sandifer’s point almost painfully obvious. Perhaps you’re thinking those rumours about Bakker being a closet this or that must be true. I am just another clown, after all. But catch that moral reflex, if you can, because if you give in, you will be unable—as a matter of empirical fact—to consider the issue rationally.

There’s a far less clownish (ingroupish) way to look at this imbroglio.

Let’s say, for a moment, that readership is more important than ‘fandom’ by far. Let’s say, for a moment, that the Hugos are no more or less meaningful than any other ingroup award, just another mechanism that a certain bunch of clowns uses to confer prestige on those members who best exemplify their self-regarding values—a poor man’s Oscars, say.

And let’s suppose that the real problem facing the arts community lies in the impact of technology on cultural and political groupishness, on the way the internet and preference-parsing algorithms continue to ratchet buyers and sellers into ever more intricately tuned relationships. Let’s suppose, just for instance, that so-called literary works no longer reach dissenting audiences, and so only serve to reinforce the values of readers…

That precious few of us are being challenged anymore—at least not by writing.

The communicative habitat of the human being is changing more radically than at any time in history, period. The old modes of literary dissemination are dead or dying, and with them all the simplistic assumptions of our literary past. If writing that matters is writing that challenges, the writing that matters most has to be writing that avoids the ‘preference funnel,’ writing that falls into the hands of those who can be outraged. The only writing that matters, in other words, is writing that manages to span significant ingroup boundaries.

If this is the case, then Beale has merely shown us that science fiction and fantasy actually matter, that as a writer, your voice can still reach people who can (and likely will) be offended… as well as swayed, unsettled, or any of the things Humanities clowns claim writing should do.

Think about it. Why bother writing stories with progressive values for progressives only unless, that is, moral entertainment is largely what you’re interested in? You gotta admit, this is pretty much the sum of what passes for ‘literary’ nowadays.

Everyone’s crooked is someone else’s straight—that’s the dilemma. Since all moral interpretations are fundamentally underdetermined, there is no rational or evidential means to compel moral consensus. Pretty much anything can be argued when it comes to questions of value. There will always be Beales and Sriduangkaews, individuals adept at rationalizing our bigotries—always. And guess what? The internet has made them as accessible as fucking Wal-Mart. This is what makes engaging them so important. Of course Beale needs to be exposed—but not for the benefit of people who already despise his values. Such ‘exposure’ amounts to nothing more than clapping one another on the back. He needs to be exposed in the eyes of his own constituents, actual or potential. The fact that the paths leading to bigotry run downhill makes the project of building stairs all the more crucial.

‘Legitimacy,’ Sandifer says. Legitimacy for whom? For the likeminded—who else? But that, my well-educated friend, is the sound-proofed legitimacy of the Booker, or the National Book Awards—which is to say, the legitimacy of the irrelevant, the socially inert. The last thing this accelerating world needs is more ingroup ejaculate. The fact that Beale managed to pull this little coup is proof positive that science fiction and fantasy matter, that we dwell in a rare corner of culture where the battle of ideas is for… fucking… real.

And you feel ashamed.
