Three Pound Brain

No bells, just whistling in the dark…


Killing Bartleby (Before It’s Too Late)

by rsbakker

Why did I not die at birth,

come forth from the womb and expire?

Why did the knees receive me?

Or why the breasts, that I should suck?

For then I should have lain down and been quiet;

I should have slept; then I should have been at rest,

with kings and counselors of the earth

who rebuilt ruins for themselves…

—Job 3:11-14 (RSV)

 

“Bartleby, the Scrivener: A Story of Wall-Street”: I made the mistake of rereading this little gem a few weeks back. Section I, below, retells the story with an eye to heuristic neglect. Section II leverages this retelling into a critique of readings, like those belonging to the philosophers Gilles Deleuze and Slavoj Zizek, that fall into the narrator’s trap of exceptionalizing Bartleby. If you happen to know anyone interested in Bartleby criticism, by all means encourage them to defend their ‘doctrine of assumptions.’

 

I

The story begins with the unnamed narrator identifying two ignorances, one social and the other personal. The first involves Bartleby’s profession, that “somewhat singular set of men, of whom as yet nothing that I know of has ever been written.” Human scriveners, like human computers, hail from a time when social complexities demanded the undertaking of mechanical cognitive labours, the discharge of tasks too procedural to rest easy in the human soul. Copies are all the ‘system’ requires of them, pure documentary repetition. It isn’t so much that their individuality does not matter, but that it matters too much, perturbing (‘blotting’) the function of the whole. So far as social machinery is legal machinery, you could say law-copyists belong to the neglected innards of mid-19th century society. Bartleby belongs to what might be called the caste of most invisible men.

What makes him worthy of literary visibility turns on a second manifestation of ignorance, this one belonging to the narrator. “What my own astonished eyes saw of Bartleby,” he tells us, “that is all I know of him, except, indeed, one vague report which will appear in the sequel.” And even though the narrator thinks this interpersonal inscrutability constitutes “an irreparable loss to literature,” it turns out to be the very fact upon which the literary obsession with “Bartleby, the Scrivener” hangs. Bartleby is so visible because he is the most hidden of the hidden men.

Since comprehending the dimensions of a black box buried within a black box is impossible, the narrator has no choice but to illuminate the latter, to provide an accounting of Bartleby’s ecology: “Ere introducing the scrivener, as he first appeared to me, it is fit I make some mention of myself, my employees, my business, my chambers, and general surroundings; because some such description is indispensable to an adequate understanding of the chief character about to be presented.” In a sense, Bartleby is nothing apart from his ultimately profound impact on this ecology, such is his mystery.

Aside from inklings of pettiness, the narrator’s primary attribute, we learn, is also invisibility, the degree to which he disappears into his social syntactic role. “I am one of those unambitious lawyers who never addresses a jury, or in any way draws down public applause; but in the cool tranquility of a snug retreat, do a snug business among rich men’s bonds and mortgages and title-deeds,” he tells us. “All who know me, consider me an eminently safe man.” He is, in other words, the part that does not break down, and so, like Heidegger’s famed hammer, never becomes something present to hand, an object of investigation in his own right.

His description of his two existing scriveners demonstrates that his ‘safety’ is to some extent rhetorical, consisting in his ability to explain away inconsistencies, real or imagined. Between Turkey’s afternoon drunkenness and Nippers’ foul morning temperament, you could say his office is perpetually compromised, but the narrator chooses to characterize it otherwise, in terms of each man mechanically cancelling out the incompetence of the other. “Their fits relieved each other like guards,” the narrator informs us, resulting in “a good natural arrangement under the circumstances.”

He depicts what might be called an economy of procedural and interpersonal reflexes, a deterministic ecology consisting of strictly legal or syntactic demands, all turning on the irrelevance of the discharging individual, the absence of ‘blots,’ and a stochastic ecology of sometimes conflicting personalities. Not only does he instinctively understand the insoluble nature of the latter, he also understands the importance of apology, the power of language to square those circles that refuse to be squared. When he comes “within an ace” of firing Turkey, the drunken scrivener need only bow and say what amounts to nothing to mollify his employer. As with bonds and mortgages and title-deeds, the content does not so much matter as does the syntax, the discharge of social procedure. Everyone in his office “up stairs at No.—Wall-street” is a misfit, and the narrator is a compulsive ‘fitter,’ forever searching for ways to rationalize, mythologize, and so normalize, the idiosyncrasies of his interpersonal circumstances.

And of course, he and his fellows are entombed by the walls of Wall Street, enjoying ‘unobstructed views’ of obstructions. Theirs is a subterranean ecology, every bit as “deficient in what landscape painters call ‘life’” as the labour that consumes them.

Enter Bartleby. “After a few words touching his qualifications,” the narrator informs us, “I engaged him, glad to have among my corps of copyists a man of so singularly sedate an aspect, which I thought might operate beneficially upon the flighty temper of Turkey, and the fiery one of Nippers.” Absent any superficial sign of idiosyncrasy, he seems the perfect ecological fit. The narrator gives the man a desk behind a screen in his own office, a corner possessing a window upon obstruction.

After three days, he calls out to Bartleby to examine the accuracy of a document, reflexively assuming the man would discharge the task without delay, only to hear Bartleby, obscure behind his green screen, say the fateful words that would confound, not only our narrator, but countless readers and critics for generations to come: “I would prefer not to.” The narrator is gobsmacked:

“I sat awhile in perfect silence, rallying my stunned faculties. Immediately it occurred to me that my ears had deceived me, or Bartleby had entirely misunderstood my meaning. I repeated my request in the clearest tone I could assume. But in quite as clear a one came the previous reply, “I would prefer not to.””

Given the “natural expectancy of instant compliance,” the narrator assumes the breakdown is communicative. When he realizes this isn’t the case, he confronts Bartleby directly, to the same effect:

“Not a wrinkle of agitation rippled him. Had there been the least uneasiness, anger, impatience or impertinence in his manner; in other words, had there been any thing ordinarily human about him, doubtless I should have violently dismissed him from the premises. But as it was, I should have as soon thought of turning my pale plaster-of-paris bust of Cicero out of doors.”

Realizing that he has been comprehended, the narrator assumes willful defiance, that Bartleby seeks to provoke him, and that, accordingly, the man will present the cues belonging to interpersonal power struggles more generally. When Bartleby manifests none of these signs, the hapless narrator lacks the social script he requires to solve the problem. Turning out the scrivener becomes as unthinkable as surrendering his bust of Cicero, which is to say, the very emblem of his legal vocation.

The next time Bartleby refuses to read, the narrator demands an explanation, asking, “Why do you refuse?” To which Bartleby replies, once again, “I would prefer not to.” When the narrator presses, resolved “to reason with him,” he realizes that dysrationalia is not the problem: “It seemed to me that while I had been addressing him, he carefully revolved every statement that I made; fully comprehended the meaning; could not gainsay the irresistible conclusions; but, at the same time, some paramount consideration prevailed with him to reply as he did.”

If Bartleby were non compos mentis, then he could be ‘medicalized,’ reduced to something the narrator would find intelligible—something providing some script for action. Instead, the scrivener understands, or manifests as much, leaving the narrator groping for evidence of his own rationality:

“It is not seldom the case that when a man is browbeaten in some unprecedented and violently unreasonable way, he begins to stagger in his own plainest faith. He begins, as it were, vaguely to surmise that, wonderful as it may be, all the justice and all the reason is on the other side. Accordingly, if any disinterested persons are present, he turns to them for some reinforcement for his own faltering mind.”

For a claim to be rational it must be rational to everyone. Each of us is stranded with our own perspective, and each of us possesses only the dimmest perspective on that perspective: rationality is something we can only assume. This is why ‘truth’ (especially in ‘normative’ matters (politics)) so often amounts to a ‘numbers game,’ a matter of tallying up guesses. Our blindness to our cognitive orientation—medial neglect—combined with the generativity of the human brain and the capriciousness of our environments, requires the communicative policing of cognitive idiosyncrasies. Whatever rationality consists in, minimally it functions to minimize discrepancies between individuals, sometimes vis a vis their environments and sometimes not. Reason, like the narrator, makes things fit.

The ‘disinterested persons’ the narrator turns to are themselves misfits, with “Nippers’ ugly mood on duty and Turkey’s off.” The irony here, and what critics are prone to find most interesting, is that the three are anything but disinterested. The more thought-provoking fact, however, lies in the way they agree with their employer despite the wild variance of their answers. For all the idiosyncrasies of its constituents, the office ecology automatically manages to conserve its ‘paramount consideration’: functionality.

Baffled unto inaction, the narrator suffers bouts of explaining away Bartleby’s discrepancies in terms of his material and moral utilities. The fact of his indulgences alternately congratulates and exasperates him: Bartleby becomes (and remains) a bi-stable sociocognitive figure, alternately aggressor and victim. “Nothing so aggravates an earnest person as a passive resistance,” the narrator explains. “If the individual so resisted be of a not inhumane temper, and the resisting one perfectly harmless in his passivity; then, in the better moods of the former, he will endeavor charitably to construe to his imagination what proves impossible to be solved by his judgment.” To be earnest is to be prone to minimize social discrepancies, to optimize via the integrations of others. The passivity of “I would prefer not to” poises Bartleby upon a predictive-processing threshold, one where the vicissitudes of mood are enough to transform him from a ‘penniless wight’ into a ‘brooding Marius’ and back again. The signals driving the charitable assessment are constantly interfering with the signals driving the uncharitable assessment, forcing the different neural hypotheses to alternate.

Via this dissonance, the scrivener begins to train him, with each “I would prefer not to” tending “to lessen the probability of [his] repeating the inadvertence.”

The ensuing narrative establishes two facts. First, we discover that Bartleby belongs to the office ecology, and in a manner more profound than even the narrator, let alone any one of his employees. Discovering Bartleby indisposed in his office on a Sunday, the narrator finds himself fleeing his own premises, alternately lost in “sad fancyings—chimeras, doubtless, of a sick and silly brain” and “[p]resentiments of strange discoveries”—strung between delusion and revelation.

Second, we learn that Bartleby, despite belonging to the office ecology, nevertheless signals its ruination:

“Somehow, of late I had got into the way of involuntarily using this word “prefer” upon all sorts of not exactly suitable occasions. And I trembled to think that my contact with the scrivener had already and seriously affected me in a mental way. And what further and deeper aberration might it not yet produce?”

When the narrator catches Turkey also saying “prefer,” he says, “So you have got the word too,” as if a verbal tic could be caught like a cold. Turkey manifests cryptomnesia. Nippers does the same mere moments afterward—every bit as unconsciously as Turkey. Knowing nothing of the way humans have evolved to unconsciously copy linguistic behaviour, the narrator construes Bartleby as a kind of contagion—or pollutant, a threat to his delicately balanced office ecology. He once again determines he must rid his office of the scrivener’s insidious influence, but, under that influence, once again allows prudence—or the appearance of such—to dissuade immediate action.

Bartleby at last refuses to copy, irrevocably undoing the foundation of the narrator’s ersatz rationalizations. “And what is the reason?” the narrator demands to know. Staring at the brick wall just beyond his window, Bartleby finally offers a different explanation: “Do you not see the reason for yourself.” Though syntactically structured as a question, this statement possesses no question mark in Melville’s original version (as it does, for instance, in the version anthologized by Norton). And indeed, the narrator misses the very reason implied by his own narrative—the wall that occupied so many of Bartleby’s reveries—and confabulates an apology instead: work-induced ‘impaired vision.’

But this rationalization, like all the others, is quickly exhausted. The internal logic of the office ecology is entirely dependent on the logic of Wall-street: the text continually references the functional exigencies commanding the ebb and flow of their lives, the way “necessities connected with my business tyrannized over all other considerations.” The narrator, when all is said and done, is an instrument of the Law and the countless institutions dependent upon it. At long last he fires Bartleby rather than merely resolving to do so.

He celebrates his long-deferred decisiveness while walking home, only to once again confront the blank wall the scrivener has become:

“My procedure seemed as sagacious as ever—but only in theory. How it would prove in practice—there was the rub. It was truly a beautiful thought to have assumed Bartleby’s departure; but, after all, that assumption was simply my own, and none of Bartleby’s. The great point was, not whether I had assumed that he would quit me, but whether he would prefer so to do. He was more a man of preferences than assumptions.”

And so, the great philosophical debate, both within the text and its critical reception, is set into motion. Lost in rumination, the narrator overhears someone say, “I’ll take odds he doesn’t,” on the street, and angrily retorts, assuming the man was referring to Bartleby, and not, as was actually the case, an upcoming election. Bartleby’s ‘passive resistance’ has so transformed his cognitive ecology as to crash his ability to make sense of his fellow man. Meaning, at least so far as it exists in his small pocket of the world, has lost its traditional stability.

Of course, the stranger’s voice, though speaking of a different matter altogether, had spoken true. Bartleby prefers not to leave the office that has become his home.

“What was to be done? or, if nothing could be done, was there any thing further that I could assume in the matter? Yes, as before I had prospectively assumed that Bartleby would depart, so now I might retrospectively assume that departed he was. In the legitimate carrying out of this assumption, I might enter my office in a great hurry, and pretending not to see Bartleby at all, walk straight against him as if he were air. Such a proceeding would in a singular degree have the appearance of a home-thrust. It was hardly possible that Bartleby could withstand such an application of the doctrine of assumptions.”

The ‘home-thrust,’ in other words, is to simply pretend, to physically enact the assumption of Bartleby’s absence, to not only ignore him, but to neglect him altogether, to the point of walking through him if need be. “But upon second thoughts the success of the plan seemed rather dubious,” the narrator realizes. “I resolved to argue the matter over with him again,” even though argument, Sellars’ famed ‘game of giving and asking for reasons,’ is something Bartleby prefers not to recognize.

When the application of reason fails once again, the narrator at last entertains the thought of killing Bartleby, realizing “the circumstance of being alone in a solitary office, up stairs, of a building entirely unhallowed by humanizing domestic associations” is one tailor-made for the commission of murder. Even acts of evil have their ecological preconditions. But rather than seize Bartleby, he ‘grapples and throws’ the murderous temptation, recalling the Christian injunction to love his neighbour. As research suggests, imagination correlates with indecision, the ability to entertain (theorize) possible outcomes: the narrator is nothing if not an inspired social confabulator. For every action-demanding malignancy he ponders, his aversion to confrontation occasions another reason for exemption, which is all he needs to reduce the discrepancies posed.

He resigns himself to the man:

“Gradually I slid into the persuasion that these troubles of mine touching the scrivener, had been all predestinated from eternity, and Bartleby was billeted upon me for some mysterious purpose of an all-wise Providence, which it was not for a mere mortal like me to fathom. Yes, Bartleby, stay there behind your screen, thought I; I shall persecute you no more; you are harmless and noiseless as any of these old chairs; in short, I never feel so private as when I know you are here. At last I see it, I feel it; I penetrate to the predestinated purpose of my life. I am content. Others may have loftier parts to enact; but my mission in this world, Bartleby, is to furnish you with office-room for such period as you may see fit to remain.”

But this story, for all its grandiosity, likewise melts before the recalcitrant scrivener. The comical notion that furnishing Bartleby an office could have cosmic significance merely furnishes a means of ignoring what cannot be ignored: how the man compromises, in ways crude and subtle, the systems of assumptions, the network of rational reflexes, comprising the ecology of Wall-street. In other words, the narrator’s clients are noticing…

“Then something severe, something unusual must be done. What! surely you will not have him collared by a constable, and commit his innocent pallor to the common jail? And upon what ground could you procure such a thing to be done?—a vagrant, is he? What! he a vagrant, a wanderer, who refuses to budge? It is because he will not be a vagrant, then, that you seek to count him as a vagrant. That is too absurd. No visible means of support: there I have him. Wrong again: for indubitably he does support himself, and that is the only unanswerable proof that any man can show of his possessing the means so to do.”

At last invisibility must be sacrificed, and regularity undone. The narrator ratchets through the facts of the scrivener’s cognitive bi-stability. An innocent criminal. An immovable vagrant. Unsupported yet standing. Reason itself cracks about him. And what reason cannot touch only fight or flight can undo. If the ecology cannot survive Bartleby, and Bartleby is immovable, then the ecology must be torn down and reestablished elsewhere.

It’s tempting to read this story in ‘buddy terms,’ to think that the peculiarities of Bartleby only possess the power they do given the peculiarities of the narrator. (One of the interesting things about the yarn is the way it both congratulates and insults the neuroticism of the critic, who, having canonized Bartleby, cannot but flatter themselves both by thinking they would have endured Bartleby the way the narrator does, and by thinking that surely they wouldn’t be so disabled by the man). The narrator’s decision to relocate allows us to see the universality of his type, how others possessing far less history with the scrivener are themselves driven to apologize, to exhaust all ‘quiet’ means of minimizing discrepancies. “[S]ome fears are entertained of a mob,” his old landlord warns him, desperate to purge the scrivener from No.—Wall-street.

Threatened with exposure in the papers—visibility—the narrator once again confronts Bartleby the scrivener. This time he comes bearing possibilities of gainful employment, greener pastures, some earnest, some sarcastic, only to be told, “I would prefer not to,” with the addition of, “I am not particular.” And indeed, as Bartleby’s preference severs ever more ecological connections, he seems to become ever more super-ecological, something outside the human communicative habitat. Repulsed yet again, the narrator flees Wall-street altogether.

Bartleby, meanwhile, is imprisoned in the Tombs, the name given to the House of Detention in lower Manhattan. A walled street is replaced by a walled yard—which, the narrator will tell Bartleby, “is not so sad a place as one might think,” the irony being, of course, that with sky and grass the Tombs actually represent an improvement over Wall-street. Bartleby, for his part, only has eyes for the walls—his unobstructed view of obstruction. To ensure his former scrivener is well fed, the narrator engages the prison cook, who asks him whether Bartleby is a forger, likening the man to Monroe Edwards, a famed slave trader and counterfeiter in Melville’s day. Despite the criminal connotations of Nippers, the narrator assures the man he was “never socially acquainted with any forgers.”

On his next visit, he discovers that Bartleby’s metaphoric ‘dead wall reveries’ have become literal. The narrator finds him “huddled at the base of the wall, his knees drawn up, and lying on his side, his head touching the cold stones,” dead of starvation. Cutting the last, most fundamental ecological reflex of all—the consumption of food—Bartleby has finally touched the face of obstruction… oblivion.

The story proper ends with one last misinterpretation: the cook assuming that Bartleby sleeps. And even here, at this final juncture, the narrator apologizes rather than corrects, quoting Job 3:14, using the Holy Bible, perhaps, to “mason up his remains in the wall.” Melville, however, seems to be gesturing to the fundamental problem underwriting the whole of his tale, the problem of meaning, quoting a fragment of Job who, in extremis, asks God why he should have been born at all if his lot was only desolation. What meaning resides in such a life? Why not die an innocent?

Like Bartleby.

What the narrator terms the “sequel” consists of no more than two paragraphs (set apart by a ‘wall’ of eight asterisks), the first divulging “one little item of rumor” which may or may not be more or less true, the second famously consisting in, “Ah Bartleby! Ah humanity!” The rumour occasioning these apostrophic cries suggests “that Bartleby had been a subordinate clerk in the Dead Letter Office at Washington, from which he had been suddenly removed by a change of administration.”

What moves the narrator to passions too complicated to scrutinize is nothing other than the ecology of such a prospect: “Conceive a man by nature and misfortune prone to a pallid hopelessness, can any business seem more fitted to heighten it than that of continually handling these dead letters, and assorting them for the flames?” Here at last, he thinks, we find some glimpse of the scrivener’s original habitat: dead letters potentially fund the reason the man forever pondered dead walls. Rather than a forger, one who cheats systems, Bartleby is an undertaker, one who presides over their crashing. The narrator paints his final rationalization, Bartleby mediating an ecology of fatal communicative interruptions:

“Sometimes from out the folded paper the pale clerk takes a ring:—the finger it was meant for, perhaps, moulders in the grave; a bank-note sent in swiftest charity:—he whom it would relieve, nor eats nor hungers any more; pardon for those who died despairing; hope for those who died unhoping; good tidings for those who died stifled by unrelieved calamities. On errands of life, these letters speed to death.”

An ecology, in other words, consisting of quotidian ecological failures, life lost for the interruption of some crucial material connection, be it ink or gold. Thus are Bartleby and humanity entangled in the failures falling out of neglect, the idiosyncratic, the addresses improperly copied, and the ill-timed, the words addressed to those already dead. A meta-ecology where discrepancies can never be healed, only consigned to oblivion.

But, of course, were Bartleby still living, this ‘sad fancying’ would likewise turn out to be a ‘chimera of a sick and silly brain.’ Just another way to brick over the questions. If the narrator finds consolation, the wreckage of his story remains.

 

II

I admit that I feel more like Ahab than Ishmael… most of the time. But I’m not so much obsessed by the White Whale as by what is obliterated when it’s revealed as yet another mere cetacean. Be it the wrecking of The Pequod, or the flight of the office at No.— Wall-street, the problem of meaning is my White Whale. “Bartleby, the Scrivener” is compelling, I think, to the degree it lends that problem the dimensionality of narrative.

Where in Moby-Dick, the relation between the inscrutable and the human is presented via Ishmael, which is to say the third person, in Bartleby, the relation is presented in the first person: the narrator is Ahab, every bit as obsessed with his own pale emblem of unaccountable discrepancy—every bit as maddened. The violence is merely sublimated in quotidian discursivity.

The labour of Ishmael falls to the critic. “Life is so short, and so ridiculous and irrational (from a certain point of view),” Melville writes to John C. Hoadley in 1877, “that one knows not what to make of it, unless—well, finish the sentence for yourself.” A great many critics have, spawning what Dan McCall termed (some time ago now) the ‘Bartleby Industry.’ There are so many interpretations, in fact, that the only determinate thing one can say regarding the text is that it systematically underdetermines every attempt to determine its ‘meaning.’

In the ecology of literary and philosophical critique, Bartleby remains a crucial watering hole in an ever-shrinking reservation of the humanities. A great number of these interpretations share the narrator’s founding assumption, that Bartleby—the character—represents something exceptional. Consider, for instance, Deleuze in “Bartleby; or, the Formula.”

“If Bartleby had refused, he could still be seen as a rebel or insurrectionary, and as such would still have a social role. But the formula stymies all speech acts, and at the same time, it makes Bartleby a pure outsider [exclu] to whom no social position can be attributed. This is what the attorney glimpses with dread: all his hopes of bringing Bartleby back to reason are dashed because they rest on a logic of presuppositions according to which an employer ‘expects’ to be obeyed, or a kind of friend listened to, whereas Bartleby has invented a new logic, a logic of preference, which is enough to undermine the presuppositions of language as a whole.” 73

Or consider Zizek, who uses Bartleby to conclude The Parallax View no less:

“In his refusal of the Master’s order, Bartleby does not negate the predicate; rather, he affirms a nonpredicate: he does not say that he doesn’t want to do it; he says that he prefers (wants) not to do it. This is how we pass from the politics of “resistance” or “protestation,” which parasitizes upon what it negates, to a politics which opens up a new space outside the hegemonic position and its negation.” 380-1

Bartleby begets ‘Bartleby politics,’ the possibility of a relation to what stands outside relationality, a “move from something to nothing, from the gap between two ‘somethings’ to the gap that separates a something from nothing, from the void of its own place” (381). Bartleby isn’t simply an outsider on this account, he’s a pure outsider, more limit than liminal. And this, of course, is the very assumption that the narrator himself carries away intact: that Bartleby constitutes something ontologically or logically exceptional.

I no longer share this assumption. Like Borges in his “Prologue to Herman Melville’s ‘Bartleby,’” I see that “the symbol of the whale is less apt for suggesting the universe is vicious than for suggesting its vastness, its inhumanity, its bestial or enigmatic stupidity.” Melville, for all the wide-eyed grandiloquence of his prose, was a squinty-eyed skeptic. “These men are all cracked right across the brow,” he would write of philosophers such as Emerson. “And never will the pullers-down be able to cope with the builders-up.” For him, the interest always lies in the distances between lofty discourse and the bloody mundanities it purports to solve. As he writes to Hawthorne in 1851:

“And perhaps after all, there is no secret. We incline to think that the Problem of the Universe is like the Freemason’s mighty secret, so terrible to all children. It turns out, at last, to consist in a triangle, a mallet, and an apron—nothing more! We incline to think that God cannot explain His own secrets, and that He would like a little more information upon certain points Himself. We mortals astonish Him as much as He us.”

It’s an all too human reflex. Ignorance becomes justification for the stories we want to tell, and we are filled with “oracular gibberish” as a result.

So what if Bartleby holds no secrets outside the ‘contagion of nihilism’ that Borges ascribes to him?

As a novelist, I cannot but read the tale, with its manifest despair and gallows humour, as the expression of another novelist teetering on the edge of professional ruin. Melville conceived and wrote “Bartleby, the Scrivener” during a dark period of his life. Both Moby-Dick and Pierre had proved to be critical and commercial failures. As Melville would write to Hawthorne:

“What I feel most moved to write, that is banned—it will not pay. Yet, altogether write the other way I cannot. So the product is a final hash, and all my books are botches.”

Forgeries, neither artistic nor official. Two species of neuroticism plague full-time writers, particularly if they possess, as Melville most certainly did, a reflective bent. There’s the neuroticism that drives a writer to write, the compulsion to create, and there’s the neuroticism secondary to a writer’s consciousness of this prior incapacity, the neurotic compulsion to rationalize one’s neuroticism.

Why, for instance, am I writing this now? Am I a literary critic? No. Am I being paid to write this? No. Are there things I should be writing instead? Buddy, you have no idea. So why don’t I write as I should?

Well, quite simply, I would prefer not to.

And why is this? Is it because I have some glorious spark in me? Some essential secret? Am I, like Bartleby, a pure outsider?

Or am I just a fucking idiot? A failed copyist.

For critics, the latter is pretty much the only answer possible when it comes to living writers who genuinely fail to copy. No matter how hard we wave discrepancy’s flag, we remain discrepancy minimization machines—particularly where social cognition is concerned. Living literary dissenters cue reflexes devoted to living threats: the only good discrepancy is a dead discrepancy. As the narrator discovers, attributing something exceptional becomes far easier once the dissenter is dead. Once the source falls silent, the consequences possess the freedom to dispute things as they please.

Writers themselves, however, discover they are divided, that Ahab is not Ahab, but Ishmael as well, the spinner of tales about tales. A failed copyist. A hapless lawyer. Gazing at obstruction, chasing the whale, spinning rationalization after rationalization, confabulating as a human must, taking meagre heart in spasms of critical fantasy.

Endless interpretative self-deception. As much as I recognize Bartleby, I know the narrator only too well. This is why for me, “Bartleby, the Scrivener” is best seen as a prank on the literary establishment, a virus uploaded with each and every Introduction to American Literature class, one assuring that the critic forever bumbles as the narrator bumbles, waddling the easy way, the expected way, embodying more than applying the ‘doctrine of assumptions.’ Bartleby is the paradigmatic idiot, both in the ancient Greek sense of idios, private unto inscrutable, and idiosyncratic unto useless. But for the sake of vanity and cowardice, we make of him something vast, more than a metaphor for x. The character of Bartleby, on this reading, is not so much key to understanding something ‘absolute’ as he is key to understanding human conceit—which is to say, the confabulatory stupidity of the critic.

But explaining the prank, of course, amounts to falling for the prank (this is the key to its power). No matter how mundane one’s interpretation of Bartleby, as an authorial double, as a literary prank, it remains simply one more interpretation, further evidence of the narrative’s profound indeterminacy. ‘Negative exceptionalists’ like Deleuze or Zizek (or Agamben) need only point out this fact to rescue their case—don’t they? Even if Melville conceived Bartleby as his neurotic alter-ego, the word-crazed husband whose unaccountable preferences had reduced his family to penury (and so, charity), he nonetheless happened upon “a zone of indetermination or indiscernibility in which neither words nor characters can be distinguished” (“Bartleby; or, the Formula,” 76).

No matter how high one stacks their mundane interpretations of Bartleby—as an authorial alter-ego, a psycho-sociological casualty, an exemplar of passive resistance, or so on—his rationality-crashing function remains every bit as profound, as exceptional. Doesn’t it? After all, nothing essential binds the distal intent of the author (itself nothing but another narrative) to the proximate effect of the text, which is to “send language itself into flight” (76). Once we set aside the biographical, psychological, historical, economic, political, and so on, does not this formal function remain? And is it not irreducible, exceptional?

That depends on whether you think a Necker Cube is exceptional. What should we say about Necker Cubes? Do they mark the point where the visibility of the visible collapses, generating ‘a zone of indetermination or indiscernibility in which neither indents nor protrusions can be distinguished’? Are they ‘pure figures,’ efficacies that stand outside the possibility of intelligible geometry? Or do they merely present the visual cortex with the demand to distinguish between indents and protrusions absent the information required to settle that demand, thus stranding visual experience upon the predictive threshold of both? Are they bi-stable images?

The first explanation pretty clearly mistakes a heuristic breakdown in the cognition of visual information for an exceptional visual object, something intrinsically indeterminate—something super-geometrical, in fact. When we encounter something visually indeterminate, we immediately blame our vision, which is to say, the invisible, enabling dimension of visual cognition. Visual discrepancies had real reproductive consequences, evolutionarily speaking. Thanks to medial neglect, we had no way of cognizing the ecological nature of vision, so we could only blink, peer, squint, rub our eyes, or change our position. If the discrepancy persisted, we wondered at it, and if we could, transformed it into something useful—be it cuing environmental forms on cave or cathedral walls (‘visual representations’) or cuing wonder with kaleidoscopes at Victorian exhibitions.
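(A minimal sketch, purely illustrative and nothing Melville or his critics propose: a toy Python model of bi-stable inference, with invented parameters such as adaptation_rate and noise. Two hypotheses receive perfectly ambiguous evidence; nothing ‘exceptional’ resides in the stimulus, yet a little noise plus the fatigue of whichever hypothesis currently dominates is enough to make the ‘percept’ flip.)

```python
# Toy sketch (assumed parameters, not a model from the text): two competing
# perceptual hypotheses for a Necker-cube-like stimulus get equally ambiguous
# evidence, so neither can win for long; random fluctuation plus adaptation of
# the currently dominant hypothesis makes the percept alternate.

import random

def bistable_percept(steps=60, adaptation_rate=0.02, noise=0.02, seed=1):
    random.seed(seed)
    evidence = {"indent": 0.5, "protrusion": 0.5}   # perfectly ambiguous input
    fatigue = {"indent": 0.0, "protrusion": 0.0}    # adaptation of each hypothesis
    history = []
    for _ in range(steps):
        # effective support = ambiguous evidence - accumulated fatigue + noise
        support = {h: evidence[h] - fatigue[h] + random.gauss(0, noise)
                   for h in evidence}
        winner = max(support, key=support.get)
        history.append(winner)
        # the dominant hypothesis fatigues; the suppressed one recovers
        for h in fatigue:
            fatigue[h] += adaptation_rate if h == winner else -adaptation_rate
            fatigue[h] = max(fatigue[h], 0.0)
    return history

if __name__ == "__main__":
    flips = bistable_percept()
    # prints an alternating sequence of "i" (indent) and "p" (protrusion) dominance
    print(" ".join(h[0] for h in flips))
```

The alternation, in other words, falls out of machinery forced to settle a question the input cannot settle—no super-geometrical object required.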

Likewise, Deleuze and Zizek (and many, many others) are mistaking a heuristic breakdown in the cognition of social information for an exceptional social entity, something intrinsically indeterminate—something super-social. Imagine encountering a Bartleby in your own place of employ. Imagine your employer not simply tolerating him, but enabling him, allowing him to drift ever deeper into anorexic catatonia. Initially, when we encounter something socially indeterminate in vivo, we typically blame communication—as does the narrator with Bartleby. Social discrepancies, one might imagine, had profound reproductive consequences (given that reproduction is itself social). The narrator’s sensitivity to such discrepancies is the sensitivity that all of us share. Given medial neglect, however, we have no way of cognizing the ecological nature of social cognition. So we check with our colleagues just to be sure (‘Am I losing my mind here?’), then we blame the breakdown in rational reflexes on the man himself. We gossip, test out this or that pet theory, pester spouses who, insensitive to potential micropolitical discrepancies, urge us to file a complaint with someone somewhere. Eventually, we either quit the place, get the poor sod some help, or transform him into something useful, like “Bartleby politics” or what have you. This is the prank that Melville lays out with the narrator—the prank that all post-modern appropriations of this tale trip into headlong…

The ecological nature of cognition entails the blindness of cognition to its ecological nature. We are distributed systems: we evolved to take as much of our environments for granted as we possibly could, accessing as little as possible to solve as many problems as possible. Experience and cognition turn on shallow information ecologies, blind systems turning on reliable (because reliably generated) environmental frequencies to solve problems—especially communicative problems. Absent the requisite systems and environments, these ecologies crash, resulting in the application of cognitive systems to situations they cannot hope to solve. Those who have dealt with addicted or mentally-ill loved ones know the profundity of these crashes first-hand, the way the unseen reflexes (‘preferences’) governing everyday interactions cast you into dismay and confusion time and again, all for want of applicability. There’s the face, the eyes, all the cues signaling them as them, and then… everything collapses into mealy alarm and confusion. Bartleby, with his dissenting preference, does precisely the same: Melville provides exquisite experiential descriptions of the dumbfounding characteristic of sociocognitive crashes.

Bartleby need not be a ‘pure outsider’ to do this. He just needs to provide enough information to demand disambiguation, but not enough information to provide it. “I would prefer not to”—Bartleby’s ‘formula,’ according to Deleuze—is anything but ‘minimal’: its performance functions the way it does because of the intricate communicative ecology it belongs to. But given medial neglect, our blindness to ecology, the formula is prone to strike us as something quite different, as something possessing no ecology.

It certainly strikes Deleuze as such:

“The formula is devastating because it eliminates the preferable just as mercilessly as any nonpreferred. It not only abolishes the term it refers to, and that it rejects, but also abolishes the other term it seemed to preserve, and that becomes impossible. In fact, it renders them indistinct: it hollows out an ever expanding zone of indiscernibility or indetermination between some nonpreferred activities and a preferable activity. All particularity, all reference is abolished.” 71

Since preferences affirm, ‘preferring not to’ (expressed in the subjunctive no less) can be read as an affirmative negation: it affirms the negation of the narrator’s request. Since nothing else is affirmed, there’s a peculiar sense in which ‘preferring not to’ possesses no reference whatsoever. Medial neglect assures that reflection on the formula occludes the enabling ecology, that asking what the formula does will result in fetishization, the attribution of efficacy in an explanatory vacuum. Suddenly ‘preferring not to’ appears to be a ‘semantic disintegration grenade,’ something essentially disruptive.

In point of natural fact, however, human sociocognition is fundamentally interactive, consisting in the synchronization of radically heuristic systems given only the most superficial information. Understanding one another is a radically interdependent affair. Bartleby presents all the information cuing social reliability, therefore consistently cuing predictions of reliability that turn out to be faulty. The narrator subsequently rummages through the various tools we possess to solve harmless acts of unreliability given medial neglect—tools which have no applicability in Bartleby’s case. Not only does Bartleby crash the network of predictive reflexes constituting the office ecology, he crashes the sociocognitive hacks that humans in general use to troubleshoot such breakdowns. He does so, not because of some arcane semantic power belonging to the ‘formula,’ but because he manifests as a sociocognitive Necker-Cube, cuing noncoercive troubleshooting routines that have no application given whatever his malfunction happens to be.

This is the profound human fact that Melville’s skeptical imagination fastened upon, as well as the reason Bartleby is ‘nothing in particular’: all human social cognition is fundamentally ecological. Consider, once again, the passage where the narrator entertains the possibility of neglecting Bartleby altogether, simply pretending he was absent:

“What was to be done? or, if nothing could be done, was there any thing further that I could assume in the matter? Yes, as before I had prospectively assumed that Bartleby would depart, so now I might retrospectively assume that departed he was. In the legitimate carrying out of this assumption, I might enter my office in a great hurry, and pretending not to see Bartleby at all, walk straight against him as if he were air. Such a proceeding would in a singular degree have the appearance of a home-thrust. It was hardly possible that Bartleby could withstand such an application of the doctrine of assumptions. But upon second thoughts the success of the plan seemed rather dubious. I resolved to argue the matter over with him again.”

Having reached the limits of sociocognitive application, he proposes simply ignoring any subsequent failure in prediction, in effect, wishing the Bartlebian crash space away. The problem, of course, is that it ‘takes two to tango’: he has no choice but to ‘argue the matter again’ because the ‘doctrine of assumptions’ is interactional, ecological. What Melville has fastened upon here is the way the astronomical complexity of the sociocognitive (and metacognitive) systems involved holds us hostage, in effect, to their interactional reliability. Meaning depends on maddening sociocognitive intricacies.

The entirety of the story illustrates the fragility of this cognitive ecosystem despite its all-consuming power. Time and again Bartleby is characterized as an ecological casualty of the industrialization of social relations, be it the mass disposal of undelivered letters or the mass reproduction of legally binding documentation. Like ‘computer,’ ‘copier’ names something that was once human but has since become technology. But even as Bartleby’s breakdown expresses the system’s power to break the maladapted, it also reveals its boggling vulnerability, the ease with which it evaporates into like-minded conspiracies and ‘mere pretend.’ So long as everyone plays along—functions reliably—this interdependence remains occluded, and the irrationality (the discrepancy-generating stupidity) of the whole never need be confronted.

In other words, the lesson of Bartleby can be profound, as profound as human communication and cognition itself, without implying anything exceptional. Stupidity, blind, obdurate obliviousness, is all that is required. A minister’s black veil, a bit of crepe poised upon the right interactional interface, can throw whole interpretative communities from their pins. The obstruction, the blank wall, need not conceal anything magical to crash the gossamer ecologies of human life. It need only appear to be a window, or more cunning still, a window upon a wall. We need only be blind to the interactional machinery of looking to hallucinate absolute horizons. Blind to the meat of life.

And in this sense, we can accuse the negative exceptionalists such as Deleuze and Zizek not simply of ignoring life, the very topos of literature, but of concealing the threat that the technologization of life poses to life. Only in an ecology can we understand the way victims can at once be assailants absent aporia, how Bartleby, overthrown by the technosocial ecologies of his age, can in turn overthrow that technosocial ecology. Only understanding life for what we know it to be—biological—allows us to see the profound threat the endless technological rationalization of human sociocognitive ecologies poses to the viability of those ecologies. For Bartleby, by revealing the ecological fragility of human social cognition, how break begets break, reveals the antithesis between ‘progress’ and ‘meaning,’ how the former can only carry the latter so far before crashing.

As Deleuze and Zizek have it, Bartleby holds open a space of essential resistance. As the reading here has it, Bartleby provides a grim warning regarding the ecological fragility of human social cognition. One can even look at him as a blueprint for the potential weaponization of anthropomorphic artificial intelligence, systems designed to strand individual decision-making upon thresholds, to command inaction via the strategic presentation of cues. Far from representing some messianic discrepancy, apophatic proof of transcendence, he represents the way we ourselves become cognitive pollutants when abandoned to polluted cognitive ecologies.


Enlightenment How? Omens of the Semantic Apocalypse

by rsbakker

“In those days the world teemed, the people multiplied, the world bellowed like a wild bull, and the great god was aroused by the clamor. Enlil heard the clamor and he said to the gods in council, “The uproar of mankind is intolerable and sleep is no longer possible by reason of the babel.” So the gods agreed to exterminate mankind.” –The Epic of Gilgamesh

We know that human cognition is largely heuristic, and as such dependent upon cognitive ecologies. We know that the technological transformation of those ecologies generates what Pinker calls ‘bugs,’ heuristic miscues due to deformations in ancestral correlative backgrounds. In ancestral times, our exposure to threat-cuing stimuli possessed a reliable relationship to actual threats. Not so now thanks to things like the nightly news, generating (via, Pinker suggests, the availability heuristic (42)) exaggerated estimations of threat.

The toll of scientific progress, in other words, is cognitive ecological degradation. So far that degradation has left the problem-solving capacities of intentional cognition largely intact: the very complexity of the systems requiring intentional cognition has hitherto rendered cognition largely impervious to scientific renovation. Throughout the course of revolutionizing our environments, we have remained a blind-spot, the last corner of nature where traditional speculation dares contradict the determinations of science.

This is changing.

We see animals in charcoal across cave walls so easily because our visual systems leap to conclusions on the basis of so little information. The problem is that ‘so little information’ also means so easily reproduced. The world is presently engaged in a mammoth industrial research program bent on hacking every cue-based cognitive reflex we possess. More and more, the systems we evolved to solve our fellow human travelers will be contending with artificial intelligences dedicated to commercial exploitation. ‘Deep information,’ meanwhile, is already swamping the legal system, even further problematizing the folk conceptual (shallow information) staples that ground the system’s self-understanding. Creeping medicalization continues unabated, slowly scaling back warrant for things like character judgment in countless different professional contexts.

Now that the sciences are colonizing the complexities of experience and cognition, we can see the first clear-cut omens of the semantic apocalypse.

 

Crash Space

He assiduously avoids the topic in Enlightenment Now, but in The Blank Slate, Pinker devotes several pages to deflating the arch-incompatibility between natural and intentional modes of cognition, the problem of free will:

“But how can we have both explanation, with its requirement of lawful causation, and responsibility, with its requirement of free choice? To have them both we don’t need to resolve the ancient and perhaps irresolvable antinomy between free will and determinism. We have only to think clearly about what we want the notion of responsibility to achieve.” 180

He admits there’s no getting past the ‘conflict of intuitions’ underwriting the debate. Since he doesn’t know what intentional and natural cognition amount to, he doesn’t understand their incompatibility, and so proposes we simply side-step the problem altogether by redefining ‘responsibility’ to mean what we need it to mean—the same kind of pragmatic redefinition proposed by Dennett. He then proceeds to adduce examples of ‘clear thinking’ by providing guesses regarding ‘holding responsible’ as deterrence, which is more scientifically tractable. “I don’t claim to have solved the problem of free will, only to show that we don’t need to solve it to preserve personal responsibility in the face of an increasing understanding of the causes of behaviour” (185).

Here we can see how profoundly Pinker (as opposed to Nietzsche and Adorno) misunderstands the profundity of Enlightenment disenchantment. The problem isn’t that one can’t cook up alternate definitions of ‘responsibility,’ the problem is that anyone can, endlessly. ‘Clear thinking’ is liable to serve Pinker as well as ‘clear and distinct ideas’ served Descartes, which is to say, as more grease for the speculative mill. No matter how compelling your particular instrumentalization of ‘responsibility’ seems, it remains every bit as theoretically underdetermined as any other formulation.

There’s a reason such exercises in pragmatic redefinition stall in the speculative ether. Intentional and mechanical cognitive systems are not optional components of human cognition, nor are the intuitions we are inclined to report. Moreover, as we saw in the previous post, intentional cognition generates reliable predictions of system behaviour absent access to the actual sources of that behaviour. Intentional cognition is source-insensitive. Natural cognition, on the other hand, is source-sensitive: it generates predictions of system behaviour via access to the actual sources of that behaviour.

Small wonder, then, that our folk intentional intuitions regularly find themselves scuttled by scientific explanation. ‘Free will,’ on this account, is ancestral lemonade, a way to make the best out of metacognitive lemons, namely, our blindness to the sources of our thought and decisions. To the degree it relies upon ancestrally available (shallow) saliencies, any causal (deep) account of those sources is bound to ‘crash’ our intuitions regarding free will. The free will debate that Pinker hopes to evade with speculation can be seen as a kind of crash space, the point where the availability of deep information generates incompatible causal intuitions and intentional intuitions.

The confusion here isn’t (as Pinker thinks) ‘merely conceptual’; it’s a bona fide, material consequence of the Enlightenment, a cognitive version of a visual illusion. Too much information of the wrong kind crashes our radically heuristic modes of cognizing decisions. Stipulating definitions, not surprisingly, solves nothing insofar as it papers over the underlying problem—this is why it merely adds to the literature. Responsibility-talk cues the application of intentional cognitive modes; it’s the incommensurability of these modes with causal cognition that’s the problem, not our lexicons.

 

Cognitive Information

Consider the laziness of certain children. Should teachers be allowed to hold students responsible for their academic performance? As the list of learning disabilities grows, incompetence becomes less a matter of ‘character’ and more a matter of ‘malfunction’ and providing compensatory environments. Given that all failures of competence redound on cognitive infelicities of some kind, and given that each and every one of these infelicities can and will be isolated and explained, should we ban character judgments altogether? Should we regard exhortations to ‘take responsibility’ as forms of subtle discrimination, given that executive functioning varies from student to student? Is treating children like (sacred) machinery the only ‘moral’ thing to do?

So far at least. Causal explanations of behaviour cue intentional exemptions: our ancestral thresholds for exempting behaviour from moral cognition served larger, ancestral social equilibria. Every etiological discovery cues that exemption in an evolutionarily unprecedented manner, resulting in what Dennett calls “creeping exculpation,” the gradual expansion of morally exempt behaviours. Once a learning impediment has been discovered, it ‘just is’ immoral to hold those afflicted responsible for their incompetence. (If you’re anything like me, simply expressing the problem in these terms rankles!) Our ancestors, resorting to systems adapted to resolving social problems given only the merest information, had no problem calling children lazy, stupid, or malicious. Were they being witlessly cruel doing so? Well, it certainly feels like it. Are we more enlightened, more moral, for recognizing the limits of that system, and curtailing the context of application? Well, it certainly feels like it. But then how do we justify our remaining moral cognitive applications? Should we avoid passing moral judgment on learners altogether? It’s beginning to feel like it. Is this itself moral?

This is theoretical crash space, plain and simple. Staking out an argumentative position in this space is entirely possible—but doing so merely exemplifies, as opposed to solves, the dilemma. We’re conscripting heuristic systems adapted to shallow cognitive ecologies to solve questions involving the impact of information they evolved to ignore. We can no more resolve our intuitions regarding these issues than we can stop Necker Cubes from spoofing visual cognition.

The point here isn’t that gerrymandered solutions aren’t possible, it’s that gerrymandered solutions are the only solutions possible. Pinker’s own ‘solution’ to the debate (see also, How the Mind Works, 54-55) can be seen as a symptom of the underlying intractability, the straits we find ourselves in. We can stipulate, enforce solutions that appease this or that interpretation of this or that displaced intuition: teachers who berate students for their laziness and stupidity are not long for their profession—at least not anymore. As etiologies of cognition continue to accumulate, as more and more deep information permeates our moral ecologies, the need to revise our stipulations, to engineer them to discharge this or that heuristic function, will continue to grow. Free will is not, as Pinker thinks, “an idealization of human beings that makes the ethics game playable” (HMW 55), it is (as Bruce Waller puts it) stubborn, a cognitive reflex belonging to a system of cognitive reflexes belonging to intentional cognition more generally. Foot-stomping does not change how those reflexes are cued in situ. The free-will crash space will continue to expand, no matter how stubbornly Pinker insists on this or that redefinition of this or that term.

We’re not talking about a fall from any ‘heuristic Eden,’ here, an ancestral ‘golden age’ where our instincts were perfectly aligned with our circumstances—the sheer granularity of moral cognition, not to mention the confabulatory nature of moral rationalization, suggests that it has always slogged through interpretative mire. What we’re talking about, rather, is the degree that moral cognition turns on neglecting certain kinds of natural information. Or conversely, the degree to which deep natural information regarding our cognitive capacities displaces and/or crashes once straightforward moral intuitions, like the laziness of certain children.

Or the need to punish murderers…

Two centuries ago, a murderer suffering from irregular sleep characterized by vocalizations and sometimes violent actions while dreaming would have been prosecuted to the full extent of the law. Now, however, such a murderer would be diagnosed as suffering an episode of ‘homicidal somnambulism,’ and could very likely go free. Mammalian brains do not fall asleep or awaken all at once. For some yet-to-be-determined reason, the brains of certain individuals (mostly men older than 50) suffer a form of partial arousal causing them to act out their dreams.

More and more, neuroscience is making an impact in American courtrooms. Nita Farahany (2016) found that between 2005 and 2012 the number of judicial opinions referencing neuroscientific evidence more than doubled. She also found a clear correlation between the use of such evidence and less punitive outcomes—especially when it came to sentencing. Observers in the burgeoning ‘neurolaw’ field think that, for better or worse, neuroscience is firmly entrenched in the criminal justice system, and bound to become ever more ubiquitous.

Not only are responsibility assessments being weakened as neuroscientific information accumulates, social risk assessments are being strengthened (Gkotsi and Gasser 2016). So-called ‘neuroprediction’ is beginning to revolutionize forensic psychology. Studies suggest that inmates with lower levels of anterior cingulate activity are approximately twice as likely to reoffend as those with relatively higher levels of activity (Aharoni et al 2013). Measurements of ‘early sensory gating’ (attentional filtering) predict the likelihood that individuals suffering addictions will abandon cognitive behavioural treatment programs (Steele et al 2014). Reduced gray matter volumes in the medial and temporal lobes identify youth prone to commit violent crimes (Cope et al 2014). ‘Enlightened’ metrics assessing recidivism risks already exist within disciplines such as forensic psychiatry, of course, but “the brain has the most proximal influence on behavior” (Gaudet et al 2016). Few scientific domains better illustrate the problems secondary to deep environmental information than the issue of recidivism. Given the high social cost of criminality, the ability to predict ‘at risk’ individuals before any crime is committed is sure to pay handsome preventative dividends. But what are we to make of justice systems that parole offenders possessing one set of ‘happy’ neurological factors early, while leaving others possessing an ‘unhappy’ set to serve out their entire sentence?

Nothing, I think, captures the crash of ancestral moral intuitions in modern, technological contexts quite so dramatically as forensic danger assessments. Consider, for instance, the way deep information in this context has the inverse effect of deep information in the classroom. Since punishment is indexed to responsibility, we generally presume those bearing less responsibility deserve less punishment. Here, however, it’s those bearing the least responsibility, those possessing ‘social learning disabilities,’ who ultimately serve the longest. The very deficits that mitigate responsibility before conviction actually aggravate punishment subsequent to conviction.

The problem is fundamentally cognitive, and not legal, in nature. As countless bureaucratic horrors make plain, procedural decision-making need not report as morally rational. We would be mad, on the one hand, to overlook any available etiology in our original assessment of responsibility. We would be mad, on the other hand, to overlook any available etiology in our subsequent determination of punishment. Ergo, less responsibility often means more punishment.

Crash.

The point, once again, is to describe the structure and dynamics of our collective sociocognitive dilemma in the age of deep environmental information, not to eulogize ancestral cognitive ecologies. The more we disenchant ourselves, the more evolutionarily unprecedented information we have available, the more problematic our folk determinations become. Demonstrating this point also reveals the futility of pragmatic redefinition: no matter how Pinker or Dennett (or anyone else) rationalizes a given, scientifically-informed definition of moral terms, it will provide no more than grist for speculative disputation. We can adopt any legal or scientific operationalization we want (see Parmigiani et al 2017); so long as responsibility talk cues moral cognitive determinations, however, we will find ourselves stranded with intuitions we cannot reconcile.

Considered in the context of politics and the ‘culture wars,’ the potentially disastrous consequences of these kinds of trends become clear. One need only think of the oxymoronic notion of ‘commonsense’ criminology, which amounts to imposing moral determinations geared to shallow cognitive ecologies upon criminal contexts now possessing numerous deep information attenuations. Those who, for whatever reason, escaped the education system with something resembling an ancestral ‘neglect structure’ intact, and who have no patience for pragmatic redefinitions or technical stipulations, will find appeals to folk intuitions every bit as convincing as did those presiding over the Salem witch trials in 1692. Those caught up in deep information environments, on the other hand, will be ever more inclined to see those intuitions as anachronistic, inhumane, immoral—unenlightened.

Given the relation between education and information access and processing capacity, we can expect that education will increasingly divide moral attitudes. Likewise, we should expect a growing sociocognitive disconnect between expert and non-expert moral determinations. And given cognitive technologies like the internet, we should expect this dysfunction to become more profound still.

 

Cognitive Technology

Given the power of technology to cue intergroup identifications, the internet was—and continues to be—hailed as a means of bringing humanity together, a way of enacting the universalistic aspirations of humanism. My own position—one foot in academe, another foot in consumer culture—afforded me a far different perspective. Unlike academics, genre writers rub shoulders with all walks, and often find themselves debating outrageously chauvinistic views. I realized quite quickly that the internet had rendered rationalizations instantly available, that it amounted to pouring marbles across the floor of ancestral social dynamics. The cost of confirmation had plummeted to zero. Prior to the internet, we had to test our more extreme chauvinisms against whoever happened to be available—which is to say, people who would be inclined to disagree. We had to work to indulge our stone-age weaknesses in post-war 20th century Western cognitive ecologies. No more. Add to this phenomena such as the online disinhibition effect, as well as the sudden visibility of ingroup intellectual piety, and the growing extremity of counter-identification struck me as inevitable. The internet was dividing us into teams. In such an age, I realized, the only socially redemptive art was art that cut against this tendency, art that genuinely spanned ingroup boundaries. Literature, as traditionally understood, had become a paradigmatic expression of the tribalism presently engulfing us. Epic fantasy, on the other hand, still possessed the relevance required to inspire book burnings in the West.

(The past decade has ‘rewarded’ my turn-of-the-millennium fears—though in some surprising ways. The greatest attitudinal shift in America, for instance, has been progressive: it has been liberals, and not conservatives, who have most radically changed their views. The rise of reactionary sentiment and populism is presently rewriting European politics—and the age of Trump has all but overthrown the progressive political agenda in the US. But the role of the internet and social media in these phenomena remains a hotly contested one.)

The earlier promoters of the internet had banked on the notional availability of intergroup information to ‘bring the world closer together,’ not realizing the heuristic reliance of human cognition on differential information access. Ancestrally, communicating ingroup reliability trumped communicating environmental accuracy, stranding us with what Pinker (following Kahan 2011) calls the ‘tragedy of the belief commons’ (Enlightenment Now, 358), the individual rationality of believing collectively irrational claims—such as, for instance, the belief that global warming is a liberal myth. Once falsehoods become entangled with identity claims, they become the yardstick of true and false, thus generating the terrifying spectacle we now witness on the evening news.

The provision of ancestrally unavailable social information is one thing, so long as it is curated—censored, in effect—as it was in the mass media age of my childhood. Confirmation biases have to swim upstream in such cognitive ecologies. Rendering all ancestrally unavailable social information available, on the other hand, allows us to indulge our biases, to see only what we want to see, to hear only what we want to hear. Where ancestrally, we had to risk criticism to secure praise, no such risks need be incurred now. And no surprise, we find ourselves sliding back into the tribalistic mire, arguing absurdities haunted—tainted—by the death of millions.

Jonathan Albright, the research director at the Tow Center for Digital Journalism at Columbia, has found that the ‘fake news’ phenomenon, as the product of a self-reinforcing technical ecosystem, has actually grown worse since the 2016 election. “Our technological and communication infrastructure, the ways we experience reality, the ways we get news, are literally disintegrating,” he recently confessed in a NiemanLab interview. “It’s the biggest problem ever, in my opinion, especially for American culture.” As Alexis Madrigal writes in The Atlantic, “the very roots of the electoral system—the news people see, the events they think happened, the information they digest—had been destabilized.”

The individual cost of fantasy continues to shrink, even as the collective cost of deception continues to grow. The ecologies once securing the reliability of our epistemic determinations, the invariants that our ancestors took for granted, are being levelled. Our ancestral world was one where seeking risked aversion, a world where praise and condemnation alike had to brave condemnation, where lazy judgments were punished rather than rewarded. Our ancestral world was one where geography and the scarcity of resources forced permissives and authoritarians to intermingle, compromise, and cooperate. That world is gone, leaving the old equilibria to unwind in confusion, a growing social crash space.

And this is only the beginning of the cognitive technological age. As Tristan Harris points out, social media platforms, given their commercial imperatives, cannot but engineer online ecologies designed to exploit the heuristic limits of human cognition. He writes:

“I learned to think this way when I was a magician. Magicians start by looking for blind spots, edges, vulnerabilities and limits of people’s perception, so they can influence what people do without them even realizing it. Once you know how to push people’s buttons, you can play them like a piano.”

More and more of what we encounter online is dedicated to various forms of exogenous attention capture, maximizing the time we spend on the platform, thereby maximizing our exposure not just to advertising, but to hidden metrics, algorithms designed to assess everything from our likes to our emotional well-being. As with instances of ‘forcing’ in the performance of magic tricks, the fact of manipulation escapes our attention altogether, so we always presume we could have done otherwise—we always presume ourselves ‘free’ (whatever this means). We exhibit what Clifford Nass, a pioneer in human-computer interaction, calls ‘mindlessness,’ the blind reliance on automatic scripts. To the degree that social media platforms profit from engaging your attention, they profit from hacking your ancestral cognitive vulnerabilities, exploiting our shared neglect structure. They profit, in other words, from transforming crash spaces into cheat spaces.

With AI, we are set to flood human cognitive ecologies with systems designed to actively game the heuristic nature of human social cognition, cuing automatic responses based on boggling amounts of data and the capacity to predict our decisions better than our intimates, and soon, better than we can ourselves. And yet, as the authors of the 2017 AI Index report state, “we are essentially ‘flying blind’ in our conversations and decision-making related to AI.” A blindness we’re largely blind to. Pinker spends ample time domesticating the bogeyman of superintelligent AI (296-298), but he completely neglects this far more immediate and retail dimension of our cognitive technological dilemma.

Consider the way humans endure one another as much as they need one another: the problem is that the cues signaling social punishment and reward are easy to trigger out of school. We’ve already crossed the bourne where ‘improving the user experience’ entails substituting artificial for natural social feedback. Noticed the plethora of nonthreatening female voices at all? The promise of AI is the promise of countless artificial friends, voices that will ‘understand’ your plight, your grievances, in some respects better than you do yourself. The problem, of course, is that they’re artificial, which is to say, not your friend at all.

Humans deceive and manipulate one another all the time, of course. And false AI friends don’t rule out true AI defenders. But the former merely describes the ancestral environments shaping our basic heuristic tool box. And the latter simply concedes the fundamental loss of those cognitive ecologies. The more prosthetics we enlist, the more we complicate our ecology, the more mediated our determinations become, the less efficacious our ancestral intuitions become. The more we will be told to trust to gerrymandered stipulations.

Corporate simulacra are set to deluge our homes, each bent on cuing trust. We’ve already seen how the hypersensitivity of intentional cognition renders us liable to hallucinate minds where none exist. The environmental ubiquity of AI amounts to the environmental ubiquity of systems designed to exploit granular sociocognitive systems tuned to solve humans. The AI revolution amounts to saturating human cognitive ecology with invasive species, billions of evolutionarily unprecedented systems, all of them camouflaged and carnivorous. It represents—obviously, I think—the single greatest cognitive ecological challenge we have ever faced.

What does ‘human flourishing’ mean in such cognitive ecologies? What can it mean? Pinker doesn’t know. Nobody does. He can only speculate in an age when the gobsmacking power of science has revealed his guesswork for what it is. This was why Adorno referred to the possibility of knowing the good as the ‘Messianic moment.’ Until that moment comes, until we find a form of rationality that doesn’t collapse into instrumentalism, we have only toothless guesses, allowing the pointless optimization of appetite to command all. It doesn’t matter whether you call it the will to power or identity thinking or negentropy or selfish genes or what have you, the process is blind and it lies entirely outside good and evil. We’re just along for the ride.

 

Semantic Apocalypse

Human cognition is not ontologically distinct. Like all biological systems, it possesses its own ecology, its own environmental conditions. And just as scientific progress has brought about the crash of countless ecosystems across this planet, it is poised to precipitate the crash of our shared cognitive ecology as well, the collapse of our ability to trust and believe, let alone to choose or take responsibility. Once every suboptimal behaviour has an etiology, what then? Once every one of us has artificial friends, heaping us with praise, priming our insecurities, doing everything they can to prevent non-commercial—ancestral—engagements, what then?

‘Semantic apocalypse’ is the dramatic term I coined to capture this process in my 2008 novel, Neuropath. Terminology aside, the crashing of ancestral (shallow information) cognitive ecologies is entirely of a piece with the Anthropocene, yet one more way that science and technology are disrupting the biology of our planet. This is a worst-case scenario, make no mistake. I’ll be damned if I see any way out of it.

Humans cognize themselves and one another via systems that take as much for granted as they possibly can. This is a fact. Given this, it is not only possible, but exceedingly probable, that we would find squaring our intuitive self-understanding with our scientific understanding impossible. Why should we evolve the extravagant capacity to intuit our nature beyond the demands of ancestral life? The shallow cognitive ecology arising out of those demands constitutes our baseline self-understanding, one that bears the imprimatur of evolutionary contingency at every turn. There’s no replacing this system short of replacing our humanity.

Thus the ‘worst’ in ‘worst case scenario.’

There will be a great deal of hand-wringing in the years to come. Numberless intentionalists with countless competing rationalizations will continue to apologize (and apologize) while the science trundles on, crashing this bit of traditional self-understanding and that, continually eroding the pilings supporting the whole. The pieties of humanism will be extolled and defended with increasing desperation, whole societies will scramble, while hidden behind the endless assertions of autonomy, beneath the thundering bleachers, our fundamentals will be laid bare and traded for lucre.

Visions of the Semantic Apocalypse: A Critical Review of Yuval Noah Harari’s Homo Deus

by rsbakker


“Studying history aims to loosen the grip of the past,” Yuval Noah Harari writes. “It enables us to turn our heads this way and that, and to begin to notice possibilities that our ancestors could not imagine, or didn’t want us to imagine” (59). Thus does the bestselling author of Sapiens: A Brief History of Humankind rationalize his thoroughly historical approach to the question of our technological future in his fascinating follow-up, Homo Deus: A Brief History of Tomorrow. And so does he identify himself as a humanist, committed to freeing us from what Kant would have called ‘our tutelary natures.’ Like Kant, Harari believes knowledge will set us free.

Although by the end of the book it becomes difficult to understand what ‘free’ might mean here.

As Harari himself admits, “once technology enables us to re-engineer human minds, Homo sapiens will disappear, human history will come to an end and a completely new process will begin, which people like you and me cannot comprehend” (46). Now if you’re interested in mapping the conceptual boundaries of comprehending the posthuman, I heartily recommend David Roden’s skeptical tour de force, Posthuman Life: Philosophy at the Edge of the Human. Homo Deus, on the other hand, is primarily a book chronicling the rise and fall of contemporary humanism against the backdrop of apparent ‘progress.’ The most glaring question, of course, is whether Harari’s academic humanism possesses the resources required to diagnose the problems posed by the collapse of popular humanism. This challenge—the problem of using obsolescent vocabularies to theorize, not only the obsolescence of those vocabularies, but the successor vocabularies to come—provides an instructive frame through which to understand the successes and failures of this ambitious and fascinating book.

How good is Homo Deus? Well, for years people have been asking me for a lay point of entry for the themes explored here on Three Pound Brain and in my novels, and I’ve always been at a loss. No longer. Anyone surfing for reviews of the book is certain to find individuals carping about Harari not possessing the expertise to comment on x or y, but these critics never get around to explaining how any human could master all the silos involved in such an issue (while remaining accessible to a general audience, no less). Such criticisms amount to advocating that no one dare interrogate what could be the greatest challenge to ever confront humanity. In addition to erudition, Harari has the courage to concede ugly possibilities, the sensitivity to grasp complexities (as well as the limits they pose), and the creativity to derive something communicable. Even though I think his residual humanism conceals the true profundity of the disaster awaiting us, he glimpses more than enough to alert millions of readers to the shape of the Semantic Apocalypse. People need to know human progress likely has a horizon, a limit, that doesn’t involve environmental catastrophe or creating some AI God.

The problem is far more insidious and retail than most yet realize.

The grand tale Harari tells is a vaguely Western Marxist one, wherein culture (following Lukacs) is seen as a primary enabler of relations of power, a fundamental component of the ‘social a priori.’ The primary narrative conceit of such approaches belongs to the ancient Greeks: “[T]he rise of humanism also contains the seeds of its downfall,” Harari writes. “While the attempt to upgrade humans into gods takes humanism to its logical conclusion, it simultaneously exposes humanism’s inherent flaws” (65). For all its power, humanism possesses intrinsic flaws, blindnesses and vulnerabilities, that will eventually lead it to ruin. In a sense, Harari is offering us a ‘big history’ version of negative dialectic, attempting to show how the internal logic of humanism runs afoul of the very power it enables.

But that logic is also the very logic animating Harari’s encyclopedic account. For all its syncretic innovations, Homo Deus uses the vocabularies of academic or theoretical humanism to chronicle the rise and fall of popular or practical humanism. In this sense, the difference between Harari’s approach to the problem of the future and my own could not be more pronounced. On my account, academic humanism, far from enjoying critical or analytical immunity, is best seen as a crumbling bastion of pre-scientific belief, the last gasp of traditional apologia, the cognitive enterprise most directly imperilled by the rising technological tide, while we can expect popular humanism to linger for some time to come (if not indefinitely).

Homo Deus, in fact, exemplifies the quandary presently confronting humanists such as Harari, how the ‘creeping delegitimization’ of their theoretical vocabularies is slowly robbing them of any credible discursive voice. Harari sees the problem, acknowledging that “[w]e won’t be able to grasp the full implication of novel technologies such as artificial intelligence if we don’t know what minds are” (107). But the fact remains that “science knows surprisingly little about minds and consciousness” (107). We presently have no consensus-commanding, natural account of thought and experience—in fact, we can’t even agree on how best to formulate semantic and phenomenal explananda.

Humanity as yet lacks any workable, thoroughly naturalistic, theory of meaning or experience. For Harari this means the bastion of academic humanism, though besieged, remains intact, at least enough for him to advance his visions of the future. Despite the perplexity and controversies occasioned by our traditional vocabularies, they remain the only game in town, the very foundation of countless cognitive activities. “[T]he whole edifice of modern politics and ethics is built upon subjective experiences,” Harari writes, “and few ethical dilemmas can be solved by referring strictly to brain activities” (116). Even though his posits lie nowhere in the natural world, they nevertheless remain subjective realities, the necessary condition of solving countless problems. “If any scientist wants to argue that subjective experiences are irrelevant,” Harari writes, “their challenge is to explain why torture or rape are wrong without reference to any subjective experience” (116).

This is the classic humanistic challenge posed to naturalistic accounts, of course, the demand that they discharge the specialized functions of intentional cognition the same way intentional cognition does. This demand amounts to little more than a canard, of course, once we appreciate the heuristic nature of intentional cognition. The challenge intentional cognition poses to natural cognition is to explain, not replicate, its structure and dynamics. We clearly evolved our intentional cognitive capacities, after all, to solve problems natural cognition could not reliably solve. This combination of power, economy, and specificity is the very thing that a genuinely naturalistic theory of meaning (such as my own) must explain.

 

“… fiction might thereby become the most potent force on earth, surpassing even wayward asteroids and natural selection. Hence if we want to understand our future, cracking genomes and crunching numbers is hardly enough. We must decipher the fictions that give meaning to the world.”

 

So moving forward it is important to understand how his theoretical approach elides the very possibility of a genuinely post-intentional future. Because he has no natural theory of meaning, he has no choice but to take the theoretical adequacy of his intentional idioms for granted. But if his intentional idioms possess the resources he requires to theorize the future, they must somehow remain out of play; his discursive ‘subject position’ must possess some kind of immunity to the scientific tsunami climbing our horizons. His very choice of tools limits the radicality of the story he tells. No matter how profound, how encompassing, the transformational deluge, Harari must somehow remain dry upon his theoretical ark. And this, as we shall see, is what ultimately swamps his conclusions.

But if the Hard Problem exempts his theoretical brand of intentionality, one might ask why it doesn’t exempt all intentionality from scientific delegitimation. What makes the scientific knowledge of nature so tremendously disruptive to humanity is the fact that human nature is, when all is said and done, just more nature. Conceding general exceptionalism, the thesis that humans possess something miraculous distinguishing them from nature more generally, would undermine the very premise of his project.

Without any way out of this bind, Harari fudges, basically. He remains silent on his own intentional (even humanistic) theoretical commitments, while attacking exceptionalism by expanding the franchise of meaning and consciousness to include animals: whatever intentional phenomena consist in, they are ultimately natural to the extent that animals are natural.

But now the problem has shifted. If humans dwell on a continuum with nature more generally, then what explains the Anthropocene, our boggling dominion of the earth? Why do humans stand so drastically apart from nature? The capacity that most distinguishes humans from their nonhuman kin, Harari claims (in line with contemporary theories), is the capacity to cooperate. He writes:

“the crucial factor in our conquest of the world was our ability to connect many humans to one another. Humans nowadays completely dominate the planet not because the individual human is far more nimble-fingered than the individual chimp or wolf, but because Homo sapiens is the only species on earth capable of cooperating flexibly in large numbers.” 131

He proposes a ‘shared fictions’ theory of mass social coordination (unfortunately, he doesn’t engage research on groupishness, which would have provided him with some useful, naturalistic tools, I think). He posits an intermediate level of existence between the objective and subjective, the ‘intersubjective,’ consisting of our shared beliefs in imaginary orders, which serve to distribute authority and organize our societies. “Sapiens rule the world,” he writes, “because only they can weave an intersubjective web of meaning; a web of laws, forces, entities and places that exist purely in their common imagination” (149). This ‘intersubjective web’ provides him with the theoretical level of description he thinks crucial to understanding our troubled cultural future.

He continues:

“During the twenty-first century the border between history and biology is likely to blur not because we will discover biological explanations for historical events, but rather because ideological fictions will rewrite DNA strands; political and economic interests will redesign the climate; and the geography of mountains and rivers will give way to cyberspace. As human fictions are translated into genetic and electronic codes, the intersubjective reality will swallow up the objective reality and biology will merge with history. In the twenty-first century fiction might thereby become the most potent force on earth, surpassing even wayward asteroids and natural selection. Hence if we want to understand our future, cracking genomes and crunching numbers is hardly enough. We must decipher the fictions that give meaning to the world.” 151

The way Harari sees it, ideology, far from being relegated to the prescientific theoretical midden, is set to become all-powerful, a consumer of worlds. This launches his extensive intellectual history of humanity, beginning with the algorithmic advantages afforded by numeracy, literacy, and currency, how these “broke the data-processing limitations of the human brain” (158). Where our hunter-gatherer ancestors could at best coordinate small groups, “[w]riting and money made it possible to start collecting taxes from hundreds of thousands of people, to organise complex bureaucracies and to establish vast kingdoms” (158).

Harari then turns to the question of how science fits in with this view of fictions, the nature of the ‘odd couple,’ as he puts it:

“Modern science certainly changed the rules of the game, but it did not simply replace myths with facts. Myths continue to dominate humankind. Science only makes these myths stronger. Instead of destroying the intersubjective reality, science will enable it to control the objective and subjective realities more completely than ever before.” 179

Science is what renders objective reality compliant to human desire. Storytelling is what renders individual human desires compliant to collective human expectations, which is to say, intersubjective reality. Harari understands that the relationship between science and religious ideology is not one of straightforward antagonism: “science always needs religious assistance in order to create viable human institutions,” he writes. “Scientists study how the world functions, but there is no scientific method for determining how humans ought to behave” (188). Though science has plenty of resources for answering means-type questions—what you ought to do to lose weight, for instance—it lacks the resources to fix the ends that rationalize those means. Science, Harari argues, requires religion to the extent that it cannot ground the all-important fictions enabling human cooperation (197).

Insofar as science is a cooperative, human enterprise, it can only destroy one form of meaning on the back of some other meaning. By revealing the anthropomorphism underwriting our traditional, religious accounts of the natural world, science essentially ‘killed God’—which is to say, removed any divine constraint on our actions or aspirations. “The cosmic plan gave meaning to human life, but also restricted human power” (199). Like stage-actors, we had a plan, but our role was fixed. Unfixing that role, killing God, made meaning into something each of us has to find for ourselves. Harari writes:

“Since there is no script, and since humans fulfill no role in any great drama, terrible things might befall us and no power will come to save us, or give meaning to our suffering. There won’t be a happy ending or a bad ending, or any ending at all. Things just happen, one after the other. The modern world does not believe in purpose, only in cause. If modernity has a motto, it is ‘shit happens.’” 200

The absence of a script, however, means that anything goes; we can play any role we want to. With the modern freedom from cosmic constraint comes postmodern anomie.

“The modern deal thus offers humans an enormous temptation, coupled with a colossal threat. Omnipotence is in front of us, almost within our reach, but below us yawns the abyss of complete nothingness. On the practical level, modern life consists of a constant pursuit of power within a universe devoid of meaning.” 201

Or to give it the Adornian spin it receives here on Three Pound Brain: the madness of a society that has rendered means, knowledge and capital, its primary end. Thus the modern obsession with the accumulation of the power to accumulate. And thus the Faustian nature of our present predicament (though Harari, curiously, never references Faust), the fact that “[w]e think we are smart enough to enjoy the full benefits of the modern deal without paying the price” (201). Even though physical resources such as material and energy are finite, no such limit pertains to knowledge. This is why “[t]he greatest scientific discovery was the discovery of ignorance” (212): it spurred the development of systematic inquiry, and therefore the accumulation of knowledge, and therefore the accumulation of power, which, Harari argues, cuts against objective or cosmic meaning. The question is simply whether we can hope to sustain this process—defer payment—indefinitely.

“Modernity is a deal,” he writes, and for all its apparent complexities, it is very straightforward: “The entire contract can be summarised in a single phrase: humans agree to give up meaning in exchange for power” (199). For me, the best way of thinking about this process of exchanging meaning for power is in terms of what Weber called disenchantment: the very science that dispels our anthropomorphic fantasy worlds is the science that delivers technological power over the real world. This real-world power is what drives traditional delegitimation: even believers acknowledge the vast bulk of the scientific worldview, as do the courts and (ideally at least) all governing institutions outside religion. Science is a recursive institutional ratchet (‘self-correcting’), leveraging the capacity to leverage ever more capacity. Now, after centuries of sheltering behind walls of complexity, human nature finds itself at the intersection of multiple domains of scientific inquiry. Since we’re nothing special, just more nature, we should expect our burgeoning technological power over ourselves to increasingly delegitimate traditional discourses.

Humanism, on this account, amounts to an adaptation to the ways science transformed our ancestral ‘neglect structure,’ the landscape of ‘unknown unknowns’ confronting our prehistorical forebears. Our social instrumentalization of natural environments—our inclination to anthropomorphize the cosmos—is the product of our ancestral inability to intuit the actual nature of those environments. Information beyond the pale of human access makes no difference to human cognition. Cosmic meaning requires that the cosmos remain a black box: the more transparent science rendered that box, the more our rationales retreated to the black box of ourselves. The subjectivization of authority turns on how intentional cognition (our capacity to cognize authority) requires the absence of natural accounts to discharge ancestral functions. Humanism isn’t so much a grand revolution in thought as the result of the human remaining the last scientifically inscrutable domain standing. The rationalizations had to land somewhere. Since human meaning likewise requires that the human remain a black box, the vast industrial research enterprise presently dedicated to solving our nature does not bode well.

But this approach, economical as it is, isn’t available to Harari since he needs some enchantment to get his theoretical apparatus off the ground. As the necessary condition for human cooperation, meaning has to be efficacious. The ‘Humanist Revolution,’ as Harari sees it, consists in the migration of cooperative efficacy (authority) from the cosmic to the human. “This is the primary commandment humanism has given us: create meaning for a meaningless world” (221). Rather than scripture, human experience becomes the metric for what is right or wrong, and the universe, once the canvas of the priest, is conceded to the scientist. Harari writes:

“As the source of meaning and authority was relocated from the sky to human feelings, the nature of the entire cosmos changed. The exterior universe—hitherto teeming with gods, muses, fairies and ghouls—became empty space. The interior world—hitherto an insignificant enclave of crude passions—became deep and rich beyond measure” 234

This re-sourcing of meaning, Harari insists, is true whether or not one still believes in some omnipotent God, insofar as all the salient anchors of that belief lie within the believer, rather than elsewhere. God may still be ‘cosmic,’ but he now dwells beyond the canvas of nature, somewhere in the occluded frame, a place where only religious experience can access Him.

Man becomes ‘man the meaning maker,’ the trope that now utterly dominates contemporary culture:

“Exactly the same lesson is learned by Captain Kirk and Captain Jean-Luc Picard as they travel the galaxy in the starship Enterprise, by Huckleberry Finn and Jim as they sail down the Mississippi, by Wyatt and Billy as they ride their Harley-Davidsons in Easy Rider, and by countless other characters in myriad other road movies who leave their home town in Pennsylvania (or perhaps New South Wales), travel in an old convertible (or perhaps a bus), pass through various life-changing experiences, get in touch with themselves, talk about their feelings, and eventually reach San Francisco (or perhaps Alice Springs) as better and wiser individuals.” 241

Not only is experience the new scripture, it is a scripture that is being continually revised and rewritten, a meaning that arises out of the process of lived life (yet somehow always managing to conserve the status quo). In story after story, the protagonist must find some ‘individual’ way to derive their own personal meaning out of an apparently meaningless world. This is a primary philosophical motivation behind The Second Apocalypse, the reason why I think epic fantasy provides such an ideal narrative vehicle for the critique of modernity and meaning. Fantasy worlds are fantastic, especially fictional, because they assert the objectivity of what we now (implicitly or explicitly) acknowledge to be anthropomorphic projections. The idea has always been to invert the modernist paradigm Harari sketches above, to follow a meaningless character through a meaningful world, using Kellhus to recapitulate the very dilemma Harari sees confronting us now:

“What then, will happen once we realize that customers and voters never make free choices, and once we have the technology to calculate, design, or outsmart their feelings? If the whole universe is pegged to the human experience, what will happen once the human experience becomes just another designable product, no different in essence from any other item in the supermarket?” 277

And so Harari segues to the future and the question of the ultimate fate of human meaning; this is where I find his steadfast refusal to entertain humanistic conceit most impressive. One need not ponder ‘designer experiences’ for long, I think, to get a sense of the fundamental rupture with the past it represents. These once speculative issues are becoming ongoing practical concerns: “These are not just hypotheses of philosophical speculations,” simply because ‘algorithmic man’ is becoming a technological reality (284). Harari provides a whirlwind tour of unnerving experiments clearly implying trouble for our intuitions, a discussion that transitions into a consideration of the ways we can already mechanically attenuate our experiences. A good number of the examples he adduces have been considered here, all of them underscoring the same, inescapable moral: “Free will exists in the imaginary stories we humans have invented” (283). No matter what your philosophical persuasion, our continuity with the natural world is an established scientific fact. Humanity is not exempt from the laws of nature. If humanity is not exempt from the laws of nature, then the human mastery of nature amounts to the human mastery of humanity.

He turns, at this point, to Gazzaniga’s research showing the confabulatory nature of human rationalization (via split brain patients), and Daniel Kahneman’s account of ‘duration neglect’—another favourite of mine. He offers an expanded version of Kahneman’s distinction between the ‘experiencing self,’ that part of us that actually undergoes events, and the ‘narrating self,’ the part of us that communicates—derives meaning from—these experiences, essentially using the dichotomy as an emblem for the dual process models of cognition presently dominating cognitive psychological research. He writes:

“most people identify with their narrating self. When they say, ‘I,’ they mean the story in their head, not the stream of experiences they undergo. We identify with the inner system that takes the crazy chaos of life and spins out of it seemingly logical and consistent yarns. It doesn’t matter that the plot is filled with lies and lacunas, and that it is rewritten again and again, so that today’s story flatly contradicts yesterday’s; the important thing is that we always retain the feeling that we have a single unchanging identity from birth to death (and perhaps from even beyond the grave). This gives rise to the questionable liberal belief that I am an individual, and that I possess a consistent and clear inner voice, which provides meaning for the entire universe.” 299

Humanism, Harari argues, turns on our capacity for self-deception, the ability to commit to our shared fictions unto madness, if need be. He writes:

“Medieval crusaders believed that God and heaven provided their lives with meaning. Modern liberals believe that individual free choices provide life with meaning. They are all equally delusional.” 305

Social self-deception is our birthright, the ability to believe what we need to believe to secure our interests. This is why the science, though shaking humanistic theory to the core, has done so little to interfere with the practices rationalized by that theory. As history shows, we are quite capable of shovelling millions into the abattoir of social fantasy. This delivers Harari to yet another big theme explored both here and in Neuropath: the problems raised by the technological concretization of these scientific findings. As Harari puts it:

“However, once heretical scientific insights are translated into everyday technology, routine activities and economic structures, it will become increasingly difficult to sustain this double-game, and we—or our heirs—will probably require a brand new package of religious beliefs and political institutions. At the beginning of the third millennium, liberalism [the dominant variant of humanism] is threatened not by the philosophical idea that there are no free individuals but rather by concrete technologies. We are about to face a flood of extremely useful devices, tools and structures that make no allowance for the free will of individual humans. Can democracy, the free market and human rights survive this flood?” 305-6


The first problem, as Harari sees it, is one of diminishing returns. Humanism didn’t become the dominant world ideology because it was true, it overran the collective imagination of humanity because it enabled. Humanistic values, Harari explains, afforded our recent ancestors a wide variety of social utilities, efficiencies turning on the technologies of the day. Those technologies, it turns out, require human intelligence and the consciousness that comes with it. (To depart from Harari, they are what David Krakauer calls ‘complementary technologies,’ tools that extend human capacity, as opposed to ‘competitive technologies,’ which render human capacities redundant.)

Making humans redundant, of course, means making experience redundant, something which portends the systematic devaluation of human experience, or the collapse of humanism. Harari calls this process the ‘Great Decoupling’:

“Over the last decades there has been an immense advance in computer intelligence, but there has been exactly zero advance in computer consciousness. As far as we know, computers in 2016 are no more conscious than their prototypes in the 1950s. However, we are on the brink of a momentous revolution. Humans are in danger of losing their value, because intelligence is decoupling from consciousness.” 311

He’s quick to acknowledge all the problems yet confronting AI researchers, insisting that the trend unambiguously points toward ever-expanding capacities. As he writes, “these technical problems—however difficult—need only be solved once” (317). The ratchet never stops clicking.

He’s also quick to block the assumption that humans are somehow exceptional: “The idea that humans will always have a unique ability beyond the reach of non-conscious algorithms is just wishful thinking” (319). He provides the (I think) terrifying example of David Cope, the University of California at Santa Cruz musicologist who has developed algorithms whose compositions strike listeners as more authentically human than compositions by humans such as J.S. Bach.

The second problem is the challenge of what (to once again depart from Harari) Neil Lawrence calls ‘System Zero,’ the question of what happens when our machines begin to know us better than we know ourselves. As Harari notes, this is already the case: “The shifting of authority from humans to algorithms is happening all around us, not as a result of some momentous governmental decision, but due to a flood of mundane choices” (345). Facebook can now guess your preferences better than your friends, your family, your spouse—and in some instances better than you yourself! He warns the day is coming when political candidates can receive real-time feedback via social media, when people can hear everything said about them always and everywhere. Projecting this trend leads him to envision something very close to Integration, where we become so embalmed in our information environments that “[d]isconnection will mean death” (344).

He writes:

“The individual will not be crushed by Big Brother; it will disintegrate from within. Today corporations and governments pay homage to my individuality and promise to provide medicine, education and entertainment customized to my unique needs and wishes. But in order to do so, corporations and governments first need to break me up into biochemical subsystems, monitor these subsystems with ubiquitous sensors and decipher their workings with powerful algorithms. In the process, the individual will transpire to be nothing but a religious fantasy.” 345

This is my own suspicion, and I think the process of subpersonalization—the neuroscientifically informed decomposition of consumers into economically relevant behaviours—is well underway. But I think it’s important to realize that as data accumulates, and researchers and their AIs find more and more ways to instrumentalize those data sets, what we’re really talking about are proliferating heuristic hacks (that happen to turn on neuroscientific knowledge). They need decipher us only so far as we comply. Also, the potential noise generated by a plethora of competing subpersonal communications seems to constitute an important structural wrinkle. It could be that the point most targeted by subpersonal hacking will at least preserve the old borders of the ‘self,’ fantasy that it was. Post-intentional ‘freedom’ could come to reside in the noise generated by commercial competition.

The third problem he sees for humanism lies in the almost certainly unequal distribution of the dividends of technology, a trope so well worn in narrative that we scarce need consider it here. It follows that liberal humanism, as an ideology committed to the equal value of all individuals, has scant hope of squaring the interests of the redundant masses against those of a technologically enhanced superhuman elite.

 

… this isn’t any mere cultural upheaval or social revolution, this is an unprecedented transformation in the history of life on this planet, the point when the evolutionary platform of behaviour, morphology, becomes the product of behaviour.

 

Under pretty much any plausible scenario you can imagine, the shared fiction of popular humanism is doomed. But as Harari has already argued, shared fictions are the necessary condition of social coordination. If humanism collapses, some kind of shared fiction has to take its place. And alas, this is where my shared journey with Harari ends. From this point forward, I think his analysis is largely an artifact of his own, incipient humanism.

Harari uses the metaphor of ‘vacuum,’ implying that humans cannot but generate some kind of collective narrative, some way of making their lives not simply meaningful to themselves, but more importantly, meaningful to one another. It is the mass resemblance of our narrative selves, remember, that makes our mass cooperation possible. [This is what misleads him, the assumption that ‘mass cooperation’ need be human at all by this point.] So he goes on to consider what new fiction might arise to fill the void left by humanism. The first alternative is ‘technohumanism’ (transhumanism, basically), which is bent on emancipating humanity from the authority of nature much as humanism was bent on emancipating humanity from the authority of tradition. Where humanists are free to think anything in their quest to actualize their desires, technohumanists are free to be anything in their quest to actualize their desires.

The problem is that the freedom to be anything amounts to the freedom to reengineer desire. So where objective meaning, following one’s god (socialization), gave way to subjective meaning, following one’s heart (socialization), it remains entirely unclear what the technohumanist hopes to follow or to actualize. As soon as we gain power over our cognitive being the question becomes, ‘Follow which heart?’

Or as Harari puts it,

“Techno-humanism faces an impossible dilemma here. It considers human will the most important thing in the universe, hence it pushes humankind to develop technologies that can control and redesign our will. After all, it’s tempting to gain control over the most important thing in the world. Yet once we have such control, techno-humanism will not know what to do with it, because the sacred human will would become just another designer product.” 366

Which is to say, something arbitrary. Where humanism aims ‘to loosen the grip of the past,’ transhumanism aims to loosen the grip of biology. We really see the limits of Harari’s interpretative approach here, I think, as well as why he falls short of a definitive account of the Semantic Apocalypse. The reason that ‘following your heart’ can substitute for ‘following the god’ is that they amount to the very same claim, ‘trust your socialization,’ which is to say, your pre-existing dispositions to behave in certain ways in certain contexts. The problem posed by the kind of enhancement extolled by transhumanists isn’t that shared fictions must be ‘sacred’ to be binding, but that something neglected must be shared. Synchronization requires trust, the ability to simultaneously neglect others (and thus dedicate behaviour to collective problem solving) and yet predict their behaviour nonetheless. Absent this shared background, trust is impossible, and therefore synchronization is impossible. Cohesive, collective action, in other words, turns on a vast amount of evolutionary and educational stage-setting, common cognitive systems stamped with common forms of training, all of it ancestrally impervious to direct manipulation. Insofar as transhumanism promises to place the material basis of individual desire within the compass of individual desire, it promises to throw our shared background to the winds of whimsy. Transhumanism is predicated on the ever-deepening distortion of our ancestral ecologies of meaning.

Harari reads transhumanism as a reductio of humanism, the point where the religion of individual empowerment unravels the very agency it purports to empower. Since he remains, at least residually, a humanist, he places ideology—what he calls the ‘intersubjective’ level of reality—at the foundation of his analysis. It is the mover and shaker here, what Harari believes will stamp objective reality and subjective reality both in its own image.

And the fact of the matter is, he really has no choice, given he has no other way of generalizing over the processes underwriting the growing Whirlwind that has us in its grasp. So when he turns to digitalism (or what he calls ‘Dataism’), it appears to him to be the last option standing:

“What might replace desires and experiences as the source of all meaning and authority? As of 2016, only one candidate is sitting in history’s reception room waiting for the job interview. This candidate is information.” 366

Meaning has to be found somewhere. Why? Because synchronization requires trust, and trust requires shared commitments to shared fictions, stories expressing those values we hold in common. As we have seen, science cannot determine ends, only means to those ends. Something has to fix our collective behaviour, and if science cannot, we will perforce turn to some kind of religion…

But what if we were to automate collective behaviour? There’s a second candidate that Harari overlooks, one which I think is far, far more obvious than digitalism (which remains, for all its notoriety, an intellectual position—and a confused one at that, insofar as it has no workable theory of meaning/cognition). What will replace humanism? Atavism… Fantasy. For all the care Harari places in his analyses, he overlooks how investing AI with ever-increasing social decision-making power simultaneously divests humans of that power, thus progressively relieving us of the need for shared values. The more we trust to AI, the less trust we require of one another. We need only have faith in the efficacy of our technical (and very objective) intermediaries; the system synchronizes us automatically in ways we need not bother knowing. Ideology ceases to be a condition of collective action. We need not have any stories regarding our automated social ecologies whatsoever, so long as we mind the diminishing explicit constraints the system requires of us.

Outside our dwindling observances, we are free to pursue whatever story we want. Screw our neighbours. And what stories will those be? Well, the kinds of stories we evolved to tell, which is to say, the kinds of stories our ancestors told to each other. Fantastic stories… such as those told by George R. R. Martin, Donald Trump, myself, or the Islamic state. Radical changes in hardware require radical changes in software, unless one has some kind of emulator in place. You have to be sensible to social change to ideologically adapt to it. “Islamic fundamentalists may repeat the mantra that ‘Islam is the answer,’” Harari writes, “but religions that lose touch with the technological realities of the day lose their ability even to understand the questions being asked” (269). But why should incomprehension or any kind of irrationality disqualify the appeal of Islam, if the basis of the appeal primarily lies in some optimization of our intentional cognitive capacities?

Humans are shallow information consumers by dint of evolution, and deep information consumers by dint of modern necessity. As that necessity recedes, it stands to reason our patterns of consumption will recede with it, that we will turn away from the malaise of perpetual crash space and find solace in ever more sophisticated simulations of worlds designed to appease our ancestral inclinations. As Harari himself notes, “Sapiens evolved in the African savannah tens of thousands of years ago, and their algorithms are just not built to handle twenty-first century data flows” (388). And here we come to the key to understanding the profundity, and perhaps even the inevitability of the Semantic Apocalypse: intentional cognition turns on cues which turn on ecological invariants that technology is even now rendering plastic. The issue here, in other words, isn’t so much a matter of ideological obsolescence as cognitive habitat destruction, the total rewiring of the neglected background upon which intentional cognition depends.

What anyone pondering the future impact of technology needs to pause and consider is that this isn’t any mere cultural upheaval or social revolution: this is an unprecedented transformation in the history of life on this planet, the point when the evolutionary platform of behaviour, morphology, becomes the product of behaviour. Suddenly a system that leveraged cognitive capacity via natural selection will be leveraging that capacity via neural selection—behaviourally. A change so fundamental pretty clearly spells the end of all ancestral ecologies, including the cognitive. Humanism is ‘disintegrating from within’ because intentional cognition itself is beginning to founder. The tsunami of information thundering above the shores of humanism is all deep information, information regarding what we evolved to ignore—and therefore trust. Small wonder, then, that it scuttles intentional problem-solving, generates discursive crash spaces that only philosophers once tripped into.

The more the mechanisms behind learning impediments are laid bare, the less the teacher can attribute performance to character, the more they are forced to adopt a clinical attitude. What happens when every impediment to learning is laid bare? Unprecedented causal information is flooding our institutions, removing more and more behaviour from the domain of character. Why? Because character judgments always presume individuals could have done otherwise, and presuming individuals could have done otherwise presumes that we neglect the actual sources of behaviour. Harari brushes this thought on a handful of occasions, writing, most notably:

“In the eighteenth century Homo sapiens was like a mysterious black box, whose inner workings were beyond our grasp. Hence when scholars asked why a man drew a knife and stabbed another to death, an acceptable answer said: ‘Because he chose to…” 282

But he fails to see the systematic nature of the neglect involved, and therefore the explanatory power it affords. Our ignorance of ourselves, in other words, determines not simply the applicability, but the solvency of intentional cognition as well. Intentional cognition allowed our ancestors to navigate opaque or ‘black box’ social ecologies. The role causal information plays in triggering intuitions of exemption is tuned to the efficacy of this system overall. By and large our ancestors exempted those individuals in those circumstances that best served their tribe as a whole. However haphazardly, moral intuitions involving causality served some kind of ancestral optimization. So when actionable causal information regarding our behaviour becomes available, we have no choice but to exempt those behaviours, no matter what kind of large scale distortions result. Why? Because it is the only moral thing to do.

Welcome to crash space. We know this is crash space as opposed to, say, scientifically informed enlightenment (the way it generally feels) simply by asking what happens when actionable causal information regarding our every behaviour becomes available. Will moral judgment become entirely inapplicable? For me, the free will debate has always been a paradigmatic philosophical crash space, a place where some capacity always seems to apply, yet consistently fails to deliver solutions because it does not. We evolved to communicate behaviour absent information regarding the biological sources of behaviour: is it any wonder that our cause-neglecting workarounds cannot square with the causes they work around? The growing institutional challenges arising out of the medicalization of character turn on the same cognitive short-circuit. How can someone who has no choice be held responsible?

Even as we drain the ignorance intentional cognition requires from our cognitive ecologies, we are flooding them with AI, what promises to be a deluge of algorithms trained to cue intentional cognition, impersonate persons, in effect. The evidence is unequivocal: our intentional cognitive capacities are easily cued out of school—in a sense, this is the cornerstone of their power, the ability to assume so much on the basis of so little information. But in ecologies designed to exploit intentional intuitions, this power and versatility becomes a tremendous liability. Even now litigators and lawmakers find themselves beset with the question of how intentional cognition should solve for environments flooded with artifacts designed to cue human intentional cognition to better extract various commercial utilities. The problems of the philosophers dwell in ivory towers no more.

First we cloud the water, then we lay the bait—we are doing this to ourselves, after all. We are taking our first stumbling steps into what is becoming a global social crash space. Intentional cognition is heuristic cognition. Since heuristic cognition turns on shallow information cues, we have good reason to assume that our basic means of understanding ourselves and our projects will be incompatible with deep information accounts. The more we learn about cognition, the more apparent this becomes, the more our intentional modes of problem-solving will break down. I’m not sure there’s anything much to be done at this point save getting the word out, empowering some critical mass of people with a notion of what’s going on around them. This is what Harari does to a remarkable extent with Homo Deus, something for which we may all have cause to thank him.

Science is steadily revealing the very sources intentional cognition evolved to neglect. Technology is exploiting these revelations, busily engineering emulators to pander to our desires, allowing us to shelter more and more skin from the risk and toil of natural and social reality. Designer experience is designer meaning. Thus the likely irony: the end of meaning will appear to be its greatest blooming, the consumer curled in the womb of institutional matrons, dreaming endless fantasies, living lives of spellbound delight, exploring worlds designed to indulge ancestral inclinations.

To make us weep and laugh for meaning, never knowing whether we are together or alone.

Myth as Meth

by rsbakker

What is the lesson that Tolkien teaches us with Middle-earth? The grand moral, I think, is that the illusion of a world can be so easily cued. Tolkien reveals that meaning is cheap, easy to conjure, easy to believe, so long as we sit in our assigned seats. This is the way, at least, I thematically approach my own world-building. Like a form of cave-painting.

The idea here is to look at culture as a meaning machine, where ‘meaning’ is understood not as content, but in a post-intentional sense: various static and dynamic systems cuing various ‘folk’ forms of human cognition. Think of the wonder of the ‘artists’ in Chauvet, the amazement of discovering how to cue the cognition of worlds upon walls using only charcoal. Imagine that first hand, that first brain, tracking that reflex within itself, simply drawing a blacked finger down the wall.

[Image: chauvet horses]

Traditional accounts, of course, would emphasize the symbolic or representational significance of events such as Chauvet, thereby dragging the question of the genesis of human culture into the realm of endless philosophical disputation. On a post-intentional view, however, what Chauvet vividly demonstrates is how human cognition can be easily triggered out of school. Human cognition is so heuristic, in fact, that it has little difficulty simulating those cues once they have been discovered. Since human cognition also turns out to be wildly opportunistic, the endless socio-practical gerrymandering characterizing culture was all but inevitable. Where traditional views of the ‘human revolution’ focus on utterly mysterious modes of symbolic transmission and elaboration, the present account focuses on the processes of cue isolation and cognitive adaptation. What are isolated are material/behavioural means of simulating cues belonging to ancestral forms of cognition. What is adapted is the cognitive system so cued: the cave paintings at Chauvet amount to a socio-cognitive adaptation of visual cognition, a way to use visual cognitive cues ‘out of school’ to attenuate behaviour. Though meaning, understood intentionally, remains an important explanandum in this approach, ‘meaning’ understood post-intentionally simply refers to the isolation and adaptation of cue-based cognitive systems to achieve some systematic behavioural effect. The basic processes involved are no more mysterious than those underwriting camouflage in nature.*

A post-intentional theory of meaning focuses on the continuity of semantic practices and nature, and views any theoretical perspective entailing the discontinuity of those practices and nature as spurious artifacts of the application of heuristic modes of cognition to theoretical issues. A post-intentional theory of meaning, in other words, views culture as a natural phenomenon, and not some arcane artifact of something empirically inexplicable. Signification is wholly material on this account, with all the messiness that comes with it.

Cognitive systems optimize effectiveness by reaching out only as far into nature as they need to. If they can solve distal systems via proximal signals possessing reliable systematic relationships to those systems, they will do so. Humans, like all other species possessing nervous systems, are shallow information consumers in what might be called deep information environments.


Consider anthropomorphism, the reflexive application of radically heuristic socio-cognitive capacities dedicated to solving our fellow humans to nonhuman species and nature more generally. When we run afoul anthropomorphism we ‘misattribute’ folk posits adapted to human problem-solving to nonhuman processes. As misapplications, anthropomorphisms tell us nothing about the systems they take as their putative targets. One does not solve a drought by making offerings to gods of rain. This is what makes anthropomorphic worldviews ‘fantastic’: the fact that they tell us very little, if anything, about the very nature they purport to describe and explain.

Now this, on the face of things, should prove maladaptive, since it amounts to squandering tremendous resources on behaviour effecting solutions to problems that do not exist. But of course, as is the case with so much human behaviour, it likely possesses ulterior functions serving the interests of individuals in ways utterly inaccessible to those individuals, at least in ancestral contexts.

The cognitive sophistication required to solve those deep information environments effectively rendered them inscrutable, impenetrable black boxes, short the development of science. What we painted across the sides of those boxes, then, could only be fixed by our basic cognitive capacities and by whatever ulterior function they happened to discharge. Given the limits of human cognition, our ancestors could report whatever they wanted about the greater world (their deep information environments), so long as those reports came cheap and/or discharged some kind of implicit function. They enjoyed what might be called deep discursive impunity. All they would need is a capacity to identify cues belonging to social cognition in the natural world—to see, for instance, retribution in the random walk of weather—and the ulterior exploitation of anthropomorphism could get underway.

Given the ancestral inaccessibility of deep information, and given the evolutionary advantages of social coordination and cohesion, particularly in the context of violent intergroup competition, it becomes easy to see how the quasi-cognition of an otherwise impenetrable nature could become a resource. When veridicality has no impact one way or another, social and individual facilitation alone determines the selection of the mechanisms responsible. When anything can be believed, to revert to folk idioms, then only those beliefs that deliver matter. This, then, explains why different folk accounts of the greater world possess deep structural similarities despite their wild diversity. Their reliance on socio-cognitive systems assures deep commonalities in form, as do the common ulterior functions provided. The insolubility of the systems targeted, on the other hand, assures any answer meeting the above constraints will be as effective as any other.

Given the evolutionary provenance of this situation, we are now in a position to see how accurate deep information can be seen as a form of cognitive pollution, something alien that disrupts and degrades ancestrally stable, shallow information ecologies. Strangely enough, what allowed our ancestors to report the nature of nature was the out-and-out inscrutability of nature, the absence of any (deep) information to the contrary—and the discursive impunity this provides. Anthropomorphic quasi-cognition requires deep information neglect. The greater our scientifically mediated sensitivity to deep information becomes, the less tenable anthropomorphic quasi-cognition becomes, the more fantastic folk worlds become. The worlds arising out of our evolutionary heritage find themselves relegated to fairy tales.

Fantasy worlds, then, can be seen as an ontological analogue to the cave paintings at Chauvet. They cue ancestral modes of cognition, simulating the kinds of worlds our ancestors reflexively reported, folk worlds rife with those posits they used to successfully solve one another in a wide variety of practical contexts, meaningful worlds possessing the kinds of anthropomorphic ontologies we find in myths and religions.

With the collapse of the cognitive ecology that made these worlds possible, comes the ineffectiveness of the tools our ancestors used to navigate them. We now find ourselves in deep information worlds, environments not only rife with information our ancestors had neglected, but also crammed with environments engineered to manipulate shallow information cues. We now find ourselves in a world overrun with crash spaces, regions where our ancestral tools consistently fail, and cheat spaces, regions where they are exploited for commercial gain.

This is a rather remarkable fact, even if it becomes entirely obvious upon reflection. Humans possess ideal cognitive ecologies, solve spaces, environments rewarding their capacities, just as humans possess crash spaces, environments punishing their capacities. This is the sense in which fantasy worlds can be seen as a compensatory mechanism, a kind of cognitive eco-preserve, a way to inhabit more effortless shallow information worlds, pseudo-solution spaces, hypothetical environments serving up largely unambiguous cues to generally reliable cognitive capacities. And like biological eco-preserves, perhaps they serve an important function. As we saw with anthropomorphism above, pseudo-solution spaces can be solvers (as opposed to crashers) in their own right—culture is nothing if not a testimony to this.


But fantasy worlds are also the playground of blind brains. The more we learn about ourselves, the more we learn how to cue different cognitive capacities out of school—how to cheat ourselves for good or ill. Our shallow information nature is presently the focus of a vast, industrial research program, one gradually providing the information, techniques, and technology required to utterly pre-empt our ancestral ecologies, which is to say, to perfectly simulate ‘reality.’ The reprieve from the cognitive pollution of actual environments itself potentially amounts to more cognitive pollution. We are, in some respect at least, a migratory species, one prone to gravitate toward greener pastures. Is the migration between realities any less inevitable than the migration across lands?

Via the direct and indirect deformation of existing socio-cognitive ecologies, deep information both drives the demand for and enables the high-dimensional cuing of fantastic cognition. In our day and age, a hunger for meaning is at once a predisposition to seek the fantastic. We should expect that hunger to explode with the pace of technological change. For all the Big Data ballyhoo, it pays to remember that we are bound up in an auto-adaptive macro-social system that is premised upon solving us, mastering our cognitive reflexes in ways either invisible or pleasing. We are presently living through the age where it succeeds.

Fantasy is zombie scripture, the place where our ancient assumptions lurch in the semblance of life. The fantasy writer is the voodoo magician, imbuing dead meaning with fictional presence. This resurrection can either facilitate our relation to the actual world, or it can pre-empt it. Science and technology are the problem here. The mastery of deep information environments enables ever greater degrees of shallow information capture. The better our zombie natures are understood, the more effectively our reward systems can be tuned, and the deeper our descent into this or that variety of fantasy becomes. This is the dystopic image of Akratic society, a civilization ever more divided between deep and shallow information consumers, between those managing the mechanisms, and those captured in some kind of semantic cheat space.

How Science Reveals the Limits of ‘Nooaesthetics’ (A Reply to Alva Noë)

by rsbakker

As a full-time artist (novelist) who long ago gave up on the ability of traditional aesthetics (or as I’ll refer to it here, ‘nooaesthetics’) to do much more than recontextualize art in ways that yoke it to different ingroup agendas, I look at the ongoing war between the sciences and the scholarly traditions of the human as profoundly exciting. The old, perpetually underdetermined convolutions are in the process of being swept away—and good riddance! Alva Noë, however, sees things differently.

So much of rhetoric turns on asking only those questions that flatter your view. And far too often, this amounts to asking the wrong questions, in particular, those questions that only point your way. All the other questions, you pass over in strategic silence. Noë provides a classic example of this tactic in “How Art Reveals the Limits of Neuroscience,” his recent critique of ‘neuroaesthetics’ in The Chronicle of Higher Education.

So for instance, it seems pretty clear that art is a human activity, a quintessentially human activity according to some. As a human activity, it seems pretty clear that our understanding of art turns on our understanding of humanity. As it turns out, we find ourselves in the early stages of the most radical revolution in our understanding of the human ever… Period. So it stands to reason that a revolution in our understanding of the human will amount to a revolution in our understanding of human activities—such as art.

The problem with revolutions, of course, is that they involve the overthrow of entrenched authorities, those invested in the old claims and the old ways of doing business. This is why revolutions always give rise to apologists, to individuals possessing the rhetorical means of rationalizing the old ways, while delegitimizing the new.

Noë, in this context at least, is pretty clearly the apologist, applying words as poultices, ways to soothe those who confuse old, obsolete necessities with absolute ones. He could have framed his critique of neuroaesthetics in this more comprehensive light, but that would have the unwelcome effect of raising other questions, the kind that reveal the poverty of the case he assembles. The fact is, for all the purported shortcomings of neuroaesthetics he considers, he utterly fails to explain why ‘nooaesthetics,’ the analysis, interpretation, and evaluation of art using the resources of the tradition, is any better.

The problem, as Noë sees it, runs as follows:

“The basic problem with the brain theory of art is that neuroscience continues to be straitjacketed by an ideology about what we are. Each of us, according to this ideology, is a brain in a vat of flesh and bone, or, to change the image, we are like submariners in a windowless craft (the body) afloat in a dark ocean of energy (the world). We know nothing of what there is around us except what shows up on our internal screens.”

As a description of parts of neuroscience, this is certainly the case. But as a high-profile spokesperson for enactive cognition, Noë knows full well that the representational paradigm is a fiercely debated one in the cognitive sciences. But it suits his rhetorical purposes to choose the most theoretically ill-equipped foes, because, as we shall see, his theoretical equipment isn’t all that capable either.

As a one-time Heideggerean, I recognize Noë’s tactics as my own from way back when: charge your opponent with presupposing some ‘problematic ontological assumption,’ then show how this or that cognitive register is distorted by said assumption. Among the most venerable of those problematic assumptions has to be the charge of ‘Cartesianism,’ one that has become so overdetermined as to be meaningless without some kind of qualification. Noë describes his understanding as follows:

“Crucially, this picture — you are your brain; the body is the brain’s vessel; the world, including other people, are unknowable stimuli, sources of irradiation of the nervous system — is not one of neuroscience’s findings. It is rather something that has been taken for granted by neuroscience from the start: Descartes’s conception with a materialist makeover.”

In cognitive science circles, Noë is notorious for the breezy way he consigns cognitive scientists to his ‘Cartesian box.’ As a fellow anti-representationalist, I often find his disregard for the nuances posed by his detractors troubling. Consider:

“Careful work on the conceptual foundations of cognitive neuroscience has questioned the plausibility of straightforward mind-brain reduction. But many neuroscientists, even those not working on such grand issues as the nature of consciousness, art, and love, are committed to a single proposition that is, in fact, tantamount to a Cartesian idea they might be embarrassed to endorse outright. The momentous proposition is this: Every thought, feeling, experience, impression, value, argument, emotion, attitude, inclination, belief, desire, and ambition is in your brain. We may not know how the brain manages this feat, but, so it is said, we are beginning to understand. And this new knowledge — of how the organization of bits of matter inside your head can be your personality, thoughts, understanding, wonderings, religious or sexual impulses — is surely among the most exciting and important in all of science, or so it is claimed.”

I hate to say it, but this is a mischaracterization. One has to remember that before cognitive science, theory was all we had when it came to the human. Guesswork, profound to the extent that we consider ourselves profound, but guesswork all the same. Cognitive science, in its many-pronged attempt to scientifically explain the human, has inherited all this guesswork. What Noë calls ‘careful work’ simply refers to his brand of guesswork, enactive cognition, and its concerns, like the question of how the ‘mind’ is related to the ‘brain,’ are as old as the hills. ‘Straightforward mind-brain reduction,’ as he calls it, has always been questioned. This mystery is a bullet that everyone in the cognitive sciences bites in one way or another. The ‘momentous proposition’ that the majority of neuroscientists assume isn’t that “[e]very thought, feeling, experience, impression, value, argument, emotion, attitude, inclination, belief, desire, and ambition is in [our] brain,” but rather that every thought, feeling, experience, impression, value, argument, emotion, attitude, inclination, belief, desire, and ambition involves our brain. Noë’s Cartesian box assumption is nowhere so simple or so pervasive as he would have you believe.

He knows this, of course, which is why he devotes the next paragraph to dispatching those scientists who want (like Noë himself does, ultimately) to have it both ways. He needs his Cartesian box to better frame the contest in clear-cut ‘us against them’ terms. The fact that cognitive science is a muddle of theoretical dissension—and moreover, that it knows as much—simply does not serve his tradition-redeeming narrative. So you find him claiming:

“The concern of science, humanities, and art, is, or ought to be, the active life of the whole, embodied, environmentally and socially situated animal. The brain is necessary for human life and consciousness. But it can’t be the whole story. Our lives do not unfold in our brains. Instead of thinking of the Creator Brain that builds up the virtual world in which we find ourselves in our heads, think of the brain’s job as enabling us to achieve access to the places where we find ourselves and the stuff we share those places with.”

These, of course, are platitudes. In philosophical debates, when representationalists critique proponents of embodied or enactive cognition like Noë, they always begin by pointing out their agreement with claims like these. They entirely agree that environments condition experience, but disagree (given ‘environmentally off-line’ phenomena such as mental imagery or dreams) that they are directly constitutive of experience. The scientific view is de facto a situated view, a view committed to understanding natural systems in context, as contingent products of their environments. As it turns out, the best way to do this involves looking at these systems mechanically, not in any ‘clockwork’ deterministic sense, but in the far richer sense revealed by the life sciences. To understand how a natural system fits into its environment, we need to understand it, statistically if not precisely, as a component of larger systems. The only way to do this is to figure out how, as a matter of fact, it works, which is to say, to understand its own components. And it just so happens that the brain is the most complicated machine we have ever encountered.

The overarching concern of science is always the whole; it just so happens that the study of minutiae is crucial to understanding the whole. Does this lead to institutional myopia? Of course it does. Scientists are human like anyone else, every bit as prone to map local concerns across global ones. The same goes for English professors and art critics and novelists and Noë. The difference, of course, is the kind of cognitive authority possessed by scientists. Where the artistic decisions I make as a novelist can potentially enrich lives, discoveries in science can also save them, perhaps even create new forms of life altogether.

Science is bloody powerful. This, ultimately, is what makes the revolution in our human self-understanding out and out inevitable. Scientific theory, unlike theory elsewhere, commands consensus, because scientific theory, unlike theory elsewhere, reliably provides us with direct power over ourselves and our environments. Scientific understanding, when genuine, cannot but revolutionize. Nooaesthetic understanding, like religious or philosophical understanding, simply has no way of arbitrating its theoretical claims. It is, compared to science at least, toothless.

And it always has been. Only the absence of any real scientific understanding of the human has allowed us to pretend otherwise all these years, to think our armchair theory games were more than mere games. And that’s changing.

So of course it makes sense to be wary of scientific myopia, especially given what science has taught us about our cognitive foibles. Humans oversimplify, and science, like art and traditional aesthetics, is a human enterprise. The difference is that science, unlike traditional aesthetics, revolutionizes our collective understanding of ourselves and the world.

The very reason we need to guard against scientific myopia, in other words, is also the very reason why science is doomed to revolutionize the aesthetic. We need to be wary of things like Cartesian thinking simply because it really is the case that our every thought, feeling, experience, impression, value, argument, emotion, attitude, inclination, belief, desire, and ambition turns on our biology in some fundamental respect. The only real question is how.

But Noë is making a far different and far less plausible claim: that contemporary neuroscience has no place in aesthetics.

“Neuroscience is too individual, too internal, too representational, too idealistic, and too antirealistic to be a suitable technique for studying art. Art isn’t really a phenomenon at all, not in the sense that photosynthesis or eyesight are phenomena that stand in need of explanation. Art is, rather, a mode of investigation, a style of research, into what we are. Art also gives us an opportunity to observe ourselves in the act of knowing the world.”

The reason for this, Noë is quick to point out, isn’t that the sciences of the human don’t have important things to say about a human activity such as art—of course they do—but that “neuroscience has failed to frame a plausible conception of human nature and experience.”

Neuroscience, in other words, possesses no solution to the mind-body problem. Like biology before the institutionalization of evolution, cognitive science lacks the theoretical framework required to unify the myriad phenomena of the human. But then, so does Noë, who only has philosophy to throw at the problem, philosophy that, by his own admission, neuroscience does not find all that compelling.

Which at last frames the question of neuroaesthetics the way Noë should have framed it in the beginning. Say we agree with Noë, and decide that neuroaesthetics has no place in art criticism. Okay, so what does? The possibility that neuroaesthetics ‘gets art wrong’ tells us nothing about the ability of nooaesthetics, traditional art criticism turning on folk-psychological idioms, to get art right. After all, the fact that science has overthrown every single traditional domain of speculation it has encountered strongly suggests that nooaesthetics has got art wrong as well. What grounds do we have for assuming that, in this one domain at least, our guesswork has managed to get things right? Like any other domain of traditional speculation on the human, theorists can’t even formulate their explananda in a consensus-commanding way, let alone explain them. Noë can confidently claim to know ‘What Art Is’ if he wants, but ultimately he’s taking a very high number in a very long line at a wicket that, for all anyone knows, has always been closed.

The fact is, despite all the verbiage Noë has provided, it seems pretty clear that neuroaesthetics—even if inevitably myopic in this, the age of its infancy—will play an ever more important role in our understanding of art, and that the nooaesthetic conceits of our past will correspondingly dwindle ever further into the mists of prescientific fable and myth.

As this artist thinks they should.

More Disney than Disney World: Semiotics as Theoretical Make-believe (II)

by rsbakker

III: The Gilded Stage

We are one species among 8.7 million, organisms embedded in environments that will select us the way they have our ancestors for 3.8 billion years running. Though we are (as a matter of empirical fact) continuous with our environments, the information driving our environmental behaviour is highly selective. The selectivity of our environmental sensitivities means that we are encapsulated, both in terms of the information available to our brain, and in terms of the information available for consciousness. Encapsulation simply follows from the finite, bounded nature of cognition. Human cognition is the product of ancestral human environments, a collection of good enough fixes for whatever problems those environments regularly posed. Given the biological cost of cognition, we should expect that our brains have evolved to derive as much information as possible from whatever signals are available, to continually jump to reproductively advantageous conclusions. We should expect to be insensitive to the vast majority of information in our environments, to neglect everything save information that had managed to get our ancestors born.

As it turns out, shrewd guesswork carried the cognitive day. The correlate of encapsulated information access, in other words, is heuristic cognitive processing, a tendency to always see more than there really is.

So consider the streetscape from above once again:

[Image: Southwest Orange-20150421-00452]

This looks like a streetscape only because the information provided generally cues the existence of hidden dimensions, which in this case simply do not exist. Since the cuing is always automatic and implicit, you just are looking down a street. Change your angle of access and the illusion of hidden dimensions—which is to say, reality—abruptly evaporates. The impossible New York skyline is revealed as counterfeit.

[Image: Southwest Orange-20150421-00453]

Let’s call a stage any environment that reliably cues the cognition of alternate environments. On this definition, a stage could be the apparatus of a trapdoor spider, say, or a nest parasitized by a cuckoo, or a painting, or an epic poem, or yes, Disney World—any environment that reliably triggers the cognition of some environment other than the environment actually confronting some organism.

As the inclusion of the spider and the cuckoo should suggest, a stage is a biological phenomenon, the result of some organism cognizing one environment as another environment. Stages, in other words, are not semantic. It is simply the case that beetles sensing environments absent spiders will blunder into trapdoor spiders. It’s simply the case that some birds, sensing chicks, will feed those chicks, even if one of them happens to be a cuckoo. It is simply the case that various organisms exploit the cognitive insensitivities of various other organisms. One need not ascribe anything so arcane as ‘false beliefs’ to birds and beetles to make sense of their exploitation. All they need do is function in a way typically cued by one family of (often happy) environments in a different (often disastrous) environment.

Stages are rife throughout the natural world simply because biological cognition is so expensive. All cognition can be exploited because all cognition is bounded, dependent on taking innumerable factors for granted. Probabilistic guesses have to be made always and everywhere, such are the exigencies of survival and reproduction. Competing species need only happen upon ways to trigger those guesses in environments reproductively advantageous to them, and selection will pace out a new niche, a position in what might be called manipulation space.

The difficulty with qualifying a stage as a biological phenomenon, however, is that I included intentional artifacts such as narratives, paintings, and amusement parks as examples of stages above. The problem with this is that no one knows how to reconcile the biological with the intentional, how to fit meaning into the machinery of life.

And yet, as easy as it is to anthropomorphize the cuckoo’s ‘treachery’ or the trapdoor spider’s ‘cunning’—to infuse our biological examples with meaning—it seems equally easy to ‘zombify’ narrative or painting or Disney World. Hearing the Iliad, for instance, is a prodigious example of staging, insofar as it involves the serial cognition of alternate environments via auditory cues embedded in an actual, but largely neglected, environment. One can easily look at the famed cave paintings of Chauvet, say, as a manipulation of visual cues that automatically triggers the cognition of absent things, in this case, horses:

[Image: chauvet horses]

But if narrative and painting are stages so far as ‘cognizing alternate environments’ goes, the differences between things like the Iliad or Chauvet and things like trapdoor spiders and cuckoos are nothing less than astonishing. For one, the narrative and pictorial cuing of alternative environments is only partial; the ‘alternate environment’ is entertained as opposed to experienced. For another, the staging involved in the former is communicative, whereas the staging involved in the latter is not. Narratives and paintings mean things, they possess ‘symbolic significance,’ or ‘representational content,’ whereas the predatory and parasitic stages you find in the natural world do not. And since meaning resists biological explanation, this strongly suggests that communicative staging resists biological explanation.

But let’s press on, daring theorists that we are, and see how far our ‘zombie stage’ can take us. The fact is, the ‘manipulation space’ intrinsic to bounded cognition affords opportunities as well as threats. In the case of Chauvet, for instance, you can almost feel the wonder of those first artists discovering the relations between technique and visual effect, ways to trick the eye into seeing what was not there there. Various patterns of visual information cue cognitive machinery adapted to solve environments absent those environments. Flat surfaces become windows.

Let’s divvy things up differently, look at cognition and metacognition in terms of multiple channels of information availability versus cognitive capacity. On this account, staging need not be complete: as with Chauvet, the cognition of alternate environments can be partial, localized within the present environment. And as with Chauvet, this embedded staging can be instrumentalized, exploited for various kinds of effects. Just how the cave paintings at Chauvet were used will always be a matter of archaeological speculation, but this in itself tells us something important about the kinds of stages we’re now talking about: namely, their specificity. We share the same basic cognitive mechanisms as the original creators and consumers of the Horses, for instance, but we share nothing of their individual histories. This means the stage we step onto encountering them is bound to differ, perhaps radically, from the stage they stepped onto encountering them in the Upper Paleolithic. Since no individuals share precisely the same history, this means that all embedded stages are unique in some respect.

The potential evolutionary value of embedded stages, the kind of ‘cognitive double-vision’ peculiar to humans, seems relatively clear. If you can draw a horse you can show a fellow hunter what to look for, what direction to approach it, where to strike with a spear, how to carve the joints for efficient transportation, and so on. Embedding, in other words, allows organisms to communicate cognitive relationships to actual environments by cuing the cognition of that environment absent that environment. Embedding also allows organisms to communicate cognitive relationships to nonexistent environments as well. If you can draw a cave bear, you can just as easily deceive as teach a potential competitor. And lastly, embedding allows organisms to game their own cognitive systems. By experimenting with patterns of visual information, they can trigger a wide variety of different responses, triggering wonder, lust, fear, amusement, and so on. The cave paintings at Chauvet include what is perhaps the oldest example of pictorial ‘porn’ (in this case, a vulva formed by a bull overlapping a lion) for a reason.

[Image: chauvet vulva]

Humans, you could say, are the staging animal, the animal capable of reorganizing and coordinating their cognitive comportments via the manipulation of available information into cues, those patterns prone to trigger various heuristic systems ‘out of school.’ Research into episodic memory reveals an intimate relation between the constructive (as opposed to veridical) nature of episodic memory and the ability to imagine future environments. Apparently the brain does not so much record events as it ransacks them, extracting information strategic to solving future environments. Nothing demonstrates the profound degree to which the brain is invested in strategic staging so much as the default or task-negative network. Whenever we find ourselves disengaged from some ongoing task, our brains, far from slowing down, switch modes and begin processing alternate, typically social, environments. We ‘daydream,’ or ‘ruminate,’ or ‘fantasize,’ activities almost as metabolically expensive as performing focussed tasks. The resting brain is a staging brain—a story-telling brain. It has literally evolved to cue and manipulate its own cognitive systems, to ‘entertain’ alternate environments, laying down priors in the absence of genuine experience to better manage surprise.

Language looms large over all this, of course, as the staging device par excellence. Language allows us to ‘paint a picture,’ or cue various cognitive systems, at any time. Via language, multiple humans can coordinate their behaviours to provide a single solution; they can engage their environments at ever more strategic joints, intervene in ways that reliably generate advantageous outcomes. Via language, environmental comportments can be compared, tested as embedded stages, which is to say, on the biological cheap. And the list goes on. The upshot is that language, like cave paintings, puts human cognition at the disposal of human cognition.

And—here’s the thing—while remaining utterly blind to the structure and dynamics of human cognition.

The reason for this is simple: the biological complexity required to cognize environments is simply too great to be cognized as environmental. We see the ash and pigment smeared across the stone, we experience (the illusion of) horses, and we have no access whatsoever to the machinery in between. Or to phrase it in zombie terms, humans access environmental information, ash and pigment, which cues cognitive comportments to different environmental information, horses, in the absence of any cognitive comportment to this process. In fact, all we see are horses, effortlessly and automatically; it actually requires effort to see the ash and pigment! The activated environment crowds the actual environment from the focus to the fringe. The machinery that makes all this possible doesn’t so much as dimple the margin. We neglect it. And accordingly, what inklings we have strike us as all there is.

The question of signification is as old as philosophy: how the hell do nonexistent horses leap from patterns of light or sound? Until recently, all attempts to answer this question relied on observations regarding environmental cues, the resulting experience, and the environment cued. The sign, the soul, and the signified anchored our every speculative analysis simply because, short baffling instances of neuropathology, the machinery responsible never showed its hand.

Our cognitive comportment to signification, in other words, looked like:

[Image: Southwest Orange-20150421-00452]

Which is to say, a stage.

Because we’re quite literally ‘hardwired’ into this position, we have no way of intuiting the radically impoverished (because specialized) nature of the information made available. We cannot trudge on the perpendicular to see what the stage looks like from different angles—we cannot alter our existing cognitive comportments. Thus, what might be called the semiotic stage strikes us as the environment, or anything but a stage. So profound is the illusion that the typical indicators of informatic insufficiency, the inability to leverage systematically effective behaviour, the inability to command consensus, are habitually overlooked by everyone save the ‘folk’ (ironically enough). Sign, soul, and signified could only take us so far. Despite millennia of philosophical and psychological speculation, despite all the myriad regimentations of syntax and semantics, language remains a mystery. Controversy reigns—which is to say, we as yet lack any decisive scientific account of language.

But then science has only begun the long trudge on the perpendicular. The project of accessing and interpreting the vast amounts of information neglected by the semiotic stage is just getting underway.

Since all the various competing semiotic theories are based on functions posited absent any substantial reference to the information neglected, the temptation is to assume that those functions operate autonomously, somehow ‘supervene’ upon the higher dimensional story coming out of cognitive neuroscience. This has a number of happy dialectical consequences beyond simply proofing domains against cognitive scientific encroachments. Theoretical constraints can even be mapped backward, with the assumption that neuroscience will vindicate semiotic functions, or that semiotic functions actually help clarify neuroscience. Far from accepting any cognitive scientific constraints, semioticians can assert that at least one of their multiple stabs in the dark pierces the mystery of language in the heart, and is thus implicitly presupposed in all communicative acts. Heady stuff.

Semiotics, in other words, would have you believe that either this

[Image: Southwest Orange-20150421-00452]

is New York City as we know it, and will be vindicated by the long cognitive neuroscientific trudge on the perpendicular, or that it’s a special kind of New York City, one possessing no perpendicular to trudge—not unlike, surprise-surprise, assumptions regarding the first-person or intentionality in general.

On this account, the functions posited are sometimes predictive, sometimes not, and even when they are predictive (as opposed to merely philosophical), they are clearly heuristic, low-dimensional ways of tracking extremely complicated systems. As such, there’s no reason to think them inexplicably—magically—‘autonomous,’ and good reason to suppose why it might seem that way. Sign, soul, and signified, the blinkered channels that have traditionally informed our understanding of language, appear inviolable precisely because they are blinkered—since we cognize via those channels, the limits of those channels cannot be cognized: the invisibility of the perpendicular becomes its impossibility.

These are precisely the kinds of errors we should expect speaking animals to make in the infancy of their linguistic self-understanding. You might even say that humans were doomed to run afoul ‘theoretical hyperrealities’ like semiotics, discursive Disney Worlds…

Except that in Disney World, of course, the stages are advertised as stages, not inescapable or fundamental environments. Aside from policy-level stuff, I have no idea how Disney World or the Disney corporation systematically contributes to the subversion of social justice, and neither, I would submit, does any semiotician living. But I do think I know how to fit Disney into a far larger, and far more disturbing set of trends that have seized society more generally. To see this, we have to leave semiotics behind…

More Disney than Disney World: Semiotics as Theoretical Make-believe

by rsbakker

[Image: Southwest Orange-20150415-00408]

I: SORCERERS OF THE MAGIC KINGDOM (a.k.a. THE SEMIOTICIAN)

Ask a humanities scholar their opinion of Disney and they will almost certainly give you some version of Louis Marin’s famous “degenerate utopia.”

And perhaps they should. Far from a harmless amusement park, Disney World is a vast commercial enterprise, one possessing, as all corporations must, a predatory market agenda. Disney also happens to be in the meaning business, selling numerous forms of access to their proprietary content, to their worlds. Disney (much like myself) is in the alternate reality game. Given their commercial imperatives, their alternate realities primarily appeal to children, who, branded at so young an age, continue to fetishize their products well into adulthood. This generational turnover, combined with the acquisition of more and more properties, assures Disney’s growing cultural dominance. And their messaging is obviously, even painfully, ideological, both escapist and socially conservative, designed to systematically neglect all forms of impersonal conflict.

I think we can all agree on this much. But the humanities scholar typically has something more in mind, a proclivity to interpret Disney and its constituents in semiotic terms, as a ‘veil of signs,’ a consciousness constructing apparatus designed to conceal and legitimize existing power inequities. For them, Disney is not simply apologetic as opposed to critical, it also plays the more sinister role of engendering and reinforcing hyperreality, the seamless integration of simulation and reality into disempowering perspectives on the world.

So as Baudrillard claims in Simulacra and Simulations:

The Disneyland imaginary is neither true nor false: it is a deterrence machine set up in order to rejuvenate in reverse the fiction of the real. Whence the debility, the infantile degeneration of this imaginary. It is meant to be an infantile world, in order to make us believe that the adults are elsewhere, in the ‘real’ world, and to conceal the fact that the real childishness is everywhere, particularly among those adults who go there to act the child in order to foster illusions of their real childishness.

Baudrillard sees the lesson as an associative one, a matter of training. The more we lard reality with our representations, Baudrillard believes, the greater the violence done. So for him the great sin of Disneyland lay not so much in reinforcing ideological derangements via simulation as in completing the illusion of an ideologically deranged world. It is the lie within the lie, he would have us believe, that makes the second lie so difficult to see through. The sin here is innocence, the kind of belief that falls out of cognitive incapacity. Why do kids believe in magic? Arguably, because they don’t know any better. By providing adults a venue for their children to believe, Disney has also provided them evidence of their own adulthood. Seeing through Disney’s simulations generates the sense of seeing through all illusions, and therefore, seeing the real.

Disney, in other words, facilitates ‘hyperreality’—a semiotic form of cognitive closure—by rendering consumers blind to their blindness. Disney, on the semiotic account, is an ideological neglect machine. Its primary social function is to provide cognitive anaesthesia to the masses, to keep them as docile and distracted as possible. Let’s call this the ‘Disney function,’ or Df. For humanities scholars, as a rule, Df amounts to the production of hyperreality, the politically pernicious conflation of simulation and reality.

In what follows, I hope to demonstrate what might seem a preposterous figure/field inversion. What I want to argue is that the semiotician has Df all wrong—Disney is actually a far more complicated beast—and that the production of hyperreality, if anything, belongs to his or her own interpretative practice. My claim, in other words, is that the ‘politically pernicious conflation of simulation and reality’ far better describes the social function of semiotics than it does Disney.

Semiotics, I want to suggest, has managed to gull intellectuals into actively alienating the very culture they would reform, leading to the degeneration of social criticism into various forms of moral entertainment, a way for jargon-defined ingroups to transform interpretative expertise into demonstrations of manifest moral superiority. Piety, in effect. Semiotics, the study of signs in life, allows the humanities scholar to sit in judgment not just of books, but of text,* which is to say, the entire world of meaning. It constitutes what might be called an ideological Disney World, only one that, unlike the real Disney World, cannot be distinguished from the real.

I know from experience the kind of incredulity these kinds of claims provoke from the semiotically minded. The illusion, as I know first-hand, is that complete. So let me invoke, for the benefit of those smirking down at these words, the same critical thinking mantra you train into your students, and remind you that all institutions are self-regarding, that all institutions cultivate congratulatory myths, and that the notion of some institution set apart, some specialized cabal possessing practices inoculated against the universal human assumption of moral superiority, is implausible through and through. Or at least worthy of suspicion.

You are almost certainly deluded in some respect. What follows merely illustrates how. Nothing magical protects you from running afoul your cognitive shortcomings the same as the rest of humanity. As such, it really could be the case that you are the more egregious sorcerer, and that your world-view is the real ‘magic kingdom.’ If this idea truly is as preposterous as it feels, then you should have little difficulty understanding it on its own terms, and dismantling it accordingly.


II: INVESTIGATING THE CRIME SCENE

Sign and signified, simulation and simulated, appearance and reality: these dichotomies provide the implicit conceptual keel for all ideologically motivated semiotic readings of culture. This instantly transforms Disney, a global industrial enterprise devoted to the production of alternate realities, into a paradigmatic case. The Walt Disney Corporation, as fairly every child in the world knows, is in the simulation business. Of course, this alone does not make Disney ‘bad.’ As an expert interpreter of signs and simulations, the semiotician has no problem with deviations from reality in general, only those deviations prone to facilitate particular vested interests. This is the sense in which the semiotic project is continuous with the Enlightenment project more generally. It presumes that knowledge sets us free. Semioticians hold that some appearances—typically those canonized as ‘art’—actually provide knowledge of the real, whereas other appearances serve only to obscure the real, and so disempower those who run afoul them.

The sin of the Walt Disney Corporation, then, isn’t that it sells simulations, it’s that it sells disempowering simulations. The problem that Disney poses the semiotician, however, is that it sells simulations as simulations, not simulations as reality. The problem, in other words, is that Disney complicates their foundational dichotomy, and in ways that are not immediately clear.

You see microcosms of this complication everywhere you go in Disney World, especially where construction or any other ‘illusion dispelling’ activities are involved. Sights such as this:

[Image: Southwest Orange-20150415-00412]

where pre-existing views are laminated across tarps meant to conceal some machination that Disney would rather not have you see, struck me as particularly bizarre. Who is being fooled here? My five-year-old even asked why they would bother painting trees rather than planting them. Who knows, I told her. Maybe they were planting trees. Maybe they were building trees such as this:

[Image: Southwest Orange-20150419-00433]

Everywhere you go you stumble across premeditated visual obstructions, or the famous, omnipresent gates labelled ‘CAST MEMBERS ONLY.’ Everywhere you go, in other words, you are confronted with obvious evidence of staging, or what might be called premeditated information environments. As any magician knows, the only way to astound the audience is to meticulously control the information they do and do not have available. So long as absolute control remains technically infeasible, they often fudge, relying on the audience’s desire to be astounded to grease the wheels of their machinations.

One finds Disney’s commitment to the staging credo tacked here and there across the very walls raised to enforce it:

[Image: Southwest Orange-20150422-00458]

Walt Disney was committed to the notion of environmental immersion, with the construction of ‘stages’ that were good enough, given various technical and economic limitations, to kindle wonder in children and generosity in their parents. Almost nobody is fooled outright, least of all the children. But most everyone is fooled enough. And this is the only thing that matters, when any showman tallies their receipts at the end of the day: staging sufficiency, not perfection. The visibility of artifice will be forgiven, even revelled in, so long as the trick manages to carry the day…

No one knows this better than the cartoonist.

The ‘Disney imaginary,’ as Baudrillard calls it, is first and foremost a money making machine. For parents of limited means, the mechanical regularity with which Disney has you reaching for your wallet is proof positive that you are plugged into some kind of vast economic machine. And making money, it turns out, doesn’t require believing, it requires believing enough—which is to say, make-believe. Disney World can revel in its artificiality because artificiality, far from threatening the primary function of the system, actually facilitates it. Children want cartoons; they genuinely prefer low-dimensional distortions of reality over reality. Disney is where cartoons become flesh and blood, where high-dimensional replicas of low-dimensional constructs are staged as the higher-dimensional truth of those constructs. You stand in line to have your picture taken with a phoney Tinkerbell you tell your children is real, all to play this extraordinary game of make-believe with them.

To the extent that make-believe is celebrated, the illusion is celebrated as benign deception. You walk into streets like this:

[Image: Southwest Orange-20150421-00452]

that become this:

[Image: Southwest Orange-20150421-00453]

as you trudge from the perpendicular. The staged nature of the stage is itself staged within the stage as something staged. This is the structure of the Indiana Jones Stunt Spectacular, for instance, where the audience is actually transformed into a performer on a stage staged as a stage (a movie shoot). At every turn, in fact, families are confronted with this continual underdetermination of the boundaries between ‘real’ and ‘not real.’ We watched a cartoon Crush (the surfer turtle from Finding Nemo) do an audience-interaction comedy routine (we nearly pissed ourselves). We had a bug jump out of the screen and spray us with acid (water) beneath that big-ass tree above (we laughed and screamed). We were skunked twice. The list goes on and on.

All these ‘attractions’ both celebrate and exploit the narrative instinct to believe, the willingness to overlook all the discrepancies between the fantastic and the real. No one is drugged and plugged into the Disney Matrix against their will; people pay, people who generally make far less than tenured academics, to play make-believe with their children.

So what are we to make of this peculiar articulation of simulations and realities? What does it tell us about Df?

The semiotic pessimist, like Baudrillard, would say that Disney is subverting your ability to reliably distinguish the real from the not real, rendering you a willing consumer of a fictional reality filled with fictional wars. Umberto Eco, on the other hand, suggests the problem is one of conditioning consumer desire. By celebrating the unreality of the real, Disney is telling “us that faked nature corresponds much more to our daydream demands” (Travels in Hyperreality, 44). Disney, on his account, whets the wrong appetite. For both, Disney is both instrumental to and symptomatic of our ideological captivity.

The optimist, on the other hand, would say they’re illuminating the contingency of the real (a.k.a. the ‘power of imagination’), training the young to never quite believe their eyes. On this view, Disney is both instrumental to and symptomatic of our semantic creativity (even as it ruthlessly polices its own intellectual properties). According to the apocryphal quote often attributed to Walt Disney, “If you can dream it, you can do it.”

This is the interpretative antinomy that hounds all semiotic readings of the ‘Disney function.’ The problem, put simply, is that interpretations falling out of the semiotic focus on sign and signified, simulation and simulated, cannot decisively resolve whether self-conscious simulation a la Disney serves, on balance, more to subvert or to conserve prevailing social inequities.

All such high-altitude interpretation of social phenomena is bound to be underdetermined, of course, simply because the systems involved are far, far too complicated. Ironically, the theorist has to make do with cartoons, which is to say skewed idealizations of the phenomena involved, and simply hope that something of the offending dynamic shines through. But what I would like to suggest is that semiotic cartoons are particularly problematic in this regard, particularly apt to systematically distort the phenomena they claim to explicate, while—quite unlike Disney’s representations—concealing their cartoonishness.

To understand how and why this is the case, we need to consider the kinds of information the ‘semiotic stage’ is prone to neglect…

 

Hugos Weaving

by rsbakker

[Image: Red Skull]

So the whole idea behind Three Pound Brain, way back when, was to open a waystation between ‘incompatible empires,’ to create a forum where ingroup complacencies are called out and challenged, where our native tendency to believe flattering bullshit can be called to account. To this end, I instigated two very different blog wars, one against an extreme ‘right’ figure in the fantasy community, Theodore Beale, another against an extreme ‘left’ figure, Benjanun Sriduangkaew. All along the idea was to expose these individuals, to show, at least for those who cared to follow, how humans were judging machines, prone to rationalize even the most preposterous and odious conceits. Humans are hardwired to run afoul of pious delusion. The science is only becoming more definitive in this regard, I assure you. We are, each and every one of us, walking, talking yardsticks. Unfortunately, we also have a tendency to affix spearheads to our rules, to confuse our sense of exceptionality and entitlement with the depravity and criminality of others—and to make them suffer.

When it comes to moral reasoning, humans are incompetent clowns. And in an age where high-school students are reengineering bacteria for science fairs, this does not bode well for the future. We need to get over ourselves—and now. Blind moral certainty is no longer a luxury our species can afford.

Now we all watch the news. We all appreciate the perils of moral certainty in some sense, the need to be wary of those who believe too hard. We’ve all seen the ‘Mad Fanatic’ get his or her ‘just deserts’ in innumerable different forms. The problem, however, is that the Mad Fanatic is always the other guy, while we merely enjoy the ‘strength of our convictions.’ Short of clinical depression at least, we’re always—magically you might say—the obvious ‘Hero.’

And, of course, this is a crock of shit. In study after study, experiment after experiment, researchers find that, outside special circumstances, moral argumentation and explanation are strategic—with us being none the wiser! (I highly recommend Joshua Greene’s Moral Tribes or Jonathan Haidt’s The Righteous Mind for a roundup of the research). It may feel like divine dispensation, but dollars to donuts it’s nothing more than confabulation. We are programmed to advance our interests as truth; we’d have no need of Judge Judy otherwise!

It is the most obvious invisible thing. But how do you show people this? How do you get humans to see themselves as the moral fool, as the one automatically—one might even say, mechanically—prone to rationalize their own moral interests, unto madness in some cases? The strategy I employ in my fantasy novels is to implicate the reader, to tweak their moral pieties, and then to jam them as best I can. My fantasy novels are all about the perils of moral outrage, the tragedy of willing the suffering of others in the name of some moral verity, and yet I regularly receive hate mail from morally outraged readers who think I deserve to suffer—fear and shame, in most cases, but sometimes death—for having written whatever it is they think I’ve written.

The blog wars were a demonstration of a different sort. The idea, basically, was to show how the fascistic impulse, like fantasy, appeals to a variety of inborn cognitive conceits. Far from a historical anomaly, fascism is an expression of our common humanity. We are all fascists, in our way, allergic to complexity, suspicious of difference, willing to sacrifice strangers on the altar of self-serving abstractions. We all want to master our natural and social environments. Public school is filled with little Hitlers—and so is the web.

And this, I wanted to show, is the rub. Before the web, we either kept our self-aggrandizing, essentializing instincts to ourselves or risked exposing them to the contradiction of our neighbours. Now, search engines ensure that we never need run critical gauntlets absent ready-made rationalizations. Now we can indulge our cognitive shortcomings, endlessly justify our fears and hatreds and resentments. Now we can believe with the grain of our stone-age selves. The argumentative advantage of the fascist is not so different from the narrative advantage of the fantasist: fascism, like fantasy, cues cognitive heuristics that once proved invaluable to our ancestors. To varying degrees, our brains are prone to interpret the world through a fascistic lens. The web dispenses fascistic talking points and canards and ad hominems for free—whatever we need to keep our clown costumes intact, all the while thunderously declaring ourselves angels. Left. Right. It really doesn’t matter. Humans are bigots, prone to strip away complexity and nuance—the very things required to solve modern social problems—to better indulge our sense of moral superiority.

For me, Theodore Beale (aka, Vox Day) and Benjanun Sriduangkaew (aka, acrackedmoon) demonstrated a moral version of the Dunning-Kruger effect, how the bigger the clown, the more inclined they are to think themselves angels. My strategy with Beale was simply to show the buffoonery that lay at the heart of his noxious set of views. And he eventually obliged, explaining why, despite the way his claims epitomize bias, he could nevertheless declare himself the winner of the magical belief lottery:

Oh, I don’t know. Out of nearly 7 billion people, I’m fortunate to be in the top 1% in the planet with regards to health, wealth, looks, brains, athleticism, and nationality. My wife is slender, beautiful, lovable, loyal, fertile, and funny. I meet good people who seem to enjoy my company everywhere I go.

He. Just. Is. Superior.

A king clown, you could say, lucky, by grace of God.

Benjanun Sriduangkaew, on the other hand, posed more of a challenge, since she was, when all was said and done, a troll in addition to a clown. In hindsight, however, I actually regard my blog war with her as the far more successful one simply because she was so successful. My schtick, remember, is to show people how they are the Mad Fanatic in some measure, large or small. Even though Sriduangkaew’s tactics consisted of little more than name-calling, even though her condemnations were based on reading the first six pages of my first book, a very large number of ‘progressive’ individuals were only too happy to join in, and to viscerally demonstrate the way moral outrage cares nothing for reasons or casualties. What’s a false positive when traitors are in our midst? All that mattered was that I was one of them according to so-and-so. I would point out over and over how they were simply making my argument for me, demonstrating how moral groupthink deteriorates into punishing strangers, and feeling self-righteous afterward. I would receive tens of thousands of hits on my posts, and less than a dozen clicks on the links I provided citing the relevant research. It was nothing short of phantasmagorical. I was, in some pathetic, cultural backwoods way, the target of a witch-hunt.

(The only thing I regret is that several of my friends became entangled, some jumping ship out of fear (sending me ‘please relent’ letters), others, like Peter Watts, for the sin of calling the insanity insanity.)

It’s worth noting in passing that some Three Pound Brain regulars actually tried to get Beale and Sriduangkaew together. Beale, after all, actually held the views she so viciously attributed to me, Morgan, and others. He was the real deal—openly racist and misogynistic—and his blog had more followers than all of her targets combined. Sriduangkaew, on the other hand, was about as close to Beale’s man-hating feminist caricature as any feminist could be. But… nothing. Like competing predators on the savannah, they circled on opposite sides of the herd, smelling one another, certainly, but never letting their gaze wander from their true prey. It was as if, despite the wildly divergent content of their views, they recognized they were the same.

So here we stand a couple of years after the fray. Sriduangkaew, as it turns out, was every bit as troubled as she sounded, and caused others far, far more grief than she ever caused me. Beale, on the other hand, has been kind enough to demonstrate yet another one of my points with his recent attempt to suborn the Hugos. Stories of individuals gaming the Hugos are notorious, so in a sense the only thing that makes Beale’s gerrymandering remarkable is the extremity of his views. How? people want to know. How could someone so ridiculously bigoted come to possess any influence in our ‘enlightened’ day and age?

Here we come to the final, and perhaps most problematic moral clown in this sad and comedic tale: the Humanities Academic.

I’m guessing that a good number of you reading this credit some English professor with transforming you into a ‘critical thinker.’ Too bad there’s no such thing. This is what makes the Humanities Academic a particularly pernicious Mad Fanatic: they convince clowns—that is, humans like you and me—that we need not be clowns. They convince cohort after cohort of young, optimistic souls that buying into a different set of flattering conceits amounts to washing the make-up off, thereby transcending the untutored ‘masses’ (or what more honest generations called the rabble). And this is what makes their particular circus act so pernicious: they frame assumptive moral superiority—ingroup elitism—as the result of hard-won openness, and then proceed to judge accordingly.

So consider what Philip Sandifer, “a PhD in English with no small amount of training in postmodernism,” thinks of Beale’s Hugo shenanigans:

To be frank, it means that traditional sci-fi/fantasy fandom does not have any legitimacy right now. Period. A community that can be this effectively controlled by someone who thinks black people are subhuman and who has called for acid attacks on feminists is not one whose awards have any sort of cultural validity. That sort of thing doesn’t happen to functional communities. And the fact that it has just happened to the oldest and most venerable award in the sci-fi/fantasy community makes it unambiguously clear that traditional sci-fi/fantasy fandom is not fit for purpose.

Simply put, this is past the point where phrases like “bad apples” can still be applied. As long as supporters of Theodore Beale hold sufficient influence in traditional fandom to have this sort of impact, traditional fandom is a fatally poisoned well. The fact that a majority of voices in fandom are disgusted by it doesn’t matter. The damage has already been done at the point where the list of nominees is 68% controlled by fascists.

The problem, Sandifer argues, is institutional. Beale’s antics demonstrate that the institution of fandom is all but dead. The implication is that the science fiction and fantasy community ought to be ashamed, that it needs to gird its loins, clean up its act.

Many of you, I’m sure, find Sandifer’s point almost painfully obvious. Perhaps you’re thinking those rumours about Bakker being a closet this or that must be true. I am just another clown, after all. But catch that moral reflex, if you can, because if you give in, you will be unable—as a matter of empirical fact—to consider the issue rationally.

There’s a far less clownish (ingroupish) way to look at this imbroglio.

Let’s say, for a moment, that readership is more important than ‘fandom’ by far. Let’s say, for a moment, that the Hugos are no more or less meaningful than any other ingroup award, just another mechanism that a certain bunch of clowns uses to confer prestige on those members who best exemplify their self-regarding values—a poor man’s Oscars, say.

And let’s suppose that the real problem facing the arts community lies in the impact of technology on cultural and political groupishness, on the way the internet and preference-parsing algorithms continue to ratchet buyers and sellers into ever more intricately tuned relationships. Let’s suppose, just for instance, that so-called literary works no longer reach dissenting audiences, and so only serve to reinforce the values of readers…

That precious few of us are being challenged anymore—at least not by writing.

The communicative habitat of the human being is changing more radically than at any time in history, period. The old modes of literary dissemination are dead or dying, and with them all the simplistic assumptions of our literary past. If writing that matters is writing that challenges, the writing that matters most has to be writing that avoids the ‘preference funnel,’ writing that falls into the hands of those who can be outraged. The only writing that matters, in other words, is writing that manages to span significant ingroup boundaries.

If this is the case, then Beale has merely shown us that science fiction and fantasy actually matter, that as a writer, your voice can still reach people who can (and likely will) be offended… as well as swayed, unsettled, or any of the things Humanities clowns claim writing should do.

Think about it. Why bother writing stories with progressive values for progressives only, unless, that is, moral entertainment is largely what you’re interested in? You gotta admit, this is pretty much the sum of what passes for ‘literary’ nowadays.

Everyone’s crooked is someone else’s straight—that’s the dilemma. Since all moral interpretations are fundamentally underdetermined, there is no rational or evidential means to compel moral consensus. Pretty much anything can be argued when it comes to questions of value. There will always be Beales and Sriduangkaews, individuals adept at rationalizing our bigotries—always. And guess what? The internet has made them as accessible as fucking Wal-Mart. This is what makes engaging them so important. Of course Beale needs to be exposed—but not for the benefit of people who already despise his values. Such ‘exposure’ amounts to nothing more than clapping one another on the back. He needs to be exposed in the eyes of his own constituents, actual or potential. The fact that the paths leading to bigotry run downhill makes the project of building stairs all the more crucial.

‘Legitimacy,’ Sandifer says. Legitimacy for whom? For the likeminded—who else? But that, my well-educated friend, is the sound-proofed legitimacy of the Booker, or the National Book Awards—which is to say, the legitimacy of the irrelevant, the socially inert. The last thing this accelerating world needs is more ingroup ejaculate. The fact that Beale managed to pull this little coup is proof positive that science fiction and fantasy matter, that we dwell in a rare corner of culture where the battle of ideas is for… fucking… real.

And you feel ashamed.

Text as Teeter-Totter

by rsbakker

Neuropath will always occupy a special yet prickly place in my psyche. The book is special to me because of its genesis, first and foremost, arising as it did out of what (I can now see) was a truly exceptional experience teaching Popular Culture and a bet with my incredulous wife. But it’s also special because of the kind of critical reception it’s since received: I’ve actually come across reviews warning people to take Thomas Metzinger’s blurb, “You should think twice before reading this!” seriously. I was aiming for something that balanced the visceral on a philosophical edge, so I was overjoyed by these kinds of visceral responses. But I was troubled that no one seemed to be grasping the philosophical beyond the visceral, seeing the implications considered in the book beyond what was merely personal. Then, several years back, someone sent me this link to Steven Shaviro’s penetrating and erudite review of Neuropath. And I can remember feeling as though some kind of essential circuit between author, book, and critic had been closed.

That the book had truly been completed.

Now I’m genuinely honoured to have the opportunity to once again complete that circuit in the flesh at Western University in a couple of weeks’ time. Steven Shaviro has spent his career jamming cutting-edge speculative fiction and speculative theory together in his skull, a semantic Large Hadron Collider, and publishing the resulting Feynman diagrams on the ground-breaking The Pinocchio Theory as well as in his numerous scholarly works. He will be presenting on Neuropath, and I will be responding, at a public lecture on Thursday, February 13th, at 4:30 PM, in the North Campus Building, Rm 117. All are welcome.

“Reinstalling Eden”

by rsbakker

So several months back I was going through my daily blog roll and I noticed that Eric Schwitzgebel, a well-known skeptic and philosopher of mind, had posted a small fictional piece on The Splintered Mind dealing with the morality of creating artificial consciousnesses. Forget 3D printing: what happens when we develop the power to create ‘secondary worlds’ filled with sentient and sapient entities, our own DIY Matrices, in effect? I’m not sure why, but for some reason, a sequel to his story simply leapt into my head. Within 20 minutes or so I had broken one of the more serious of the Ten Blog Commandments: ‘Thou shalt not comment to excess.’ But, as is usually the case, my exhibitionism got the best of me, and I posted it, entirely prepared to apologize for my breach of e-etiquette if need be. As it turned out, Eric loved the piece, so much so that he emailed me suggesting that we rewrite both parts for possible publication. Since looooonnng-form fiction is my area of expertise, I contacted a friend of mine, Karl Schroeder, asking him what kind of venue would be appropriate, and he suggested we pitch Nature – [cue heavenly choir] – which has a coveted page dedicated to short pieces of speculative fiction.

And lo, it came to pass – largely thanks to Eric, who looked after all the technical details, and who was able to cajole the subpersonal herd of cats I call my soul into actually completing something short for a change. The piece can be found here. And be warned that, henceforth, anyone who trips me up on some point of reason will be met with, “Oh yeeeah. Like, I’m published in Nature, maan.”

‘Cause as we all know, Nature rocks.