Three Pound Brain

No bells, just whistling in the dark…

Category: ATROCITY TALES

Lollipop World

by rsbakker

What are the odds that I would finish writing a near-future viral thriller (The Lollipop Factory) just as 2019-nCoV was becoming entrenched in Wuhan? So, as it turned out, the first facts I wanted to know when I caught wind of the outbreak were things like the resource requirements for treatment, the average transmission rate per person, and whether transmission was asymptomatic. The sudden rush to build new hospitals answered the first question: 2019-nCoV was a resource intensive disease. As it turns out, some 18% of those with verified cases require intensive care. This fact became mind-boggling as more and more estimates of the transmission rate bubbled to the surface of the web: on average, investigators think 2019-nCoV is around twice as contagious as the seasonal flu. And if this weren’t bad enough, we now have solid evidence of asymptomatic transmission: as relieved as I was to learn that infected children weren’t getting sick, I understood the kind of epidemiological nightmare this represented. How do you contain a disease you can’t see?

So what’s the upshot?

2019-nCoV is more difficult to contain than the seasonal flu, and so, likely beyond containment short of severe and sustained (i.e., economic-activity-killing) restrictions on face-to-face interaction. Either way, we are likely at the beginning of the wildfire season, not the middle, nor the end.

The lethality of 2019-nCoV will be a function of the resources available to treat critical cases. If this reaches influenza pandemic proportions, then 2019-nCoV will likely be more, not less, deadly than SARS (which killed, given the resources available at the time, around 10% of those infected).

Personally, given the way Chinese authorities bungled the outbreak at the start, and given the alarming tendency of the WHO and CDC to communicate only the most optimistic appraisals of 2019-nCoV, I think this will be the biggest thing to hit humanity since World War II.

For the critically minded, most of the estimates referenced above can be found aggregated here. There are countless caveats, of course, including the mutability of 2019-nCoV itself, which could, like SARS, become less lethal over time. But don’t be lulled by calm-at-all-costs bureaucrats or the nothing-to-see-here Wall Street Bulls: the stakes may not be apocalyptic, but they are civilizational.

POSTSCRIPT (2/11/2020)

The Director-General of the WHO, in addition to revealing the official name of the disease, COVID-19, has finally called on nations to treat it as “public enemy number one.” Markets reach record highs. Babies are kissed in New Hampshire.

POSTSCRIPT (2/12/2020)

Much of the latest cutting-edge research can be found here, expertly summarized no less. Ontario learned some hard lessons during SARS. Let’s all hope the good guys can delay this until spring. Maybe Africa, India, and Indonesia have no cases because they’re simply too warm for SARS-CoV-2 (the virus’s official official name). Better to suffer the nightmare next winter, once we’ve prepared.

Killing Bartleby (Before It’s Too Late)

by rsbakker

Why did I not die at birth,

come forth from the womb and expire?

Why did the knees receive me?

Or why the breasts, that I should suck?

For then I should have lain down and been quiet;

I should have slept; then I should have been at rest,

with kings and counselors of the earth

who rebuilt ruins for themselves…

—Job 3:11-14 (RSV)

 

“Bartleby, the Scrivener: A Story of Wall-Street”: I made the mistake of rereading this little gem a few weeks back. Section I, below, retells the story with an eye to heuristic neglect. Section II leverages this retelling into a critique of readings, like those belonging to the philosophers Gilles Deleuze and Slavoj Zizek, that fall into the narrator’s trap of exceptionalizing Bartleby. If you happen to know anyone interested in Bartleby criticism, by all means encourage them to defend their ‘doctrine of assumptions.’

 

I

The story begins with the unnamed narrator identifying two ignorances, one social and the other personal. The first involves Bartleby’s profession, that “somewhat singular set of men, of whom as yet nothing that I know of has ever been written.” Human scriveners, like human computers, hail from a time when social complexities demanded the undertaking of mechanical cognitive labours, the discharge of tasks too procedural to rest easy in the human soul. Copies are all the ‘system’ requires of them, pure documentary repetition. It isn’t so much that their individuality does not matter, but that it matters too much, perturbing (‘blotting’) the function of the whole. So far as social machinery is legal machinery, you could say law-copyists belong to the neglected innards of mid-19th century society. Bartleby belongs to what might be called the caste of the most invisible men.

What makes him worthy of literary visibility turns on a second manifestation of ignorance, this one belonging to the narrator. “What my own astonished eyes saw of Bartleby,” he tells us, “that is all I know of him, except, indeed, one vague report which will appear in the sequel.” And even though the narrator thinks this interpersonal inscrutability constitutes “an irreparable loss to literature,” it turns out to be the very fact upon which the literary obsession with “Bartleby, the Scrivener” hangs. Bartleby is so visible because he is the most hidden of the hidden men.

Since comprehending the dimensions of a black box buried within a black box is impossible, the narrator has no choice but to illuminate the latter, to provide an accounting of Bartleby’s ecology: “Ere introducing the scrivener, as he first appeared to me, it is fit I make some mention of myself, my employees, my business, my chambers, and general surroundings; because some such description is indispensable to an adequate understanding of the chief character about to be presented.” In a sense, Bartleby is nothing apart from his ultimately profound impact on this ecology, such is his mystery.

Aside from inklings of pettiness, the narrator’s primary attribute, we learn, is also invisibility, the degree to which he disappears into his social syntactic role. “I am one of those unambitious lawyers who never addresses a jury, or in any way draws down public applause; but in the cool tranquility of a snug retreat, do a snug business among rich men’s bonds and mortgages and title-deeds,” he tells us. “All who know me, consider me an eminently safe man.” He is, in other words, the part that does not break down, and so, like Heidegger’s famed hammer, never becomes something present to hand, an object of investigation in his own right.

His description of his two existing scriveners demonstrates that his ‘safety’ is to some extent rhetorical, consisting in his ability to explain away inconsistencies, real or imagined. Between Turkey’s afternoon drunkenness and Nippers’ foul morning temperament, you could say his office is perpetually compromised, but the narrator chooses to characterize it otherwise, in terms of each man mechanically cancelling out the incompetence of the other. “Their fits relieved each other like guards,” the narrator informs us, resulting in “a good natural arrangement under the circumstances.”

He depicts what might be called an economy of procedural and interpersonal reflexes, a deterministic ecology consisting of strictly legal or syntactic demands, all turning on the irrelevance of the discharging individual, the absence of ‘blots,’ and a stochastic ecology of sometimes conflicting personalities. Not only does he instinctively understand the insoluble nature of the latter, he also understands the importance of apology, the power of language to square those circles that refuse to be squared. When he comes “within an ace” of firing Turkey, the drunken scrivener need only bow and say what amounts to nothing to mollify his employer. As with bonds and mortgages and title-deeds, the content does not so much matter as does the syntax, the discharge of social procedure. Everyone in his office “up stairs at No.—Wall-street” is a misfit, and the narrator is a compulsive ‘fitter,’ forever searching for ways to rationalize, mythologize, and so normalize, the idiosyncrasies of his interpersonal circumstances.

And of course, he and his fellows are entombed by the walls of Wall Street, enjoying ‘unobstructed views’ of obstructions. Theirs is a subterranean ecology, every bit as “deficient in what landscape painters call ‘life’” as the labour that consumes them.

Enter Bartleby. “After a few words touching his qualifications,” the narrator informs us, “I engaged him, glad to have among my corps of copyists a man of so singularly sedate an aspect, which I thought might operate beneficially upon the flighty temper of Turkey, and the fiery one of Nippers.” Absent any superficial sign of idiosyncrasy, he seems the perfect ecological fit. The narrator gives the man a desk behind a screen in his own office, a corner possessing a window upon obstruction.

After three days, he calls out to Bartleby to examine the accuracy of a document, reflexively assuming the man would discharge the task without delay, only to hear Bartleby, obscure behind his green screen, say the fateful words that would confound, not only our narrator, but countless readers and critics for generations to come: “I would prefer not to.” The narrator is gobsmacked:

“I sat awhile in perfect silence, rallying my stunned faculties. Immediately it occurred to me that my ears had deceived me, or Bartleby had entirely misunderstood my meaning. I repeated my request in the clearest tone I could assume. But in quite as clear a one came the previous reply, “I would prefer not to.””

Given the “natural expectancy of instant compliance,” the narrator assumes the breakdown is communicative. When he realizes this isn’t the case, he confronts Bartleby directly, to the same effect:

“Not a wrinkle of agitation rippled him. Had there been the least uneasiness, anger, impatience or impertinence in his manner; in other words, had there been any thing ordinarily human about him, doubtless I should have violently dismissed him from the premises. But as it was, I should have as soon thought of turning my pale plaster-of-paris bust of Cicero out of doors.”

Realizing that he has been comprehended, the narrator assumes willful defiance, that Bartleby seeks to provoke him, and that, accordingly, the man will present the cues belonging to interpersonal power struggles more generally. When Bartleby manifests none of these signs, the hapless narrator lacks the social script he requires to solve the problem. Turning out the scrivener becomes as unthinkable as surrendering his bust of Cicero, which is to say, the very emblem of his legal vocation.

The next time Bartleby refuses to read, the narrator demands an explanation, asking, “Why do you refuse?” To which Bartleby replies, once again, “I would prefer not to.” When the narrator presses, resolved “to reason with him,” he realizes that dysrationalia is not the problem: “It seemed to me that while I had been addressing him, he carefully revolved every statement that I made; fully comprehended the meaning; could not gainsay the irresistible conclusions; but, at the same time, some paramount consideration prevailed with him to reply as he did.”

If Bartleby were non compos mentis, then he could be ‘medicalized,’ reduced to something the narrator would find intelligible—something providing some script for action. Instead, the scrivener understands, or manifests as much, leaving the narrator groping for evidence of his own rationality:

“It is not seldom the case that when a man is browbeaten in some unprecedented and violently unreasonable way, he begins to stagger in his own plainest faith. He begins, as it were, vaguely to surmise that, wonderful as it may be, all the justice and all the reason is on the other side. Accordingly, if any disinterested persons are present, he turns to them for some reinforcement for his own faltering mind.”

For a claim to be rational it must be rational to everyone. Each of us is stranded with our own perspective, and each of us possesses only the dimmest perspective on that perspective: rationality is something we can only assume. This is why ‘truth’ (especially in ‘normative’ matters (politics)) so often amounts to a ‘numbers game,’ a matter of tallying up guesses. Our blindness to our cognitive orientation—medial neglect—combined with the generativity of the human brain and the capriciousness of our environments, requires the communicative policing of cognitive idiosyncrasies. Whatever rationality consists in, minimally it functions to minimize discrepancies between individuals, sometimes vis a vis their environments and sometimes not. Reason, like the narrator, makes things fit.

The ‘disinterested persons’ the narrator turns to are themselves misfits, with “Nippers’ ugly mood on duty and Turkey’s off.” The irony here, and what critics are prone to find most interesting, is that the three are anything but disinterested. The more thought-provoking fact, however, lies in the way they agree with their employer despite the wild variance of their answers. For all the idiosyncrasies of its constituents, the office ecology automatically manages to conserve its ‘paramount consideration’: functionality.

Baffled unto inaction, the narrator suffers bouts of explaining away Bartleby’s discrepancies in terms of his material and moral utilities. The fact of his indulgences alternately congratulates and exasperates him: Bartleby becomes (and remains) a bi-stable sociocognitive figure, alternately aggressor and victim. “Nothing so aggravates an earnest person as a passive resistance,” the narrator explains. “If the individual so resisted be of a not inhumane temper, and the resisting one perfectly harmless in his passivity; then, in the better moods of the former, he will endeavor charitably to construe to his imagination what proves impossible to be solved by his judgment.” To be earnest is to be prone to minimize social discrepancies, to optimize via the integrations of others. The passivity of “I would prefer not to” poises Bartleby upon a predictive-processing threshold, one where the vicissitudes of mood are enough to transform him from a ‘penniless wight’ into a ‘brooding Marius’ and back again. The signals driving the charitable assessment are constantly interfering with the signals driving the uncharitable assessment, forcing the different neural hypotheses to alternate.

Via this dissonance, the scrivener begins to train him, with each “I would prefer not to” tending “to lessen the probability of [his] repeating the inadvertence.”

The ensuing narrative establishes two facts. First, we discover that Bartleby belongs to the office ecology, and in a manner more profound than even the narrator, let alone any one of his employees. Discovering Bartleby indisposed in his office on a Sunday, the narrator finds himself fleeing his own premises, alternately lost in “sad fancyings—chimeras, doubtless, of a sick and silly brain” and “[p]resentiments of strange discoveries”—strung between delusion and revelation.

Second, we learn that Bartleby, despite belonging to the office ecology, nevertheless signals its ruination:

“Somehow, of late I had got into the way of involuntarily using this word “prefer” upon all sorts of not exactly suitable occasions. And I trembled to think that my contact with the scrivener had already and seriously affected me in a mental way. And what further and deeper aberration might it not yet produce?”

When the narrator catches Turkey also saying “prefer,” he says, “So you have got the word too,” as if a verbal tic could be caught like a cold. Turkey manifests cryptomnesia. Nippers does the same not moments afterward—every bit as unconsciously as Turkey. Knowing nothing of the way humans have evolved to unconsciously copy linguistic behaviour, the narrator construes Bartleby as a kind of contagion—or pollutant, a threat to his delicately balanced office ecology. He once again determines he must rid his office of the scrivener’s insidious influence, but, under that influence, once again allows prudence—or the appearance of such—to dissuade immediate action.

Bartleby at last refuses to copy, irrevocably undoing the foundation of the narrator’s ersatz rationalizations. “And what is the reason?” the narrator demands to know. Staring at the brick wall just beyond his window, Bartleby finally offers a different explanation: “Do you not see the reason for yourself.” Though syntactically structured as a question, this statement possesses no question mark in Melville’s original version (as it does, for instance, in the version anthologized by Norton). And indeed, the narrator misses the very reason implied by his own narrative—the wall that occupied so many of Bartleby’s reveries—and confabulates an apology instead: work-induced ‘impaired vision.’

But this rationalization, like all the others, is quickly exhausted. The internal logic of the office ecology is entirely dependent on the logic of Wall-street: the text continually references the functional exigencies commanding the ebb and flow of their lives, the way “necessities connected with my business tyrannized over all other considerations.” The narrator, when all is said and done, is an instrument of the Law and the countless institutions dependent upon it. At long last he fires Bartleby rather than merely resolving to do so.

He celebrates his long-deferred decisiveness while walking home, only to once again confront the blank wall the scrivener has become:

“My procedure seemed as sagacious as ever—but only in theory. How it would prove in practice—there was the rub. It was truly a beautiful thought to have assumed Bartleby’s departure; but, after all, that assumption was simply my own, and none of Bartleby’s. The great point was, not whether I had assumed that he would quit me, but whether he would prefer so to do. He was more a man of preferences than assumptions.”

And so, the great philosophical debate, both within the text and its critical reception, is set into motion. Lost in rumination, the narrator overhears someone say, “I’ll take odds he doesn’t,” on the street, and angrily retorts, assuming the man was referring to Bartleby, and not, as was actually the case, an upcoming election. Bartleby’s ‘passive resistance’ has so transformed his cognitive ecology as to crash his ability to make sense of his fellow man. Meaning, at least so far as it exists in his small pocket of the world, has lost its traditional stability.

Of course, the stranger’s voice, though speaking of a different matter altogether, had spoken true. Bartleby prefers not to leave the office that has become his home.

“What was to be done? or, if nothing could be done, was there any thing further that I could assume in the matter? Yes, as before I had prospectively assumed that Bartleby would depart, so now I might retrospectively assume that departed he was. In the legitimate carrying out of this assumption, I might enter my office in a great hurry, and pretending not to see Bartleby at all, walk straight against him as if he were air. Such a proceeding would in a singular degree have the appearance of a home-thrust. It was hardly possible that Bartleby could withstand such an application of the doctrine of assumptions.”

The ‘home-thrust,’ in other words, is to simply pretend, to physically enact the assumption of Bartleby’s absence, to not only ignore him, but to neglect him altogether, to the point of walking through him if need be. “But upon second thoughts the success of the plan seemed rather dubious,” the narrator realizes. “I resolved to argue the matter over with him again,” even though argument, Sellars’ famed ‘game of giving and asking for reasons,’ is something Bartleby prefers not to recognize.

When the application of reason fails once again, the narrator at last entertains the thought of killing Bartleby, realizing “the circumstance of being alone in a solitary office, up stairs, of a building entirely unhallowed by humanizing domestic associations” is one tailor-made for the commission of murder. Even acts of evil have their ecological preconditions. But rather than seize Bartleby, he ‘grapples and throws’ the murderous temptation, recalling the Christian injunction to love his neighbour. As research suggests, imagination correlates with indecision, the ability to entertain (theorize) possible outcomes: the narrator is nothing if not an inspired social confabulator. For every action-demanding malignancy he ponders, his aversion to confrontation occasions another reason for exemption, which is all he needs to reduce the discrepancies posed.

He resigns himself to the man:

“Gradually I slid into the persuasion that these troubles of mine touching the scrivener, had been all predestinated from eternity, and Bartleby was billeted upon me for some mysterious purpose of an all-wise Providence, which it was not for a mere mortal like me to fathom. Yes, Bartleby, stay there behind your screen, thought I; I shall persecute you no more; you are harmless and noiseless as any of these old chairs; in short, I never feel so private as when I know you are here. At last I see it, I feel it; I penetrate to the predestinated purpose of my life. I am content. Others may have loftier parts to enact; but my mission in this world, Bartleby, is to furnish you with office-room for such period as you may see fit to remain.”

But this story, for all its grandiosity, likewise melts before the recalcitrant scrivener. The comical notion that furnishing Bartleby an office could have cosmic significance merely furnishes a means of ignoring what cannot be ignored: how the man compromises, in ways crude and subtle, the systems of assumptions, the network of rational reflexes, comprising the ecology of Wall-street. In other words, the narrator’s clients are noticing…

“Then something severe, something unusual must be done. What! surely you will not have him collared by a constable, and commit his innocent pallor to the common jail? And upon what ground could you procure such a thing to be done?—a vagrant, is he? What! he a vagrant, a wanderer, who refuses to budge? It is because he will not be a vagrant, then, that you seek to count him as a vagrant. That is too absurd. No visible means of support: there I have him. Wrong again: for indubitably he does support himself, and that is the only unanswerable proof that any man can show of his possessing the means so to do.”

At last invisibility must be sacrificed, and regularity undone. The narrator ratchets through the facts of the scrivener’s cognitive bi-stability. An innocent criminal. An immovable vagrant. Unsupported yet standing. Reason itself cracks about him. And what reason cannot touch only fight or flight can undo. If the ecology cannot survive Bartleby, and Bartleby is immovable, then the ecology must be torn down and reestablished elsewhere.

It’s tempting to read this story in ‘buddy terms,’ to think that the peculiarities of Bartleby only possess the power they do given the peculiarities of the narrator. (One of the interesting things about the yarn is the way it both congratulates and insults the neuroticism of the critic, who, having canonized Bartleby, cannot but flatter themselves both by thinking they would have endured Bartleby the way the narrator does, and by thinking that surely they wouldn’t be so disabled by the man). The narrator’s decision to relocate allows us to see the universality of his type, how others possessing far less history with the scrivener are themselves driven to apologize, to exhaust all ‘quiet’ means of minimizing discrepancies. “[S]ome fears are entertained of a mob,” his old landlord warns him, desperate to purge the scrivener from No.—Wall-street.

Threatened with exposure in the papers—visibility—the narrator once again confronts Bartleby the scrivener. This time he comes bearing possibilities of gainful employment, greener pastures, some earnest, some sarcastic, only to be told, “I would prefer not to,” with the addition of, “I am not particular.” And indeed, as Bartleby’s preference severs ever more ecological connections, he seems to become ever more super-ecological, something outside the human communicative habitat. Repulsed yet again, the narrator flees Wall-street altogether.

Bartleby, meanwhile, is imprisoned in the Tombs, the name given to the House of Detention in lower Manhattan. A walled street is replaced by a walled yard—which, the narrator will tell Bartleby, “is not so sad a place as one might think,” the irony being, of course, that with sky and grass the Tombs actually represent an improvement over Wall-street. Bartleby, for his part, only has eyes for the walls—his unobstructed view of obstruction. To assure his former scrivener is well fed, the narrator engages the prison cook, who asks him whether Bartleby is a forger, likening the man to Monroe Edwards, a famed slave trader and counterfeiter in Melville’s day. Despite the criminal connotations of Nippers, the narrator assures the man he was “never socially acquainted with any forgers.”

On his next visit, he discovers that Bartleby’s metaphoric ‘dead wall reveries’ have become literal. The narrator finds him “huddled at the base of the wall, his knees drawn up, and lying on his side, his head touching the cold stones,” dead of starvation. Cutting the last, most fundamental ecological reflex of all—the consumption of food—Bartleby has finally touched the face of obstruction… oblivion.

The story proper ends with one last misinterpretation: the cook assuming that Bartleby sleeps. And even here, at this final juncture, the narrator apologizes rather than corrects, quoting Job 3:14, using the Holy Bible, perhaps, to “mason up his remains in the wall.” Melville, however, seems to be gesturing to the fundamental problem underwriting the whole of his tale, the problem of meaning, quoting a fragment of Job in extremis, asking God why he should have been born at all, if his lot was only desolation. What meaning resides in such a life? Why not die an innocent?

Like Bartleby.

What the narrator terms the “sequel” consists of no more than two paragraphs (set apart by a ‘wall’ of eight asterisks), the first divulging “one little item of rumor” which may or may not be more or less true, the second famously consisting in, “Ah Bartleby! Ah humanity!” The rumour occasioning these apostrophic cries suggests “that Bartleby had been a subordinate clerk in the Dead Letter Office at Washington, from which he had been suddenly removed by a change of administration.”

What moves the narrator to passions too complicated to scrutinize is nothing other than the ecology of such a prospect: “Conceive a man by nature and misfortune prone to a pallid hopelessness, can any business seem more fitted to heighten it than that of continually handling these dead letters, and assorting them for the flames?” Here at last, he thinks, we find some glimpse of the scrivener’s original habitat: dead letters potentially fund the reason the man forever pondered dead walls. Rather than a forger, one who cheats systems, Bartleby is an undertaker, one who presides over their crashing. The narrator paints his final rationalization, Bartleby mediating an ecology of fatal communicative interruptions:

“Sometimes from out the folded paper the pale clerk takes a ring:—the finger it was meant for, perhaps, moulders in the grave; a bank-note sent in swiftest charity:—he whom it would relieve, nor eats nor hungers any more; pardon for those who died despairing; hope for those who died unhoping; good tidings for those who died stifled by unrelieved calamities. On errands of life, these letters speed to death.”

An ecology, in other words, consisting of quotidian ecological failures, life lost for the interruption of some crucial material connection, be it ink or gold. Thus are Bartleby and humanity entangled in the failures falling out of neglect, the idiosyncratic, the addresses improperly copied, and the ill-timed, the words addressed to those already dead. A meta-ecology where discrepancies can never be healed, only consigned to oblivion.

But, of course, were Bartleby still living, this ‘sad fancying’ would likewise turn out to be a ‘chimera of a sick and silly brain.’ Just another way to brick over the questions. If the narrator finds consolation, the wreckage of his story remains.

 

II

I admit that I feel more like Ahab than Ishmael… most of the time. But I’m not so much obsessed by the White Whale as by what is obliterated when it’s revealed as yet another mere cetacean. Be it the wrecking of The Pequod, or the flight of the office at No.— Wall-street, the problem of meaning is my White Whale. “Bartleby, the Scrivener” is compelling, I think, to the degree it lends that problem the dimensionality of narrative.

Where in Moby-Dick, the relation between the inscrutable and the human is presented via Ishmael, which is to say the third person, in Bartleby, the relation is presented in the second: the narrator is Ahab, every bit as obsessed with his own pale emblem of unaccountable discrepancy—every bit as maddened. The violence is merely sublimated in quotidian discursivity.

The labour of Ishmael falls to the critic. “Life is so short, and so ridiculous and irrational (from a certain point of view),” Melville writes to John C. Hoadley in 1877, “that one knows not what to make of it, unless—well, finish the sentence for yourself.” A great many critics have, spawning what Dan McCall termed (some time ago now) the ‘Bartleby Industry.’ There are so many interpretations, in fact, that the only determinate thing one can say regarding the text is that it systematically underdetermines every attempt to determine its ‘meaning.’

In the ecology of literary and philosophical critique, Bartleby remains a crucial watering hole in an ever-shrinking reservation of the humanities. A great number of these interpretations share the narrator’s founding assumption, that Bartleby—the character—represents something exceptional. Consider, for instance, Deleuze in “Bartleby; or, the Formula.”

“If Bartleby had refused, he could still be seen as a rebel or insurrectionary, and as such would still have a social role. But the formula stymies all speech acts, and at the same time, it makes Bartleby a pure outsider [exclu] to whom no social position can be attributed. This is what the attorney glimpses with dread: all his hopes of bringing Bartleby back to reason are dashed because they rest on a logic of presuppositions according to which an employer ‘expects’ to be obeyed, or a kind of friend listened to, whereas Bartleby has invented a new logic, a logic of preference, which is enough to undermine the presuppositions of language as a whole.” 73

Or consider Zizek, who uses Bartleby to conclude The Parallax View no less:

“In his refusal of the Master’s order, Bartleby does not negate the predicate; rather, he affirms a nonpredicate: he does not say that he doesn’t want to do it; he says that he prefers (wants) not to do it. This is how we pass from the politics of “resistance” or “protestation,” which parasitizes upon what it negates, to a politics which opens up a new space outside the hegemonic position and its negation.” 380-1

Bartleby begets ‘Bartleby politics,’ the possibility of a relation to what stands outside relationality, a “move from something to nothing, from the gap between two ‘somethings’ to the gap that separates a something from nothing, from the void of its own place” (381). Bartleby isn’t simply an outsider on this account, he’s a pure outsider, more limit than liminal. And this, of course, is the very assumption that the narrator himself carries away intact: that Bartleby constitutes something ontologically or logically exceptional.

I no longer share this assumption. Like Borges in his “Prologue to Herman Melville’s ‘Bartleby,’” I see that “the symbol of the whale is less apt for suggesting the universe is vicious than for suggesting its vastness, its inhumanity, its bestial or enigmatic stupidity.” Melville, for all the wide-eyed grandiloquence of his prose, was a squinty-eyed skeptic. “These men are all cracked right across the brow,” he would write of philosophers such as Emerson. “And never will the pullers-down be able to cope with the builders-up.” For him, the interest always lies in the distances between lofty discourse and the bloody mundanities it purports to solve. As he writes to Hawthorne in 1851:

“And perhaps after all, there is no secret. We incline to think that the Problem of the Universe is like the Freemason’s mighty secret, so terrible to all children. It turns out, at last, to consist in a triangle, a mallet, and an apron—nothing more! We incline to think that God cannot explain His own secrets, and that He would like a little more information upon certain points Himself. We mortals astonish Him as much as He us.”

It’s an all too human reflex. Ignorance becomes justification for the stories we want to tell, and we are filled with “oracular gibberish” as a result.

So what if Bartleby holds no secrets outside the ‘contagion of nihilism’ that Borges ascribes to him?

As a novelist, I cannot but read the tale, with its manifest despair and gallows humour, as the expression of another novelist teetering on the edge of professional ruin. Melville conceived and wrote “Bartleby, the Scrivener” during a dark period of his life. Both Moby-Dick and Pierre had proved to be critical and commercial failures. As Melville would write to Hawthorne:

“What I feel most moved to write, that is banned—it will not pay. Yet, altogether write the other way I cannot. So the product is a final hash, and all my books are botches.”

Forgeries, neither artistic nor official. Two species of neuroticism plague full-time writers, particularly if they possess, as Melville most certainly did, a reflective bent. There’s the neuroticism that drives a writer to write, the compulsion to create, and there’s the neuroticism secondary to a writer’s consciousness of this prior incapacity, the neurotic compulsion to rationalize one’s neuroticism.

Why, for instance, am I writing this now? Am I a literary critic? No. Am I being paid to write this? No. Are there things I should be writing instead? Buddy, you have no idea. So why don’t I write as I should?

Well, quite simply, I would prefer not to.

And why is this? Is it because I have some glorious spark in me? Some essential secret? Am I, like Bartleby, a pure outsider?

Or am I just a fucking idiot? A failed copyist.

For critics, the latter is pretty much the only answer possible when it comes to living writers who genuinely fail to copy. No matter how hard we wave discrepancy’s flag, we remain discrepancy minimization machines—particularly where social cognition is concerned. Living literary dissenters cue reflexes devoted to living threats: the only good discrepancy is a dead discrepancy. As the narrator discovers, attributing something exceptional becomes far easier once the dissenter is dead. Once the source falls silent, the consequences possess the freedom to dispute things as they please.

Writers themselves, however, discover they are divided, that Ahab is not Ahab, but Ishmael as well, the spinner of tales about tales. A failed copyist. A hapless lawyer. Gazing at obstruction, chasing the whale, spinning rationalization after rationalization, confabulating as a human must, taking meagre heart in spasms of critical fantasy.

Endless interpretative self-deception. As much as I recognize Bartleby, I know the narrator only too well. This is why for me, “Bartleby, the Scrivener” is best seen as a prank on the literary establishment, a virus uploaded with each and every Introduction to American Literature class, one assuring that the critic forever bumbles as the narrator bumbles, waddling the easy way, the expected way, embodying more than applying the ‘doctrine of assumptions.’ Bartleby is the paradigmatic idiot, both in the ancient Greek sense of idios, private unto inscrutable, and idiosyncratic unto useless. But for the sake of vanity and cowardice, we make of him something vast, more than a metaphor for x. The character of Bartleby, on this reading, is not so much key to understanding something ‘absolute’ as he is key to understanding human conceit—which is to say, the confabulatory stupidity of the critic.

But explaining the prank, of course, amounts to falling for the prank (this is the key to its power). No matter how mundane one’s interpretation of Bartleby, as an authorial double, as a literary prank, it remains simply one more interpretation, further evidence of the narrative’s profound indeterminacy. ‘Negative exceptionalists’ like Deleuze or Zizek (or Agamben) need only point out this fact to rescue their case—don’t they? Even if Melville conceived Bartleby as his neurotic alter-ego, the word-crazed husband whose unaccountable preferences had reduced his family to penury (and so, charity), he nonetheless happened upon “a zone of indetermination or indiscernibility in which neither words nor characters can be distinguished” (“Bartleby, or the Formula,” 76).

No matter how high one stacks their mundane interpretations of Bartleby—as an authorial alter-ego, a psycho-sociological casualty, an exemplar of passive resistance, and so on—his rationality-crashing function remains every bit as profound, as exceptional. Doesn’t it? After all, nothing essential binds the distal intent of the author (itself nothing but another narrative) to the proximate effect of the text, which is to “send language itself into flight” (76). Once we set aside the biographical, psychological, historical, economic, political, and so on, does not this formal function remain? And is it not irreducible, exceptional?

That depends on whether you think a Necker Cube is exceptional.

What should we say about Necker Cubes? Do they mark the point where the visibility of the visible collapses, generating ‘a zone of indetermination or indiscernibility in which neither indents nor protrusions can be distinguished’? Are they ‘pure figures,’ efficacies that stand outside the possibility of intelligible geometry? Or do they merely present the visual cortex with the demand to distinguish between indents and protrusions absent the information required to settle that demand, thus stranding visual experience upon the predictive threshold of both? Are they simply bi-stable images?

The first explanation pretty clearly mistakes a heuristic breakdown in the cognition of visual information for an exceptional visual object, something intrinsically indeterminate—something super-geometrical, in fact. When we encounter something visually indeterminate, we immediately blame our vision, which is to say, the invisible, enabling dimension of visual cognition. Visual discrepancies had real reproductive consequences, evolutionarily speaking. Thanks to medial neglect, we had no way of cognizing the ecological nature of vision, so we could only blink, peer, squint, rub our eyes, or change our position. If the discrepancy persisted, we wondered at it, and if we could, transformed it into something useful—be it cuing environmental forms on cave or cathedral walls (‘visual representations’) or cuing wonder with kaleidoscopes at Victorian exhibitions.

Likewise, Deleuze and Zizek (and many, many others) are mistaking a heuristic breakdown in the cognition of social information for an exceptional social entity, something intrinsically indeterminate—something super-social. Imagine encountering a Bartleby in your own place of employ. Imagine your employer not simply tolerating him, but enabling him, allowing him to drift ever deeper into anorexic catatonia. Initially, when we encounter something socially indeterminate in vivo, we typically blame communication—as does the narrator with Bartleby. Social discrepancies, one might imagine, had profound reproductive consequences (given that reproduction is itself social). The narrator’s sensitivity to such discrepancies is the sensitivity that all of us share. Given medial neglect, however, we have no way of cognizing the ecological nature of social cognition. So we check with our colleagues just to be sure (‘Am I losing my mind here?’), then we blame the breakdown in rational reflexes on the man himself. We gossip, test out this or that pet theory, pester spouses who, insensitive to potential micropolitical discrepancies, urge us to file a complaint with someone somewhere. Eventually, we either quit the place, get the poor sod some help, or transform him into something useful, like “Bartleby politics” or what have you. This is the prank that Melville lays out with the narrator—the prank that all post-modern appropriations of this tale trip into headlong…

The ecological nature of cognition entails the blindness of cognition to its ecological nature. We are distributed systems: we evolved to take as much of our environments for granted as we possibly could, accessing as little as possible to solve as many problems as possible. Experience and cognition turn on shallow information ecologies, blind systems turning on reliable (because reliably generated) environmental frequencies to solve problems—especially communicative problems. Absent the requisite systems and environments, these ecologies crash, resulting in the application of cognitive systems to situations they cannot hope to solve. Those who have dealt with addicted or mentally-ill loved ones know the profundity of these crashes first-hand, the way the unseen reflexes (‘preferences’) governing everyday interactions cast you into dismay and confusion time and again, all for want of applicability. There’s the face, the eyes, all the cues signaling them as them, and then… everything collapses into mealy alarm and confusion. Bartleby, with his dissenting preference, does precisely the same: Melville provides exquisite experiential descriptions of the dumbfounding characteristic of sociocognitive crashes.

Bartleby need not be a ‘pure outsider’ to do this. He just needs to provide enough information to demand disambiguation, but not enough information to provide it. “I would prefer not to”—Bartleby’s ‘formula,’ according to Deleuze—is anything but ‘minimal’: its performance functions the way it does because of the intricate communicative ecology it belongs to. But given medial neglect, our blindness to ecology, the formula is prone to strike us as something quite different, as something possessing no ecology.

It certainly strikes Deleuze as such:

“The formula is devastating because it eliminates the preferable just as mercilessly as any nonpreferred. It not only abolishes the term it refers to, and that it rejects, but also abolishes the other term it seemed to preserve, and that becomes impossible. In fact, it renders them indistinct: it hollows out an ever expanding zone of indiscernibility or indetermination between some nonpreferred activities and a preferable activity. All particularity, all reference is abolished.” 71

Since preferences affirm, ‘preferring not to’ (expressed in the subjunctive no less) can be read as an affirmative negation: it affirms the negation of the narrator’s request. Since nothing else is affirmed, there’s a peculiar sense in which ‘preferring not to’ possesses no reference whatsoever. Medial neglect assures that reflection on the formula occludes the enabling ecology, that asking what the formula does will result in fetishization, the attribution of efficacy in an explanatory vacuum. Suddenly ‘preferring not to’ appears to be a ‘semantic disintegration grenade,’ something essentially disruptive.

In point of natural fact, however, human sociocognition is fundamentally interactive, consisting in the synchronization of radically heuristic systems given only the most superficial information. Understanding one another is a radically interdependent affair. Bartleby presents all the information cuing social reliability, thereby consistently cuing predictions of reliability that turn out to be faulty. The narrator subsequently rummages through the various tools we possess to solve harmless acts of unreliability given medial neglect—tools which have no applicability in Bartleby’s case. Not only does Bartleby crash the network of predictive reflexes constituting the office ecology, he crashes the sociocognitive hacks that humans in general use to troubleshoot such breakdowns. He does so, not because of some arcane semantic power belonging to the ‘formula,’ but because he manifests as a sociocognitive Necker Cube, cuing noncoercive troubleshooting routines that have no application given whatever his malfunction happens to be.

This is the profound human fact that Melville’s skeptical imagination fastened upon, as well as the reason Bartleby is ‘nothing in particular’: all human social cognition is fundamentally ecological. Consider, once again, the passage where the narrator entertains the possibility of neglecting Bartleby altogether, simply pretending he was absent:

“What was to be done? or, if nothing could be done, was there any thing further that I could assume in the matter? Yes, as before I had prospectively assumed that Bartleby would depart, so now I might retrospectively assume that departed he was. In the legitimate carrying out of this assumption, I might enter my office in a great hurry, and pretending not to see Bartleby at all, walk straight against him as if he were air. Such a proceeding would in a singular degree have the appearance of a home-thrust. It was hardly possible that Bartleby could withstand such an application of the doctrine of assumptions. But upon second thoughts the success of the plan seemed rather dubious. I resolved to argue the matter over with him again.”

Having reached the limits of sociocognitive application, he proposes simply ignoring any subsequent failure in prediction, in effect, wishing the Bartlebian crash space away. The problem, of course, is that it ‘takes two to tango’: he has no choice but to ‘argue the matter again’ because the ‘doctrine of assumptions’ is interactional, ecological. What Melville has fastened upon here is the way the astronomical complexity of the sociocognitive (and metacognitive) systems involved holds us hostage, in effect, to their interactional reliability. Meaning depends on maddening sociocognitive intricacies.

The entirety of the story illustrates the fragility of this cognitive ecosystem despite its all-consuming power. Time and again Bartleby is characterized as an ecological casualty of the industrialization of social relations, be it the mass disposal of undelivered letters or the mass reproduction of legally binding documentation. Like ‘computer,’ ‘copier’ names something that was once human but has since become technology. But even as Bartleby’s breakdown expresses the system’s power to break the maladapted, it also reveals its boggling vulnerability, the ease with which it evaporates into like-minded conspiracies and ‘mere pretend.’ So long as everyone plays along—functions reliably—this interdependence remains occluded, and the irrationality (the discrepancy-generating stupidity) of the whole need never be confronted.

In other words, the lesson of Bartleby can be profound, as profound as human communication and cognition itself, without implying anything exceptional. Stupidity, blind, obdurate obliviousness, is all that is required. A minister’s black veil, a bit of crepe poised upon the right interactional interface, can throw whole interpretative communities from their pins. The obstruction, the blank wall, need not conceal anything magical to crash the gossamer ecologies of human life. It need only appear to be a window, or more cunning still, a window upon a wall. We need only be blind to the interactional machinery of looking to hallucinate absolute horizons. Blind to the meat of life.

And in this sense, we can charge negative exceptionalists such as Deleuze and Zizek not simply with ignoring life, the very topos of literature, but with concealing the threat that the technologization of life poses to life. Only in an ecology can we understand the way victims can at once be assailants absent aporia, how Bartleby, overthrown by the technosocial ecologies of his age, can in turn overthrow that technosocial ecology. Only understanding life for what we know it to be—biological—allows us to see the profound threat the endless technological rationalization of human sociocognitive ecologies poses to the viability of those ecologies. For Bartleby, in revealing the ecological fragility of human social cognition, the way break begets break, also reveals the antithesis between ‘progress’ and ‘meaning,’ how the former can only carry the latter so far before crashing.

As Deleuze and Zizek have it, Bartleby holds open a space of essential resistance. As the reading here has it, Bartleby provides a grim warning regarding the ecological fragility of human social cognition. One can even look at him as a blueprint for the potential weaponization of anthropomorphic artificial intelligence, systems designed to strand individual decision-making upon thresholds, to command inaction via the strategic presentation of cues. Far from representing some messianic discrepancy, apophatic proof of transcendence, he represents the way we ourselves become cognitive pollutants when abandoned to polluted cognitive ecologies.

Bleak Theory (By Paul J. Ennis)

by rsbakker

In the beginning there was nothing and it has been getting steadily worse ever since. You might know this, and yet repress it. Why? Because you have a mind that is capable of generating useful illusions, that’s why. How is this possible? Because you are endowed with a brain that creates a self-model which has the capacity to hide things from ‘you.’ This works better for some than for others. Some of us are brain-sick and, for whatever perverse reasons, we chip away at our delusions. In such cases recourse is possible to philosophy, which offers consolation (or so I am told), or to mysticism, which intentionally offers nothing, or to aesthetics, which is a kind of self-externalizing that lets the mind’s eye drift elsewhere. All in all, however, the armor on offer is thin. Such are the options: to mirror (philosophy), to blacken (mysticism), or to embrace contingency (aesthetics). Let’s select the last of these for now. By embracing contingency I mean that aesthetics consists of deciding upon and pursuing something quite specific for intuitive rather than rational reasons. This is to try to come to know contingency in your very bones.

As a mirrorer by trade I have to abandon some beliefs to allow myself to proceed this way. My belief that truth comes first and everything else later will be bracketed. I replace this with a less demanding constraint: truth comes when you know why you believe what you believe. Oftentimes I quite simply believe things because they are austere and minimal and I have a soft spot for that kind of thing. When I allow myself to think in line with these bleak tones an unusual desire is generated: to outbleak black, to be bleaker than black. This desire comes from I know not where. It seemingly has no reason. It is an aesthetic impulse. That’s why I ask that you take from what follows what you will. It brings me no peace either way.

I cannot hope to satisfy anyone with a definition of aesthetic experience, but let me wager that those moments that let me identify with the world a-subjectively – but not objectively – are commonly associated in my mind with bleakness. My brain chemistry, my environment, and similar contingent influences have rendered me this way. So be it. Bleakness manifests most often when I am faced with what is most distinctly impersonal: with cloudscapes and dimmed, wet treescapes. Or better yet, any time I witness a stark material disfiguration of the real by our species. And flowering from this is a bleak outlook correlated with the immense, consistent, and mostly hidden suffering that is our history – our being. The intensity arising from the global reach of suffering becomes impressive when dislocated from the personal and the particular because then you realize that it belongs to us. Whatever the instigator, the result is the same: I am alerted not just to the depths of unknowing that I embody, to the fact that I will never know most of life, but also to the industrial-scale sorrow consistently operative in being. All that is, is a misstep away from ruin. Consciousness is the holocaust of happiness.

Not that I expect anything more. Whatever we may say of our cultural evolution there was nothing inscribed in reality suggesting our world should be a fit for us. I am, on this basis, not surprised by our bleak surroundings. The brain, model-creator that it is, does quite a job at systematizing the outside into a representation that allows you to function; assuming, that is, that you have been gifted with a working model. Some have not. Perhaps the real horror is to try to imagine what has been left out (even the most ardent realist surely knows you do not look at the world directly as it is). Thankfully there is no real reason for us to register most of the information out there and we were not designed to know most of it anyway. This is the minimal blessing our evolution has gifted us with. The maximal damage is that from the exaptation we call consciousness cultural evolution flowers and puts our self-model at the mercy of a bombardment of social complexity – our factical situation. It is impossible to know how our information age is toying with our brain, suffice it to say that the spike in depression, anxiety and self-loathing is surely some kind of signal. The brain though, like the body, can function even when maltreated. Whether this is truly to the good is difficult to say.

And yet we must be careful to remember that even in so-called eliminative materialism the space of reasons remains. The normative dimension is, as Brandom would put it, irreducible. It does not constitute the entire range of cognition, and is perhaps best deflated in light of empirical evidence, but that is beside the point. To some degree, perhaps minor, we are rational animals with the capacity for relatively free decision-making. My intuition is that ultimately the complexity of our structure means that we will never be free of certain troubles arising from what we are. Being embodied is to be torn between immense capacity and the constant threat of losing capacities. A stroke, striking as if from nowhere, can fundamentally alter anyone. This is not to suggest that progress does not occur. It can and it does, but it can also be, and often is, undone. It’s an unfortunate state of affairs, bleak even, but being attuned to the bleakness of reality does not result in passivity by necessity.

Today there are projects that explicitly register all this, and nonetheless intend to operate in line with the potentiality contained within the capacities of reason. What differentiates these projects, oftentimes rationalist in nature, is that they do not follow our various universalist legacies in simply conceiving of the general human as deserving of dignity simply because we all belong to the same class of suffering beings. This is not sufficient to make humans act well. The phenomenon of suffering is easily recognizable and most humans are acutely aware of it, and yet they continue to act in ways contrary to how we ‘ought’ to respond. In fact, it is clear that knowing the sheer scale of suffering may lead to hedonism, egoism or repression. Various functional delusions can be generated by our mind, and it is hardly beyond us to rationalize selfishness on the basis of the universal. We are versatile like that. For this reason, I find myself torn between two poles. I maintain a philosophical respect for various neo-rationalist projects under development. And I remain equally under no illusion they will ever be put to much use. And I do not blame people for falling short of these demands. I am so far from them I only really take them seriously on the page. I find myself drawn, for these reasons, to the pessimist attitude, often considered a suspect stance.

One might suggest that we need only a minimal condition to be ethical. An appeal to the reality of pain in sentient and sapient creatures, perhaps. In that decision you might find solace – despite everything (or in spite of everything). It is a choice, however. Our attempts to assert an ethical universalism are bound up with a counter-logic: the bleak truth of contingency on the basis of the impersonal-in-the-personal. It is a logic quietly operative in the philosophical tradition and one I believe has been suppressed. Self-suppressed, it flirts too much with a line leading us to the truth of our hallucination. It’s Nietzsche telling you about perspectivism hinging on the impersonal will-to-power and then you maturing, and forgetting. Not knocking his arguments out of the water, mind. Simply preferring not to accept them. Nobody wants to circle back round to the merry lunatic truths that make a mockery of your life. You might find it hard to get out of bed…whereas now I am sure you leap up every morning, smile on your face…The inhuman, impersonal attachment to each human has many names, but let us look at some that are found right at the heart of the post-Kantian tradition: transcendental subject, Dasein, Notion. Don’t believe me? I don’t mind, it makes no difference to me.

Let’s start with the sheer impersonality involved in Heidegger’s sustained fascination with discussing the human without using the word. Dasein is not supposed to be anything or anyone, in particular. Now once you think about it, Dasein really does come across as extraordinarily peculiar. It spends a lot of its time being infested by language since this is, Heidegger insists, the place where its connection to being can be expressed. Yet it is also an easily overrun fortress that has been successfully invaded by techno-scientific jargon. When you hook this thesis up with Heidegger’s epochal shifts then the impersonal forces operative in his schema start to look downright ominous. However, we can’t blame on Heidegger what we can blame on Kant. His transcendental field of sense also belongs to one and all. And so, like Dasein, no one in particular. This aspect of the transcendental field still remains contentious. The transcendental is, at once, housed in a human body but also, in its sense-making functions, to be considered somehow separate from it. It is not quite human, but not exactly inhuman either.

There is, then, some strange aspect, I can think of no other word for it, inhabiting our own flowing world of a coherent ego, or ‘I,’ that allows for the emergence of a pooled intersubjectivity. Kant’s account, of course, had two main aims: to constrain groundless metaphysical speculation and, in turn, to ground the sciences. Yet his readers did not always follow his path. Kant’s decision to make a distinction between the phenomena and the noumena is perhaps the most consequential one in our tradition and is surely one of the greatest examples of opening up what you intended to close down. The nature of the noumenal realm has proven irresistible to philosophers and it has recursive consequences for how we see ourselves. If the noumenal realm names a reality that is phenomenally clouded then it surely precedes, ontologically, the ego-as-center; even if it is superseded by the ego’s modelling function for us. Seen within the wider context of the noumenal realm it is legitimate to ask whether the ‘I’ is merely a densely concentrated, discrete packet amidst a wider flow; a locus amidst the chaos. The ontological generation of egos is then shorn back until all you have is Will (Schopenhauer), Will to Power (Nietzsche), or, in a less generative sense ‘what gives,’ es gibt (Heidegger). This way of thinking belongs, when one takes the long-view, to the slow-motion deconstruction of the Cartesian ego in post-Kantian philosophy, albeit with Husserl cutting a lonely revivalist figure here. Today the ego is trounced everywhere, but there is perhaps no better example than the ‘no-self-at-all’ argument of Metzinger, but even the one-object-amongst-many thesis of object-oriented ontology traces a similar line.

The destruction of the Cartesian ego may have its lineage in Kant, but the notion of the impersonal as force, process, or will, owes much to Hegel. In his metaphysics Hegel presents us with a cosmic loop explicable through retroactive justification. At the beginning, the un-articulated Notion, naming what is at the heart-of-the-real, sets off without knowledge of itself, but with the emergence of thinking subjects the Notion is finally able to think itself. In this transition the gap between the un-articulated and articulated Notion is closed, and the entire thing sets off again in directions as yet unknown. Absolute knowing is, after all, not totalized knowing, but a constant, vigilant knowing navigating its way through contingency and recognizing the necessity below it all. But that’s just the thing: despite thinking subjects being important conduits to this process, and having a quite special and specific function, it’s the impersonal process that really counts. In the end Kant’s attempt to close down discussion about the nature of the noumenal realm simply made it one of the most appealing themes for a philosopher to pursue. Censorship helps sales.

Speaking of sales, all kinds of new realism are being hawked on the various para-academic street-corners. All of them benefit from a tint of recognizability rooted, I would suggest, in the fact that ontological realism has always been hidden in plain sight; for any continentalist willing to look. What is different today is how the question of the impersonal attachments affecting the human comes not from inside philosophy, but from a number of external pressures. In what can only be described as a tragic situation for metaphysicians, truth now seeps into the discipline from the outside. We see thinking these days where philosophers promised there was none. The brilliance of continental realism lies in reminding us how this is an immense opportunity for philosophers to wake up from various self-induced slumbers, even if that means stepping outside the protected circle from time to time. It involves bringing this bubbling, left-over question of ontological realism right to the fore. This does not mean ontological realism will come to be accepted and then casually integrated into the tradition. If anything the backlash may eviscerate it, but the attempt will have been made. Or was, and quietly passed.

And the attempt should be made because the impersonality infecting ontological realist excesses such as the transcendental subject (in-itself), the Notion, or Dasein is attuned to what we can now see as the (delayed) flowering of the Copernican revolution. The de-centering is now embedded enough that whatever defense of the human we posit, it must not be dishonest. We cannot hallucinate our way out of our ‘cold world’. If we know that our self-model is itself a hallucination, but a very real one, then what do we do then? Is it enough to situate the real in our ontological flesh and blood being-there that is not captured by thinking? Or is it best to remain with thinking as a contingent error that despite its aberrancy nonetheless spews out the truth? These avenues are grounded in consciousness and in our bodies and although both work wonders they can just as easily generate terrors. Truth qualified by these terrors is where one might go. No delusion can outflank these constraints forever. Bled of any delusional disavowal, one tries to think without hope. Hope is undignified anyway. Dignity involves resisting all provocation and remaining sane when you know it’s bleakness all the way down.

Some need hope, no? As I write this I feel the beautiful soul rising from his armchair, but I do not want to hear it. Bleak theory is addressed to your situation: a first worlder inhabiting an accelerated malaise. The ethics to address poverty, inequality, and hardship will be different. Our own heads are disordered and we do not quite know how to respond to the field outside it. You will feel guilty for your myopia, and you deserve it, but you cannot elide it by endlessly pointing to the plank in the other’s eye. You can pray through your tears, and in doing so ironically demonstrate the disturbance left by the death of God, but what does this shore up? It builds upon cathedral ruins: those sites where being is doubled-up and bent-over-backwards trying to look inconspicuous as just another option. Do you want to write religion back into being? Why not, as Ayache suggests, just ruin yourself? I hope it is clear I don’t have any answers: all clarity is a lie these days. I can only offer bleak theory as a way of seeing and perhaps a way of operating. It ‘works’ as follows: begin with confusion and shear away at what you can. Whatever is left is likely the closest approximation to what we name truth. It will be strictly negative. Elimination of errors is the best you can hope for.

I don’t know how to end this, so I am just going to end it.

 

Unkempt Nation, Disheveled Soul

by rsbakker

So this has been a mad summer in pretty much every respect. The first week of May, my hard-drive died, and I lost pretty much everything I had written the previous six months. My wife was in Venezuela at the time, marching, so I had a hard time wrapping my head around the psychological enormity of the event. It’s not every day you turn on the news to watch events embroiling your loved ones.

Anyway, I’m still pulling the pieces together. I had occasion to revisit some of my first blog posts, and I thought I would post a few snippets from way back in 2010, when we could still pretend technology wasn’t driving the world insane. Rather than get angry all over again at the lack of reviews, or fret for the future of democratic society in the technological age, I thought I would let my younger, less well-groomed self do the ranting.

I’ll be back with things more substantial soon.

 

September 14, 2010 – So why are so many writers heroes? Aside from good old human psychology, I blame it on the old ‘Write What You Know’ literary maxim.

Like so many literary maxims it sounds appealing at first blush. After all, how can you be honest–authentic–unless you write ‘what you know’? But like all maxims it has a flip side: Telling practitioners what they should do is at once telling them what they should not do. Telling writers to only write what they know is telling them to studiously avoid all the things their lives lack–adventure, romance, spectacle–which is to say, the very things that regular people crave.

So this maxim has the happy side-effect of policing who gets to communicate to whom, and so securing the institutional boundaries of the literary specialist. Not only is real culture left to its own naive devices, it becomes the unflagging foil, a kind of self-congratulatory resource, one that can be tapped over and over again to confirm the literary writer’s sense of superiority. Thus all the writerly heroes, stranded in seas of absurdity.

September 16, 2010 – The pigeonhole has no bottom, believe you me. I used to be so naive as to think I could climb out, but now I’m starting to think that it swallows everyone in the end. I wonder about all the other cranks and crackpots out there, about all the other sparks that have been snuffed by relentless inattention. It’s no accident that eulogies are so filled with cliches.

After all, it’s neurophysiology that I’m up against more than any passing cultural bigotry. The brain pigeonholes everything it encounters to better lower its caloric load, to economize. We sort far more than we ponder. Novelty, when we encounter it, is either confused for something old and stupid or comes across as errant noise. Things were this way long before corporations and capital.

So I find myself wondering what I should do. Maybe I should just resign myself to my fate, numb the pain, mellow those revenge fantasies. Become a fatalist.

But then there’s nothing like bitterness to keep that fire scorching your belly. And there’s nothing I fear more than becoming old and complacent. Only the well-groomed don’t have chips on their shoulders.

September 18, 2010 – What really troubles me is the way this hypocrisy has been institutionalized. So long as you treat ‘culture’ as a what, which is to say, as an abstract construct, a formalism, then you can congratulate yourself for all the myriad ways in which your abstractions disrupt those abstractions. But as soon as you treat ‘culture’ as a who, which is to say, as a cartoon we use to generalize over millions of living, breathing people, the notion of ‘disruption’ becomes pretty ridiculous pretty quick. All it takes is one simple question: “Who is disrupted?” and the illusion of criticality is dispelled. One little question.

The conceit is so weak. And yet somehow we’ve managed to raise a veritable landfill of illusory subversion upon it. ‘Literature,’ we call it.

Says a lot about the power of vanity, if you think about it.

As well as why I’m probably doomed to fail.

September 20, 2010 – But our culture has become frightfully compartmentalized. The web, which was supposed to blow open the doors of culture–to ‘flatten everything’–seems to have had the opposite effect. Since we’re hardwired to reflexively seek out affirmation and confirmation, rendering everything equally available has meant our paths of least resistance no longer take us across unfamiliar territory. We can get what we want and need without taking detours through things we didn’t realize we wanted or needed. We can make an expedient bastion out of our parochial tastes.

February 27, 2011 – These people, it seems to me, have to be engaged, have to be challenged, if only so that the masses don’t succumb to their own weaknesses for self-serving chauvinism. These people are appealing simply because they are so adept at generating ‘reasons’ for self-serving intuitions that we all share. That we and our ways are special, exempt, and that Others are a threat to us. That our high-school is, like, really the greatest high-school on the planet. Confirmation bias, my-side bias, the list goes on. And given that humans have evolved to be easily and almost irrevocably programmed, it seems to me that the most important place to wage this battle is in the classroom. To begin teaching doubt as the highest virtue, as opposed to the madness of belief.

The prevailing madness.

Funny, huh? It’s the lapse in belief that these guys typically see as symptomatic of modern societal decline. But really what they’re talking about is a lapse in agreement. Belief is as pervasive as ever, but as a principle rather than any specific consensual canon. It stands to reason that the lack of ‘moral and cognitive solidarity’ would make us uncomfortable, considering the kinds of scarcity and competition faced by our ancestors.

January 13, 2011 – The problem is that human nature is adapted to environments where the access to information was geographically indexed, where its accumulation exacted a significant caloric toll. We don’t call private investigators ‘gumshoes’ for no reason. We are adapted to environments where the info-gathering workload continually forced us to ‘settle,’ which is to say, make do with something other than what we originally desired, when it comes to information.

This is what makes the ‘global village’ such a deceptive misnomer. In the preindustrial village, where everyone depended upon one another, our cognitive selfishness made quite a bit of adaptive sense: in environments where scarcity and interdependency force cognitive compromise, you can see how cognitive selfishness–finding ways to justify oneself while impugning potential competitors–might pay real dividends in terms of in-group prestige. Where the circumstantial leash is tight, it pays to pull and pull, and perhaps reach those morsels that escape others.

In the industrial village, however, the leash is far longer. But even still, if you want to pursue your views, geographical constraints force you to engage individuals who do not share them. Who knows what Bob across the road believes? (My Bob was an evangelical Christian, and I count myself lucky for having endlessly argued with him).

In the information village the leash is cut altogether. The likeminded can effortlessly congregate in innumerable echo chambers. Of course, they can effortlessly congregate with those they disagree with as well, but… The tendency, by and large, is not only to seek confirmation, but to confuse it with intelligence and truth–which is why right-wingers tend to watch more Fox than PBS.

Now, enter all these specialized programs, which are bent on moulding your information environment into something as pleasing as possible. Don’t like the N-word? Well, we can make sure you never need to encounter it again–ever.

The world is sycophantic, and it’s becoming more so all the time. This, I think, is a far better cartoon generalization than ‘flat,’ insofar as it references the user, the intermediary, as well as the information environment.

The contemporary (post-posterity) writer has to incorporate this radically different social context into their practice (if that practice is to be considered even remotely self-critical). If you want to produce literary effects, then you have to write for a sycophantic world, find ways not simply to subvert the ideological defences of readers, but to trick the inhuman, algorithmic gate-keepers as well.

This means being strategically sycophantic. To give people what they want, sure, but with something more as well.

 

Snuffing the Spark: A Nihilistic Account of Moral Progress

by rsbakker

sparkman

 

If we define moral progress in brute terms of more and more individuals cooperating, then I think we can cook up a pretty compelling naturalistic explanation for its appearance.

So we know that our basic capacity to form ingroups is adapted to prehistoric ecologies characterized by resource scarcity and intense intergroup competition.

We also know that we possess a high degree of ingroup flexibility: we can easily add to our teams.

We also know moral and scientific progress are related. For some reason, modern prosocial trends track scientific and technological advance. Any theory attempting to explain moral progress should explain this connection.

We know that technology drastically increases information availability.

It seems modest to suppose that bigger is better in group competition. Cultural selection theory, meanwhile, pretty clearly seems to be onto something.

It seems modest to suppose that ingroup cuing turns on information availability.

Technology, as the homily goes, ‘brings us closer’ across a variety of cognitive dimensions. Moral progress, then, can be understood as the sustained effect of deep (or ancestrally unavailable) social information cuing various ingroup responses–people recognizing fractions of themselves (procedural if not emotional bits) in those their grandfathers would have killed.  The competitive benefits pertaining to cooperation suggest that ingroup trending cultures would gradually displace those trending otherwise.
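Taken together, these generalizations sketch a selection dynamic that can be caricatured in a few lines of Python. To be clear, this is a toy of my own devising, not a formal model: the fitness rule, the norm-copying rule, and the `info` parameter standing in for information availability are all illustrative assumptions.

```python
def moral_progress(coops, generations=20, info=0.0):
    """Toy cultural-selection sketch (illustrative assumptions only).

    `coops` gives each group's cooperation level (0..1); `info` stands
    in for information availability (0 = ancestral isolation).
    Returns the population-weighted cooperation level, a crude
    'moral progress' index.
    """
    coops = list(coops)
    sizes = [1.0 / len(coops)] * len(coops)  # equal starting shares
    for _ in range(generations):
        # Bigger-is-better group competition: more cooperative groups
        # out-compete the rest and grow their population share.
        fits = [1.0 + c for c in coops]
        sizes = [s * f for s, f in zip(sizes, fits)]
        total = sum(sizes)
        sizes = [s / total for s in sizes]
        # Ingroup cuing across old group lines: shared social information
        # lets every group drift toward the most successful group's norms.
        best = max(coops)
        coops = [c + info * (best - c) for c in coops]
    return sum(s * c for s, c in zip(sizes, coops))

# Cooperation spreads either way, but faster when information flows.
isolated = moral_progress([0.2, 0.5, 0.8], info=0.0)
connected = moral_progress([0.2, 0.5, 0.8], info=0.3)
```

The point of the toy is only that nothing moral drives the trend: vary `info` and the 'progress' index moves, which is exactly the contingency at issue below.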

Certainly there’s a far, far more complicated picture to be told here—a bottomless one, you might argue—but the above set of generalizations strike me as pretty solid. The normativist would cry foul, for instance, claiming that some account of the normative nature of the institutions underpinning such a process is necessary to understanding ‘moral progress.’ For them, moral progress has to involve autonomy, agency, and a variety of other posits perpetually lacking decisive formulation. Heuristic neglect allows us to sidestep this extravagance as the very kind of dead-end we should expect to confound us. At the same time, however, reflection on moral cognition has doubtless had a decisive impact on moral cognition. The problem of explaining ‘norm-talk’ remains. The difference is we now recognize the folly of using normative cognition to theoretically solve the nature of normative cognition. How can systems adapted to solving absent information regarding the nature of normative cognition reveal the nature of normative cognition? Relieved of these inexplicable posits, the generalizations above become unproblematic. We can set aside the notion of some irreducible ‘human spark’ impinging on the process in a manner that makes them empirically inexplicable.

If only our ‘deepest intuitions’ could be trusted.

The important thing about this way of looking at things is that it reveals the degree to which moral progress depends upon its information environments. So far, the technical modification of our environments has allowed our suite of social instincts, combined with institutionally regimented social training, to progressively ratchet the expansion of the franchise. But accepting the contingency of moral progress means accepting vulnerability to radical transformations in our information environment. Nothing guarantees moral progress outside the coincidence of certain capacities in certain conditions. Change those conditions, and you change the very function of human moral cognition.

So, for instance, what if something as apparently insignificant as the ‘online disinhibition effect’ has the gradual, aggregate effect of intensifying adversarial group identifications? What if the network possibilities of the web gradually organizes those possessing authoritarian dispositions, renders them more socially cohesive, while having the opposite impact on those possessing anti-authoritarian dispositions?

Anything can happen here, folks.

One can be a ‘nihilist’ and yet be all for ‘moral progress.’ The difference is that you are advocating for cooperation, for hewing to heuristics that promote prosocial behaviour. More importantly, you have no delusions of somehow standing outside contingency, of ‘rational’ immunity to radical transformations in your cognitive environments. You don’t have the luxury of burning magical holes through actual problems with your human spark. You see the ecology of things, and so you intervene.

The Death of Wilson: How the Academic Left Created Donald Trump

by rsbakker

Tim and Wilson 2

 

People need to understand that things aren’t going to snap back into magical shape once Trump becomes archive footage. The Economist had a recent piece on all the far-right demagoguery in the past, and though they stress the impact that politicians like Goldwater have had subsequent to their electoral losses, they imply that Trump is part of a cyclical process, essentially more of the same. Perhaps this might have been the case were this anything but the internet age. For all we know, things could skid madly out of control.

Society has been fundamentally rewired. This is a simple fact. Remember Home Improvement, how Tim would screw something up, then wander into the backyard to lay his notions and problems on his neighbour Wilson, who would only ever appear as a cap over the fence line? Tim was hands on, but interpersonally incompetent, while Wilson was bookish and wise to the ways of the human heart—as well as completely obscured save for his eyes and various caps by the fence between them.

This is a fantastic metaphor for the communication of ideas before the internet and its celebrated ability to ‘bring us together.’ Before, when you had chauvinist impulses, you had to fly them by whoever was available. Pre-internet, extreme views were far more likely to be vetted by more mainstream attitudes. Simple geography combined with the limitations of analogue technology had the effect of tamping the prevalence of such views down. But now Tim wouldn’t think of hassling Wilson over the fence, not when he could do a simple Google and find whatever he needed to confirm his asinine behaviour. Our chauvinistic impulses no longer need to run any geographically constrained social gauntlet to find articulation and rationalization. No matter how mad your beliefs, evidence of their sanity is only ever a few keystrokes away.

This has to have some kind of aggregate, long-term effect–perhaps a dramatic one. The Trump phenomenon isn’t the manifestation of an old horrific contagion following the same old linear social vectors; it’s the outbreak of an old horrific contagion following new nonlinear social vectors. Trump hasn’t changed anything, save identifying and exploiting an ecological niche that was already there. No one knows what happens next. Least of all him.

What’s worse, with the collapse of geography comes the collapse of fences. Phrases like “cretinization of the masses” are simply one Google search away as well. Before, Wilson would have been snickering behind that fence, hanging with his friends and talking about his moron neighbour, who really is a nice guy, you know, but needs help to think clearly all the same. Now the fence is gone, and Tim can finally see Wilson for the condescending, self-righteous bigot he has always been.

Did I just say ‘bigot’? Surely… But this is what Trump supporters genuinely think. They think ‘liberal cultural elites’ are bigoted against them. As implausible as his arguments are, Charles Murray is definitely tracking a real social phenomenon in Coming Apart. A good chunk of white America feels roundly put upon, attacked economically and culturally. No bonus this Christmas. No Christmas tree at school. Why should a minimum wage retail worker think they somehow immorally benefit by dint of blue eyes and pale skin? Why should they listen to some bohemian asshole who’s both morally and intellectually self-righteous? Why shouldn’t they feel aggrieved on all sides, economically and culturally disenfranchised?

Who celebrates them? Aside from Donald Trump.

Trump

 

You have been identified as an outgroup competitor.

Last week, Social Psychological and Personality Science published a large study conducted by William Chopik, a psychologist out of Michigan State University, showing the degree to which political views determine social affiliations: it turns out that conservatives generally don’t know Clinton supporters and liberals generally don’t know any Trump supporters. Americans seem to be spontaneously segregating along political lines.

Now I’m Canadian, which, although it certainly undermines the credibility of my observations on the Trump phenomenon in some respects, actually does have its advantages. The whole thing is curiously academic, for Canadians, watching our cousins to the south play hysterical tug-o-war with their children’s future. What’s more, even though I’m about as academically institutionalized as a human can be, I’m not an academic, and I have steadfastly resisted the tendency of the highly educated to surround themselves with people who are every bit as institutionalized—or at least smitten—by academic culture.

I belong to no tribe, at least not clearly. Because of this, I have Canadian friends who are, indeed, Trump supporters. And I’ve been whaling on them, asking questions, posing arguments, and they have been whaling back. Precisely because we are Canadian, the whole thing is theatre for us, allowing, I like to think, for a brand of honesty that rancour and defensiveness would muzzle otherwise.

When I get together with my academic friends, however, something very curious happens whenever I begin reporting these attitudes: I get interrupted. “But-but, that’s just idiotic/wrong/racist/sexist!” And that’s when I begin whaling on them, not because I don’t agree with their estimation, but because, unlike my academic confreres, I don’t hold Trump supporters responsible. I blame the academics instead. Aren’t they the ‘critical thinkers’? What else did they think the ‘cretins’ would do? Magically seize upon their enlightened logic? Embrace the wisdom of those who openly call them fools?

Fact is, you’re the ones who jumped off the folk culture ship.

The Trump phenomenon falls into the wheelhouse of what has been an old concern of mine. For more than a decade now, I’ve been arguing that the social habitat of intellectual culture is collapsing, and that the persistence of the old institutional organisms is becoming more and more socially pernicious. Literature professors, visual artists, critical theorists, literary writers, cultural critics, intellectual historians and so on all continue acting and arguing as though this were the 20th century… as if they were actually solving something, instead of making matters worse.

See before, when a good slice of media flushed through bottlenecks that they mostly controlled, the academic left could afford to indulge in the same kind of ingroup delusions that afflict all humans. The reason I’m always interrupted in the course of reporting the attitudes of my Trump supporting friends is simply that, from an ingroup perspective, they do not matter.

More and more research is converging upon the notion that the origins of human cooperation lie in human enmity. Think Band of Brothers only in an evolutionary context. In the endless ‘wars before civilization’ one might expect those groups possessing members willing to sacrifice themselves for the good of their fellows would prevail in territorial conflicts against groups possessing members inclined to break and run. Morality has been cut from the hip of murder.

This thesis is supported by the radical differences in our ability to ‘think critically’ when interacting with ingroup confederates as opposed to outgroup competitors. We are all but incapable of listening, and therefore responding rationally, to those we perceive as threats. This is largely why I think literature, minimally understood as fiction that challenges assumptions, is all but dead. Ask yourself: Why is it so easy to predict that so very few Trump supporters have read Underworld? Because literary fiction caters to the likeminded, and now, thanks to the precision of the relationship between buyer and seller, it is only read by the likeminded.

But of course, whenever you make these kinds of arguments to academic liberals you are promptly identified as an outgroup competitor, and you are assumed to have some ideological or psychological defect preventing genuine critical self-appraisal. For all their rhetoric regarding ‘critical thinking,’ academic liberals are every bit as thin-skinned as Trump supporters. They too feel put upon, besieged. I gave up making this case because I realized that academic liberals would only be able to hear it coming from the lips of one of their own, and even then, only after something significant enough happened to rattle their faith in their flattering institutional assumptions. They know that institutions are self-regarding, they admit they are inevitably tarred by the same brush, but they think knowing this somehow makes them ‘self-critical’ and so less prone to ingroup dysrationalia. Like every other human on the planet, they agree with themselves in ways that flatter themselves. And they direct their communication accordingly.

I knew it was only a matter of time before something happened. Wilson was dead. My efforts to eke out a new model, to surmount cultural balkanization, motivated me to engage in ‘blog wars’ with two very different extremists on the web (both of whom would be kind enough to oblige my predictions). This experience vividly demonstrated to me how dramatically the academic left was losing the ‘culture wars.’ Conservative politicians, meanwhile, were becoming more aggressively regressive in their rhetoric, more willing to publicly espouse chauvinisms that I had assumed safely buried.

The academic left was losing the war for the hearts and minds of white America. But so long as enrollment remained steady and book sales remained strong, they remained convinced that nothing fundamental was wrong with their model of cultural engagement, even as technology assured a greater match between them and those largely approving of them. Only now, with Trump, are they beginning to realize the degree to which the technological transformation of their habitat has rendered them culturally ineffective. As George Saunders writes in “Who Are All These Trump Supporters?” in The New Yorker:

Intellectually and emotionally weakened by years of steadily degraded public discourse, we are now two separate ideological countries, LeftLand and RightLand, speaking different languages, the lines between us down. Not only do our two subcountries reason differently; they draw upon non-intersecting data sets and access entirely different mythological systems. You and I approach a castle. One of us has watched only “Monty Python and the Holy Grail,” the other only “Game of Thrones.” What is the meaning, to the collective “we,” of yon castle? We have no common basis from which to discuss it. You, the other knight, strike me as bafflingly ignorant, a little unmoored. In the old days, a liberal and a conservative (a “dove” and a “hawk,” say) got their data from one of three nightly news programs, a local paper, and a handful of national magazines, and were thus starting with the same basic facts (even if those facts were questionable, limited, or erroneous). Now each of us constructs a custom informational universe, wittingly (we choose to go to the sources that uphold our existing beliefs and thus flatter us) or unwittingly (our app algorithms do the driving for us). The data we get this way, pre-imprinted with spin and mythos, are intensely one-dimensional.

The first, most significant thing to realize about this passage is that it’s written by George Saunders for The New Yorker, a premier ingroup cultural authority on a premier ingroup cultural podium. On the view given here, Saunders pretty much epitomizes the dysfunction of literary culture, an academic at Syracuse University, the winner of countless literary awards (which is to say, better at impressing the likeminded than most), and, I think, clearly a genius of some description.

To provide some rudimentary context, Saunders attends a number of Trump rallies, making observations and engaging Trump supporters and protesters alike (but mostly the former) asking gentle questions, and receiving, for the most part, gentle answers. What he describes observation-wise are instances of ingroup psychology at work, individuals, complete strangers in many cases, making forceful demonstrations of ingroup solidarity and resolve. He chronicles something countless humans have witnessed over countless years, and he fears for the same reasons all those generations have feared. If he is puzzled, he is unnerved more.

He isolates two culprits in the above passage, the ‘intellectual and emotional weakening brought about by degraded public discourse,’ and more significantly, the way the contemporary media landscape has allowed Americans to ideologically insulate themselves against the possibility of doubt and negotiation. He blames, essentially, the death of Wilson.

As a paradigmatic ‘critical thinker,’ he’s careful to throw his own ‘subject position’ into the mix, to frame the problem in a manner that distributes responsibility equally. It’s almost painful to read, at times, watching him walk the tightrope of hypocrisy, buffeted by gust after gust of ingroup outrage and piety, trying to exemplify the openness he mistakes for his creed, but sounding only lyrically paternalistic in the end–at least to ears not so likeminded. One can imagine the ideal New Yorker reader, pursing their lips in empathic concern, shaking their heads with wise sorrow, thinking…

But this is the question, isn’t it? What do all these aspirational gestures to openness and admissions of vague complicity mean when the thought is, inevitably, fools? Is this not the soul of bad faith? To offer up portraits of tender humanity in extremis as proof of insight and impartiality, then to end, as Saunders ends his account, suggesting that Trump has been “exploiting our recent dullness and aversion to calling stupidity stupidity, lest we seem too precious.”

Academics… averse to calling stupidity stupid? Trump taking advantage of this aversion? Lordy.

This article, as beautiful as it is, is nothing if not a small monument to being precious, to making faux self-critical gestures in the name of securing very real ingroup imperatives. We are the sensitive ones, Saunders is claiming. We are the light that lets others see. And these people are the night of American democracy.

He blames the death of Wilson and the excessive openness of his ingroup, the error of being too open, too critically minded…

Why not just say they’re jealous because he and his friends are better looking?

If Saunders were at all self-critical, anything but precious, he would be asking questions that hurt, that cut to the bone of his aggrandizing assumptions, questions that become obvious upon asking them. Why not, for instance, ask Trump supporters what they thought of CivilWarLand in Bad Decline? Well, because the chances of any of them reading any of his work aside from “CommComm” (and only then because it won the World Fantasy Award in 2006) were virtually nil.

So then why not ask why none of these people has read anything written by him or any of his friends or their friends? Well, he’s already given us a reason for that: the death of Wilson.

Okay, so Wilson is dead, effectively rendering your attempts to reach and challenge those who most need to be challenged with your fiction toothless. And so you… what? Shrug your shoulders? Continue merely entertaining those whom you find the least abrasive?

If I’m right, then what we’re witnessing is so much bigger than Trump. We are tender. We are beautiful. We are vicious. And we are capable of believing anything to secure what we perceive as our claim. What matters here is that we’ve just plugged billions of stone-age brains chiselled by hundreds of millions of years of geography into a world without any. We have tripped across our technology and now we find ourselves in crash space, a domain where the transformation of our problems has rendered our traditional solutions obsolete.

It doesn’t matter if you actually are on their side or not, whatever that might mean. What matters is that you have been identified as an outgroup competitor, and that none of the authority you think your expertise warrants will be conceded to you. All the bottlenecks that once secured your universal claims are melting away, and you need to find some other way to discharge your progressive, prosocial aspirations. Think of all the sensitive young talent sifting through your pedagogical fingers. What do you teach them? How to be wise? How to contribute to their community? Or how to play the game? How to secure the approval of those just like you—and so, how to systematically alienate them from their greater culture?

So. Much. Waste. So much beauty, wisdom, all of it aimed at nowhere… tossed, among other places, into the heap of crumpled Kleenexes called The New Yorker.

Who would have thunk it? The best way to pluck the wise from the heart of our culture was to simply afford them the means to associate almost exclusively with one another, then trust to human nature, our penchant for evolving dialects and values in isolation. The edumacated no longer have the luxury of speaking among themselves for the edification of those servile enough to listen of their own accord. The ancient imperative to actively engage, to have the courage to reach out to the unlikeminded, to write for someone else, has been thrust back upon the artist. In the days of Wilson, we could trust to argument, simply because extreme thoughts had to run a gamut of moderate souls. Not so anymore.

If not art, then argument. If not argument, then art. Invade folk culture. Glory in delighting those who make your life possible–and take pride in making them think.

Sometimes they’re the idiot and sometimes we’re the idiot–that seems to be the way this thing works. To witness so many people so tangled in instinctive chauvinisms and cartoon narratives is to witness a catastrophic failure of culture and education. This is what Trump is exploiting, not some insipid reluctance to call stupid stupid.

I was fairly bowled over a few weeks back when my neighbour told me he was getting his cousin in Florida to send him a Trump hat. I immediately asked him if he was crazy.

“Name one Donald Trump who has done right by history!” I demanded, attempting to play Wilson, albeit minus the decorum and the fence.

Shrug. Wild eyes and a genuine smile. “Then I hope he burns it down.”

“How could you mean that?”

“I dunno, brother. Can’t be any worse than this fucking shit.”

Nothing I could say could make him feel any different. He’s got the internet.*

 

*[Note to readers: This post is receiving a great deal of Facebook traffic, and relatively little critical comment, which tells me individuals are saving their comments for whatever ingroup they happen to belong to, thus illustrating the very dynamic critiqued in the piece. Sound off! Dare to dissent in ideologically mixed company, or demonstrate the degree to which you need others to agree before raising your voice.]

Inverse Invariance

by rsbakker

I just returned from my annual trip up north, to a land where you’re lucky to get a cell signal (let alone Wi-Fi), to party with my high-school buddies. Bad timing, I know, given that I have a book coming out next week! But essential, all the same.

Besides, I got to play a game of RISK. My neighbours left this awesome little nugget behind when they moved, so I snapped it up when I was asked if I wanted it.

Risk Onyx Edition

Who knew it was such a great drinking game? Joe crushed, and Boz, despite what he might tell you, had no chance. Tom and I got into a pissing match over Australia and committed mutual suicide. For Australia! It appears that RISK is but another mechanism baiting the stone-age brain.

No phones in the pocket or on the table, just kids on errands slinking about. The crunch of heavy tunes fading into birdsong. Cursing and laughing over the chatter of dice. Good times.

Updates coming up.

Aesthetic Insanity (or Insane Aesthete)?

by rsbakker

Icarus

 

I’ve been trying to force myself to think more in marketing terms, as I’m sure some of you have noticed. It makes for odd bedfellows, pitching free book giveaways one post, then eliminativistic interpretations of observer effects the next. Truth be told, I like the way it jars, the way it renders sensible the boundaries between cultural ingroups. If accusing people of “selling out” doesn’t amount to an ingroup shame mechanism, I’m not sure what does. The same goes for accusations of “pretentiousness” coming the other way. In each case it amounts to shouting, “Objectionable communication!” Personally, I think anyone who wants to write literature—fiction that actually jumps the rails of ever tightening buyer-seller relationships—needs to be perched in some uncomfortable, easy-to-spoof spot like this. You need to find yourself places where others aren’t sure you belong, otherwise you’re simply one of the likeminded writing for the likeminded, and simply pretending that people have been challenged and/or appalled. It’s all ‘genre’ in the pejorative sense, otherwise—algorithmically managed no less!

A commenter mentioned the ‘insane ambition’ of The Second Apocalypse on the previous thread, and that got me thinking about how one would go about estimating the ambition of a literary project. The thing is, I’ve always had an easier time selling the ambition of the project than I have the project itself. The problem with this, of course, is that the world is drowning in grandiose ambitions—I’m pretty sure I only manage to identify myself as a probable manqué when I do this. But what if I could say that The Second Apocalypse was one of the most insanely ambitious literary projects ever undertaken? Does anyone know how a project like this fits in the greater scheme of literary ambition?

Is this worth telling my publicist to use?

 

The Discursive Meanie

by rsbakker

So I went to see Catherine Malabou speak on the relation between deep history, consciousness and neuroscience last night. As she did in her Critical Inquiry piece, she argued that some new conceptuality was required to bridge the natural historical and the human, a conceptuality that neuroscience could provide. When I introduced myself to her afterward, she recognized my name, said that she had read my post, “Malabou, Continentalism, and New Age Philosophy.” When I asked her what she thought, she blushed and told me that she thought it was mean.

I tried to smooth things over, but for most people, I think, expressing aggression in interpersonal exchanges is like throwing a boulder tied to their own waist. Hard words rewrite communicative contexts, and it takes the rest of the brain several moments to catch up. Once she tossed her boulder, it was only a matter of time before the rope yanked her away. Discussion over.

I appreciate that I’m something of an essayistic asshole, and that academics, adapted to genteel communicative contexts as they are, generally have little experience with, let alone stomach for, the more bruising environs of the web. But then the near universal academic tendency to take the path of least communicative resistance, to foster discursive ingroups, is precisely the tendency Three Pound Brain is dedicated to exposing. The problem, of course, is that cuing people to identify you as a threat pretty much guarantees they will be unable to engage you rationally, as was the case here. Malabou had dismissed me, and so my arguments simply followed suit.

How does one rattle ingroup assumptions as an outgroup competitor, short of disguising oneself as an ingroup sympathizer, that is? Interesting conundrum, that. I suppose if I had more notoriety, they would feel compelled to engage me…

Is it time to rethink my tactics?

Akrasis

by rsbakker

Akrasis (or, social akrasis) refers to the technologically driven socio-economic process, already underway at the beginning of the 20th century, which would eventually lead to Choir.

Where critics in the early 21st century continued to decry the myriad cruelties of the capitalist system, they failed to grasp the greater peril hidden in the way capitalism panders to human yens. Quick to exploit the discoveries arising out of cognitive science, market economies spontaneously retooled to ever more effectively cue and service consumer demand, eventually reconfiguring the relation between buyer and seller into subpersonal circuits (triggering the notorious shift to ‘whim marketing,’ the data tracking of ‘desires’ independent of the individuals hosting them). The ecological nature of human cognition all but assured the mass manipulative character of this transformation. The human dependency on proximal information to cue what amount to ancestral guesses regarding the nature of their social and natural environments provided sellers with countless ways to game human decision making. The global economy was gradually reorganized to optimize what amounted to human cognitive shortcomings. We became our own parasite.

Just as technological transformation (in particular, the scaling of AI) began crashing the utility of our heuristic modes of meaning making, it began to provide virtual surrogates, ways to enable the exercise of otherwise unreliable cognitive capacities. In other words, even as the world became ever more inhuman, our environments became ever more anthropomorphic, ever more ‘smart’ and ‘immersive.’ Thus ‘akrasis,’ the ancient term referring to the state of acting against one’s judgment, which here describes a society acting against the human capacity to judge altogether, a society bent upon the systematic substitution of simulated autonomy for actual autonomy.

Humans, after all, have evolved to leverage the signal of select upstream interventions, assuming it a reliable component of their environments. Once we developed the capacity to hack those signals, the world effectively became a drug.

Akrasis has a long history, as long as life itself, according to certain theories. Before the 21st century, the process appeared ‘enlightening,’ but only because the limitations of the technologies involved (painting, literacy, etc.) rendered the resulting transformations manageable. But the rate of transformation continued to accelerate, while the human capacity to adapt remained constant. The outcome was inevitable. As the bandwidth of our interventions approached then surpassed the bandwidth of our central nervous systems, the simulation of meaning became the measure of meaning. Our very frame of reference had been engulfed. For billions, the only obvious direction of success—the direction of ‘cognitive comfort’—lay away from the world and into technology. So they defected in their billions, embracing signals and environments manufactured entirely from predatory code. Culture became indistinguishable from cheat space—as did, for those embracing virtual fitness indicators, experience itself.

By 2050, we had become an advanced akratic civilization, a species whose ancestral modes of meaning-making had been utterly compromised. Art was an early casualty, though decades would be required to recognize as much. Fantasy, after all, was encouraged in all forms, especially those, like art or religion, laying claim to obsolete authority gradients. To believe in art was to display market vulnerabilities, or to be so poor as to be insignificant. No different than believing in God.

Social akrasis is now generally regarded as a thermodynamic process intrinsic to life, the mechanical outcome of biology falling within the behavioural purview of biology. Numerous simulations have demonstrated that ‘outcome convergent’ or ‘optimizing’ systems, once provided the base capacity required to extract excess capacity from their environments, will simply bootstrap until they reach a point where the system detaches from its environment altogether, begins converging upon the signal of some environmental outcome, rather than any actual environmental outcome.

Thus the famous ‘Junkie Solution’ to Fermi’s Paradox (as recently confirmed by the Gala Semantic Supercomputer at MIT).

And thus Choir.