Three Pound Brain

No bells, just whistling in the dark…


Killing Bartleby (Before It’s Too Late)

by rsbakker

Why did I not die at birth,

come forth from the womb and expire?

Why did the knees receive me?

Or why the breasts, that I should suck?

For then I should have lain down and been quiet;

I should have slept; then I should have been at rest,

with kings and counselors of the earth

who rebuilt ruins for themselves…

—Job 3:11-14 (RSV)

 

“Bartleby, the Scrivener: A Story of Wall-Street”: I made the mistake of rereading this little gem a few weeks back. Section I, below, retells the story with an eye to heuristic neglect. Section II leverages this retelling into a critique of readings, like those belonging to the philosophers Gilles Deleuze and Slavoj Zizek, that fall into the narrator’s trap of exceptionalizing Bartleby. If you happen to know anyone interested in Bartleby criticism, by all means encourage them to defend their ‘doctrine of assumptions.’

 

I

The story begins with the unnamed narrator identifying two ignorances, one social and the other personal. The first involves Bartleby’s profession, that “somewhat singular set of men, of whom as yet nothing that I know of has ever been written.” Human scriveners, like human computers, hail from a time when social complexities demanded the undertaking of mechanical cognitive labours, the discharge of tasks too procedural to rest easy in the human soul. Copies are all the ‘system’ requires of them, pure documentary repetition. It isn’t so much that their individuality does not matter, but that it matters too much, perturbing (‘blotting’) the function of the whole. So far as social machinery is legal machinery, you could say law-copyists belong to the neglected innards of mid-19th century society. Bartleby belongs to what might be called the caste of the most invisible men.

What makes him worthy of literary visibility turns on a second manifestation of ignorance, this one belonging to the narrator. “What my own astonished eyes saw of Bartleby,” he tells us, “that is all I know of him, except, indeed, one vague report which will appear in the sequel.” And even though the narrator thinks this interpersonal inscrutability constitutes “an irreparable loss to literature,” it turns out to be the very fact upon which the literary obsession with “Bartleby, the Scrivener” hangs. Bartleby is so visible because he is the most hidden of the hidden men.

Since comprehending the dimensions of a black box buried within a black box is impossible, the narrator has no choice but to illuminate the latter, to provide an accounting of Bartleby’s ecology: “Ere introducing the scrivener, as he first appeared to me, it is fit I make some mention of myself, my employees, my business, my chambers, and general surroundings; because some such description is indispensable to an adequate understanding of the chief character about to be presented.” In a sense, Bartleby is nothing apart from his ultimately profound impact on this ecology, such is his mystery.

Aside from inklings of pettiness, the narrator’s primary attribute, we learn, is also invisibility, the degree to which he disappears into his social syntactic role. “I am one of those unambitious lawyers who never addresses a jury, or in any way draws down public applause; but in the cool tranquility of a snug retreat, do a snug business among rich men’s bonds and mortgages and title-deeds,” he tells us. “All who know me, consider me an eminently safe man.” He is, in other words, the part that does not break down, and so, like Heidegger’s famed hammer, never becomes something present to hand, an object of investigation in his own right.

His description of his two existing scriveners demonstrates that his ‘safety’ is to some extent rhetorical, consisting in his ability to explain away inconsistencies, real or imagined. Between Turkey’s afternoon drunkenness and Nippers’ foul morning temperament, you could say his office is perpetually compromised, but the narrator chooses to characterize it otherwise, in terms of each man mechanically cancelling out the incompetence of the other. “Their fits relieved each other like guards,” the narrator informs us, resulting in “a good natural arrangement under the circumstances.”

He depicts what might be called an economy of procedural and interpersonal reflexes, a deterministic ecology consisting of strictly legal or syntactic demands, all turning on the irrelevance of the discharging individual, the absence of ‘blots,’ and a stochastic ecology of sometimes conflicting personalities. Not only does he instinctively understand the insoluble nature of the latter, he also understands the importance of apology, the power of language to square those circles that refuse to be squared. When he comes “within an ace” of firing Turkey, the drunken scrivener need only bow and say what amounts to nothing to mollify his employer. As with bonds and mortgages and title-deeds, the content does not so much matter as does the syntax, the discharge of social procedure. Everyone in his office “up stairs at No.—Wall-street” is a misfit, and the narrator is a compulsive ‘fitter,’ forever searching for ways to rationalize, mythologize, and so normalize, the idiosyncrasies of his interpersonal circumstances.

And of course, he and his fellows are entombed by the walls of Wall Street, enjoying ‘unobstructed views’ of obstructions. Theirs is a subterranean ecology, every bit as “deficient in what landscape painters call ‘life’” as the labour that consumes them.

Enter Bartleby. “After a few words touching his qualifications,” the narrator informs us, “I engaged him, glad to have among my corps of copyists a man of so singularly sedate an aspect, which I thought might operate beneficially upon the flighty temper of Turkey, and the fiery one of Nippers.” Absent any superficial sign of idiosyncrasy, he seems the perfect ecological fit. The narrator gives the man a desk behind a screen in his own office, a corner possessing a window upon obstruction.

After three days, he calls out to Bartleby to examine the accuracy of a document, reflexively assuming the man would discharge the task without delay, only to hear Bartleby, obscure behind his green screen, say the fateful words that would confound, not only our narrator, but countless readers and critics for generations to come: “I would prefer not to.” The narrator is gobsmacked:

“I sat awhile in perfect silence, rallying my stunned faculties. Immediately it occurred to me that my ears had deceived me, or Bartleby had entirely misunderstood my meaning. I repeated my request in the clearest tone I could assume. But in quite as clear a one came the previous reply, “I would prefer not to.””

Given the “natural expectancy of instant compliance,” the narrator assumes the breakdown is communicative. When he realizes this isn’t the case, he confronts Bartleby directly, to the same effect:

“Not a wrinkle of agitation rippled him. Had there been the least uneasiness, anger, impatience or impertinence in his manner; in other words, had there been any thing ordinarily human about him, doubtless I should have violently dismissed him from the premises. But as it was, I should have as soon thought of turning my pale plaster-of-paris bust of Cicero out of doors.”

Realizing that he has been comprehended, the narrator assumes willful defiance, that Bartleby seeks to provoke him, and that, accordingly, the man will present the cues belonging to interpersonal power struggles more generally. When Bartleby manifests none of these signs, the hapless narrator lacks the social script he requires to solve the problem. Turning out the scrivener becomes as unthinkable as surrendering his bust of Cicero, which is to say, the very emblem of his legal vocation.

The next time Bartleby refuses to read, the narrator demands an explanation, asking, “Why do you refuse?” To which Bartleby replies, once again, “I would prefer not to.” When the narrator presses, resolved “to reason with him,” he realizes that dysrationalia is not the problem: “It seemed to me that while I had been addressing him, he carefully revolved every statement that I made; fully comprehended the meaning; could not gainsay the irresistible conclusions; but, at the same time, some paramount consideration prevailed with him to reply as he did.”

If Bartleby were non compos mentis, then he could be ‘medicalized,’ reduced to something the narrator would find intelligible—something providing some script for action. Instead, the scrivener understands, or manifests as much, leaving the narrator groping for evidence of his own rationality:

“It is not seldom the case that when a man is browbeaten in some unprecedented and violently unreasonable way, he begins to stagger in his own plainest faith. He begins, as it were, vaguely to surmise that, wonderful as it may be, all the justice and all the reason is on the other side. Accordingly, if any disinterested persons are present, he turns to them for some reinforcement for his own faltering mind.”

For a claim to be rational it must be rational to everyone. Each of us is stranded with our own perspective, and each of us possesses only the dimmest perspective on that perspective: rationality is something we can only assume. This is why ‘truth’ (especially in ‘normative’ matters (politics)) so often amounts to a ‘numbers game,’ a matter of tallying up guesses. Our blindness to our cognitive orientation—medial neglect—combined with the generativity of the human brain and the capriciousness of our environments, requires the communicative policing of cognitive idiosyncrasies. Whatever rationality consists in, minimally it functions to minimize discrepancies between individuals, sometimes vis-à-vis their environments and sometimes not. Reason, like the narrator, makes things fit.

The ‘disinterested persons’ the narrator turns to are themselves misfits, with “Nippers’ ugly mood on duty and Turkey’s off.” The irony here, and what critics are prone to find most interesting, is that the three are anything but disinterested. The more thought-provoking fact, however, lies in the way they agree with their employer despite the wild variance of their answers. For all the idiosyncrasies of its constituents, the office ecology automatically manages to conserve its ‘paramount consideration’: functionality.

Baffled unto inaction, the narrator suffers bouts of explaining away Bartleby’s discrepancies in terms of his material and moral utilities. The fact of his indulgences alternately congratulates and exasperates him: Bartleby becomes (and remains) a bi-stable sociocognitive figure, alternately aggressor and victim. “Nothing so aggravates an earnest person as a passive resistance,” the narrator explains. “If the individual so resisted be of a not inhumane temper, and the resisting one perfectly harmless in his passivity; then, in the better moods of the former, he will endeavor charitably to construe to his imagination what proves impossible to be solved by his judgment.” To be earnest is to be prone to minimize social discrepancies, to optimize via the integrations of others. The passivity of “I would prefer not to” poises Bartleby upon a predictive-processing threshold, one where the vicissitudes of mood are enough to transform him from a ‘penniless wight’ into a ‘brooding Marius’ and back again. The signals driving the charitable assessment are constantly interfering with the signals driving the uncharitable assessment, forcing the different neural hypotheses to alternate.

Via this dissonance, the scrivener begins to train him, with each “I would prefer not to” tending “to lessen the probability of [his] repeating the inadvertence.”

The ensuing narrative establishes two facts. First, we discover that Bartleby belongs to the office ecology, and in a manner more profound than even the narrator, let alone any one of his employees. Discovering Bartleby indisposed in his office on a Sunday, the narrator finds himself fleeing his own premises, alternately lost in “sad fancyings—chimeras, doubtless, of a sick and silly brain” and “[p]resentiments of strange discoveries”—strung between delusion and revelation.

Second, we learn that Bartleby, despite belonging to the office ecology, nevertheless signals its ruination:

“Somehow, of late I had got into the way of involuntarily using this word “prefer” upon all sorts of not exactly suitable occasions. And I trembled to think that my contact with the scrivener had already and seriously affected me in a mental way. And what further and deeper aberration might it not yet produce?”

When the narrator catches Turkey also saying “prefer,” he says, “So you have got the word too,” as if a verbal tic could be caught like a cold. Turkey manifests cryptomnesia. Nippers does the same not moments afterward—every bit as unconsciously as Turkey. Knowing nothing of the way humans have evolved to unconsciously copy linguistic behaviour, the narrator construes Bartleby as a kind of contagion—or pollutant, a threat to his delicately balanced office ecology. He once again determines he must rid his office of the scrivener’s insidious influence, but, under that influence, once again allows prudence—or the appearance of such—to dissuade immediate action.

Bartleby at last refuses to copy, irrevocably undoing the foundation of the narrator’s ersatz rationalizations. “And what is the reason?” the narrator demands to know. Staring at the brick wall just beyond his window, Bartleby finally offers a different explanation: “Do you not see the reason for yourself.” Though syntactically structured as a question, this statement possesses no question mark in Melville’s original version (as it does, for instance, in the version anthologized by Norton). And indeed, the narrator misses the very reason implied by his own narrative—the wall that occupied so many of Bartleby’s reveries—and confabulates an apology instead: work-induced ‘impaired vision.’

But this rationalization, like all the others, is quickly exhausted. The internal logic of the office ecology is entirely dependent on the logic of Wall-street: the text continually references the functional exigencies commanding the ebb and flow of their lives, the way “necessities connected with my business tyrannized over all other considerations.” The narrator, when all is said and done, is an instrument of the Law and the countless institutions dependent upon it. At long last he fires Bartleby rather than merely resolving to do so.

He celebrates his long-deferred decisiveness while walking home, only to once again confront the blank wall the scrivener has become:

“My procedure seemed as sagacious as ever—but only in theory. How it would prove in practice—there was the rub. It was truly a beautiful thought to have assumed Bartleby’s departure; but, after all, that assumption was simply my own, and none of Bartleby’s. The great point was, not whether I had assumed that he would quit me, but whether he would prefer so to do. He was more a man of preferences than assumptions.”

And so, the great philosophical debate, both within the text and its critical reception, is set into motion. Lost in rumination, the narrator overhears someone say, “I’ll take odds he doesn’t,” on the street, and angrily retorts, assuming the man was referring to Bartleby, and not, as was actually the case, an upcoming election. Bartleby’s ‘passive resistance’ has so transformed his cognitive ecology as to crash his ability to make sense of his fellow man. Meaning, at least so far as it exists in his small pocket of the world, has lost its traditional stability.

Of course, the stranger’s voice, though speaking of a different matter altogether, had spoken true. Bartleby prefers not to leave the office that has become his home.

“What was to be done? or, if nothing could be done, was there any thing further that I could assume in the matter? Yes, as before I had prospectively assumed that Bartleby would depart, so now I might retrospectively assume that departed he was. In the legitimate carrying out of this assumption, I might enter my office in a great hurry, and pretending not to see Bartleby at all, walk straight against him as if he were air. Such a proceeding would in a singular degree have the appearance of a home-thrust. It was hardly possible that Bartleby could withstand such an application of the doctrine of assumptions.”

The ‘home-thrust,’ in other words, is to simply pretend, to physically enact the assumption of Bartleby’s absence, to not only ignore him, but to neglect him altogether, to the point of walking through him if need be. “But upon second thoughts the success of the plan seemed rather dubious,” the narrator realizes. “I resolved to argue the matter over with him again,” even though argument, Sellars’ famed ‘game of giving and asking for reasons,’ is something Bartleby prefers not to recognize.

When the application of reason fails once again, the narrator at last entertains the thought of killing Bartleby, realizing “the circumstance of being alone in a solitary office, up stairs, of a building entirely unhallowed by humanizing domestic associations” is one tailor-made for the commission of murder. Even acts of evil have their ecological preconditions. But rather than seize Bartleby, he ‘grapples and throws’ the murderous temptation, recalling the Christian injunction to love his neighbour. As research suggests, imagination correlates with indecision, the ability to entertain (theorize) possible outcomes: the narrator is nothing if not an inspired social confabulator. For every action-demanding malignancy he ponders, his aversion to confrontation occasions another reason for exemption, which is all he needs to reduce the discrepancies posed.

He resigns himself to the man:

“Gradually I slid into the persuasion that these troubles of mine touching the scrivener, had been all predestinated from eternity, and Bartleby was billeted upon me for some mysterious purpose of an all-wise Providence, which it was not for a mere mortal like me to fathom. Yes, Bartleby, stay there behind your screen, thought I; I shall persecute you no more; you are harmless and noiseless as any of these old chairs; in short, I never feel so private as when I know you are here. At last I see it, I feel it; I penetrate to the predestinated purpose of my life. I am content. Others may have loftier parts to enact; but my mission in this world, Bartleby, is to furnish you with office-room for such period as you may see fit to remain.”

But this story, for all its grandiosity, likewise melts before the recalcitrant scrivener. The comical notion that furnishing Bartleby an office could have cosmic significance merely furnishes a means of ignoring what cannot be ignored: how the man compromises, in ways crude and subtle, the systems of assumptions, the network of rational reflexes, comprising the ecology of Wall-street. In other words, the narrator’s clients are noticing…

“Then something severe, something unusual must be done. What! surely you will not have him collared by a constable, and commit his innocent pallor to the common jail? And upon what ground could you procure such a thing to be done?—a vagrant, is he? What! he a vagrant, a wanderer, who refuses to budge? It is because he will not be a vagrant, then, that you seek to count him as a vagrant. That is too absurd. No visible means of support: there I have him. Wrong again: for indubitably he does support himself, and that is the only unanswerable proof that any man can show of his possessing the means so to do.”

At last invisibility must be sacrificed, and regularity undone. The narrator ratchets through the facts of the scrivener’s cognitive bi-stability. An innocent criminal. An immovable vagrant. Unsupported yet standing. Reason itself cracks about him. And what reason cannot touch only fight or flight can undo. If the ecology cannot survive Bartleby, and Bartleby is immovable, then the ecology must be torn down and reestablished elsewhere.

It’s tempting to read this story in ‘buddy terms,’ to think that the peculiarities of Bartleby only possess the power they do given the peculiarities of the narrator. (One of the interesting things about the yarn is the way it both congratulates and insults the neuroticism of the critic, who, having canonized Bartleby, cannot but flatter themselves both by thinking they would have endured Bartleby the way the narrator does, and by thinking that surely they wouldn’t be so disabled by the man). The narrator’s decision to relocate allows us to see the universality of his type, how others possessing far less history with the scrivener are themselves driven to apologize, to exhaust all ‘quiet’ means of minimizing discrepancies. “[S]ome fears are entertained of a mob,” his old landlord warns him, desperate to purge the scrivener from No.—Wall-street.

Threatened with exposure in the papers—visibility—the narrator once again confronts Bartleby the scrivener. This time he comes bearing possibilities of gainful employment, greener pastures, some earnest, some sarcastic, only to be told, “I would prefer not to,” with the addition of, “I am not particular.” And indeed, as Bartleby’s preference severs ever more ecological connections, he seems to become ever more super-ecological, something outside the human communicative habitat. Repulsed yet again, the narrator flees Wall-street altogether.

Bartleby, meanwhile, is imprisoned in the Tombs, the name given to the House of Detention in lower Manhattan. A walled street is replaced by a walled yard—which, the narrator will tell Bartleby, “is not so sad a place as one might think,” the irony being, of course, that with sky and grass the Tombs actually represent an improvement over Wall-street. Bartleby, for his part, only has eyes for the walls—his unobstructed view of obstruction. To assure his former scrivener is well fed, the narrator engages the prison cook, who asks him whether Bartleby is a forger, likening the man to Monroe Edwards, a famed slave-trader and counterfeiter in Melville’s day. Despite the criminal connotations of Nippers, the narrator assures the man he was “never socially acquainted with any forgers.”

On his next visit, he discovers that Bartleby’s metaphoric ‘dead wall reveries’ have become literal. The narrator finds him “huddled at the base of the wall, his knees drawn up, and lying on his side, his head touching the cold stones,” dead of starvation. Cutting the last, most fundamental ecological reflex of all—the consumption of food—Bartleby has finally touched the face of obstruction… oblivion.

The story proper ends with one last misinterpretation: the cook assuming that Bartleby sleeps. And even here, at this final juncture, the narrator apologizes rather than corrects, quoting Job 3:14, using the Holy Bible, perhaps, to “mason up his remains in the wall.” Melville, however, seems to be gesturing to the fundamental problem underwriting the whole of his tale, the problem of meaning, quoting a fragment of Job in extremis, asking God why he should have been born at all, if his lot was only desolation. What meaning resides in such a life? Why not die an innocent?

Like Bartleby.

What the narrator terms the “sequel” consists of no more than two paragraphs (set apart by a ‘wall’ of eight asterisks), the first divulging “one little item of rumor” which may or may not be more or less true, the second famously consisting in, “Ah Bartleby! Ah humanity!” The rumour occasioning these apostrophic cries suggests “that Bartleby had been a subordinate clerk in the Dead Letter Office at Washington, from which he had been suddenly removed by a change of administration.”

What moves the narrator to passions too complicated to scrutinize is nothing other than the ecology of such a prospect: “Conceive a man by nature and misfortune prone to a pallid hopelessness, can any business seem more fitted to heighten it than that of continually handling these dead letters, and assorting them for the flames?” Here at last, he thinks, we find some glimpse of the scrivener’s original habitat: dead letters potentially fund the reason the man forever pondered dead walls. Rather than a forger, one who cheats systems, Bartleby is an undertaker, one who presides over their crashing. The narrator paints his final rationalization, Bartleby mediating an ecology of fatal communicative interruptions:

“Sometimes from out the folded paper the pale clerk takes a ring:—the finger it was meant for, perhaps, moulders in the grave; a bank-note sent in swiftest charity:—he whom it would relieve, nor eats nor hungers any more; pardon for those who died despairing; hope for those who died unhoping; good tidings for those who died stifled by unrelieved calamities. On errands of life, these letters speed to death.”

An ecology, in other words, consisting of quotidian ecological failures, life lost for the interruption of some crucial material connection, be it ink or gold. Thus are Bartleby and humanity entangled in the failures falling out of neglect, the idiosyncratic, the addresses improperly copied, and the ill-timed, the words addressed to those already dead. A meta-ecology where discrepancies can never be healed, only consigned to oblivion.

But, of course, were Bartleby still living, this ‘sad fancying’ would likewise turn out to be a ‘chimera of a sick and silly brain.’ Just another way to brick over the questions. If the narrator finds consolation, the wreckage of his story remains.

 

II

I admit that I feel more like Ahab than Ishmael… most of the time. But I’m not so much obsessed by the White Whale as by what is obliterated when it’s revealed as yet another mere cetacean. Be it the wrecking of The Pequod, or the flight of the office at No.— Wall-street, the problem of meaning is my White Whale. “Bartleby, the Scrivener” is compelling, I think, to the degree it lends that problem the dimensionality of narrative.

Where in Moby-Dick, the relation between the inscrutable and the human is presented via Ishmael, which is to say the third person, in Bartleby, the relation is presented in the second: the narrator is Ahab, every bit as obsessed with his own pale emblem of unaccountable discrepancy—every bit as maddened. The violence is merely sublimated in quotidian discursivity.

The labour of Ishmael falls to the critic. “Life is so short, and so ridiculous and irrational (from a certain point of view),” Melville writes to John C. Hoadley in 1877, “that one knows not what to make of it, unless—well, finish the sentence for yourself.” A great many critics have, spawning what Dan McCall termed (some time ago now) the ‘Bartleby Industry.’ There are so many interpretations, in fact, that the only determinate thing one can say regarding the text is that it systematically underdetermines every attempt to determine its ‘meaning.’

In the ecology of literary and philosophical critique, Bartleby remains a crucial watering hole in an ever-shrinking reservation of the humanities. A great number of these interpretations share the narrator’s founding assumption, that Bartleby—the character—represents something exceptional. Consider, for instance, Deleuze in “Bartleby; or, the Formula.”

“If Bartleby had refused, he could still be seen as a rebel or insurrectionary, and as such would still have a social role. But the formula stymies all speech acts, and at the same time, it makes Bartleby a pure outsider [exclu] to whom no social position can be attributed. This is what the attorney glimpses with dread: all his hopes of bringing Bartleby back to reason are dashed because they rest on a logic of presuppositions according to which an employer ‘expects’ to be obeyed, or a kind of friend listened to, whereas Bartleby has invented a new logic, a logic of preference, which is enough to undermine the presuppositions of language as a whole.” 73

Or consider Zizek, who uses Bartleby to conclude The Parallax View no less:

“In his refusal of the Master’s order, Bartleby does not negate the predicate; rather, he affirms a nonpredicate: he does not say that he doesn’t want to do it; he says that he prefers (wants) not to do it. This is how we pass from the politics of “resistance” or “protestation,” which parasitizes upon what it negates, to a politics which opens up a new space outside the hegemonic position and its negation.” 380-1

Bartleby begets ‘Bartleby politics,’ the possibility of a relation to what stands outside relationality, a “move from something to nothing, from the gap between two ‘somethings’ to the gap that separates a something from nothing, from the void of its own place” (381). Bartleby isn’t simply an outsider on this account, he’s a pure outsider, more limit than liminal. And this, of course, is the very assumption that the narrator himself carries away intact: that Bartleby constitutes something ontologically or logically exceptional.

I no longer share this assumption. Like Borges in his “Prologue to Herman Melville’s ‘Bartleby’,” I see that “the symbol of the whale is less apt for suggesting the universe is vicious than for suggesting its vastness, its inhumanity, its bestial or enigmatic stupidity.” Melville, for all the wide-eyed grandiloquence of his prose, was a squinty-eyed skeptic. “These men are all cracked right across the brow,” he would write of philosophers such as Emerson. “And never will the pullers-down be able to cope with the builders-up.” For him, the interest always lies in the distances between lofty discourse and the bloody mundanities it purports to solve. As he writes to Hawthorne in 1851:

“And perhaps after all, there is no secret. We incline to think that the Problem of the Universe is like the Freemason’s mighty secret, so terrible to all children. It turns out, at last, to consist in a triangle, a mallet, and an apron—nothing more! We incline to think that God cannot explain His own secrets, and that He would like a little more information upon certain points Himself. We mortals astonish Him as much as He us.”

It’s an all too human reflex. Ignorance becomes justification for the stories we want to tell, and we are filled with “oracular gibberish” as a result.

So what if Bartleby holds no secrets outside the ‘contagion of nihilism’ that Borges ascribes to him?

As a novelist, I cannot but read the tale, with its manifest despair and gallows humour, as the expression of another novelist teetering on the edge of professional ruin. Melville conceived and wrote “Bartleby, the Scrivener” during a dark period of his life. Both Moby-Dick and Pierre had proved to be critical and commercial failures. As Melville would write to Hawthorne:

“What I feel most moved to write, that is banned—it will not pay. Yet, altogether write the other way I cannot. So the product is a final hash, and all my books are botches.”

Forgeries, neither artistic nor official. Two species of neuroticism plague full-time writers, particularly if they possess, as Melville most certainly did, a reflective bent. There’s the neuroticism that drives a writer to write, the compulsion to create, and there’s the neuroticism secondary to a writer’s consciousness of this prior incapacity, the neurotic compulsion to rationalize one’s neuroticism.

Why, for instance, am I writing this now? Am I a literary critic? No. Am I being paid to write this? No. Are there things I should be writing instead? Buddy, you have no idea. So why don’t I write as I should?

Well, quite simply, I would prefer not to.

And why is this? Is it because I have some glorious spark in me? Some essential secret? Am I, like Bartleby, a pure outsider?

Or am I just a fucking idiot? A failed copyist.

For critics, the latter is pretty much the only answer possible when it comes to living writers who genuinely fail to copy. No matter how hard we wave discrepancy’s flag, we remain discrepancy minimization machines—particularly where social cognition is concerned. Living literary dissenters cue reflexes devoted to living threats: the only good discrepancy is a dead discrepancy. As the narrator discovers, attributing something exceptional becomes far easier once the dissenter is dead. Once the source falls silent, the consequences possess the freedom to dispute things as they please.

Writers themselves, however, discover they are divided, that Ahab is not Ahab, but Ishmael as well, the spinner of tales about tales. A failed copyist. A hapless lawyer. Gazing at obstruction, chasing the whale, spinning rationalization after rationalization, confabulating as a human must, taking meagre heart in spasms of critical fantasy.

Endless interpretative self-deception. As much as I recognize Bartleby, I know the narrator only too well. This is why for me, “Bartleby, the Scrivener” is best seen as a prank on the literary establishment, a virus uploaded with each and every Introduction to American Literature class, one assuring that the critic forever bumbles as the narrator bumbles, waddling the easy way, the expected way, embodying more than applying the ‘doctrine of assumptions.’ Bartleby is the paradigmatic idiot, both in the ancient Greek sense of idios, private unto inscrutable, and idiosyncratic unto useless. But for the sake of vanity and cowardice, we make of him something vast, more than a metaphor for x. The character of Bartleby, on this reading, is not so much key to understanding something ‘absolute’ as he is key to understanding human conceit—which is to say, the confabulatory stupidity of the critic.

But explaining the prank, of course, amounts to falling for the prank (this is the key to its power). No matter how mundane one’s interpretation of Bartleby, as an authorial double, as a literary prank, it remains simply one more interpretation, further evidence of the narrative’s profound indeterminacy. ‘Negative exceptionalists’ like Deleuze or Zizek (or Agamben) need only point out this fact to rescue their case—don’t they? Even if Melville conceived Bartleby as his neurotic alter-ego, the word-crazed husband whose unaccountable preferences had reduced his family to penury (and so, charity), he nonetheless happened upon “a zone of indetermination or indiscernibility in which neither words nor characters can be distinguished” (“Bartleby, or the Formula,” 76).

No matter how high one stacks their mundane interpretations of Bartleby—as an authorial alter-ego, a psycho-sociological casualty, an exemplar of passive resistance, or so on—his rationality-crashing function remains every bit as profound, as exceptional. Doesn’t it? After all, nothing essential binds the distal intent of the author (itself nothing but another narrative) to the proximate effect of the text, which is to “send language itself into flight” (76). Once we set aside the biographical, psychological, historical, economic, political, and so on, does not this formal function remain? And is it not irreducible, exceptional?

That depends on whether you think the Necker cube is exceptional. What should we say about Necker cubes? Do they mark the point where the visibility of the visible collapses, generating ‘a zone of indetermination or indiscernibility in which neither indents nor protrusions can be distinguished’? Are they ‘pure figures,’ efficacies that stand outside the possibility of intelligible geometry? Or do they merely present the visual cortex with the demand to distinguish between indents and protrusions absent the information required to settle that demand, thus stranding visual experience upon the predictive threshold of both? Are they bi-stable images?

The first explanation pretty clearly mistakes a heuristic breakdown in the cognition of visual information for an exceptional visual object, something intrinsically indeterminate—something super-geometrical, in fact. When we encounter something visually indeterminate, we immediately blame our vision, which is to say, the invisible, enabling dimension of visual cognition. Visual discrepancies had real reproductive consequences, evolutionarily speaking. Thanks to medial neglect, we had no way of cognizing the ecological nature of vision, so we could only blink, peer, squint, rub our eyes, or change our position. If the discrepancy persisted, we wondered at it, and if we could, transformed it into something useful—be it cuing environmental forms on cave or cathedral walls (‘visual representations’) or cuing wonder with kaleidoscopes at Victorian exhibitions.

Likewise, Deleuze and Zizek (and many, many others) are mistaking a heuristic breakdown in the cognition of social information for an exceptional social entity, something intrinsically indeterminate—something super-social. Imagine encountering a Bartleby in your own place of employ. Imagine your employer not simply tolerating him, but enabling him, allowing him to drift ever deeper into anorexic catatonia. Initially, when we encounter something socially indeterminate in vivo, we typically blame communication—as does the narrator with Bartleby. Social discrepancies, one might imagine, had profound reproductive consequences (given that reproduction is itself social). The narrator’s sensitivity to such discrepancies is the sensitivity that all of us share. Given medial neglect, however, we have no way of cognizing the ecological nature of social cognition. So we check with our colleagues just to be sure (‘Am I losing my mind here?’), then we blame the breakdown in rational reflexes on the man himself. We gossip, test out this or that pet theory, pester spouses who, insensitive to potential micropolitical discrepancies, urge us to file a complaint with someone somewhere. Eventually, we either quit the place, get the poor sod some help, or transform him into something useful, like “Bartleby politics” or what have you. This is the prank that Melville lays out with the narrator—the prank that all post-modern appropriations of this tale trip into headlong…

The ecological nature of cognition entails the blindness of cognition to its ecological nature. We are distributed systems: we evolved to take as much of our environments for granted as we possibly could, accessing as little as possible to solve as many problems as possible. Experience and cognition turn on shallow information ecologies, blind systems turning on reliable (because reliably generated) environmental frequencies to solve problems—especially communicative problems. Absent the requisite systems and environments, these ecologies crash, resulting in the application of cognitive systems to situations they cannot hope to solve. Those who have dealt with addicted or mentally-ill loved ones know the profundity of these crashes first-hand, the way the unseen reflexes (‘preferences’) governing everyday interactions cast you into dismay and confusion time and again, all for want of applicability. There’s the face, the eyes, all the cues signaling them as them, and then… everything collapses into mealy alarm and confusion. Bartleby, with his dissenting preference, does precisely the same: Melville provides exquisite experiential descriptions of the dumbfounding characteristic of sociocognitive crashes.

Bartleby need not be a ‘pure outsider’ to do this. He just needs to provide enough information to demand disambiguation, but not enough information to provide it. “I would prefer not to”—Bartleby’s ‘formula,’ according to Deleuze—is anything but ‘minimal’: its performance functions the way it does because of the intricate communicative ecology it belongs to. But given medial neglect, our blindness to ecology, the formula is prone to strike us as something quite different, as something possessing no ecology.

It certainly strikes Deleuze as such:

“The formula is devastating because it eliminates the preferable just as mercilessly as any nonpreferred. It not only abolishes the term it refers to, and that it rejects, but also abolishes the other term it seemed to preserve, and that becomes impossible. In fact, it renders them indistinct: it hollows out an ever expanding zone of indiscernibility or indetermination between some nonpreferred activities and a preferable activity. All particularity, all reference is abolished.” 71

Since preferences affirm, ‘preferring not to’ (expressed in the subjunctive no less) can be read as an affirmative negation: it affirms the negation of the narrator’s request. Since nothing else is affirmed, there’s a peculiar sense in which ‘preferring not to’ possesses no reference whatsoever. Medial neglect assures that reflection on the formula occludes the enabling ecology, that asking what the formula does will result in fetishization, the attribution of efficacy in an explanatory vacuum. Suddenly ‘preferring not to’ appears to be a ‘semantic disintegration grenade,’ something essentially disruptive.

In point of natural fact, however, human sociocognition is fundamentally interactive, consisting in the synchronization of radically heuristic systems given only the most superficial information. Understanding one another is a radically interdependent affair. Bartleby presents all the information cuing social reliability, therefore consistently cuing predictions of reliability that turn out to be faulty. The narrator subsequently rummages through the various tools we possess to solve harmless acts of unreliability given medial neglect—tools which have no applicability in Bartleby’s case. Not only does Bartleby crash the network of predictive reflexes constituting the office ecology, he crashes the sociocognitive hacks that humans in general use to troubleshoot such breakdowns. He does so, not because of some arcane semantic power belonging to the ‘formula,’ but because he manifests as a sociocognitive Necker-Cube, cuing noncoercive troubleshooting routines that have no application given whatever his malfunction happens to be.

This is the profound human fact that Melville’s skeptical imagination fastened upon, as well as the reason Bartleby is ‘nothing in particular’: all human social cognition is fundamentally ecological. Consider, once again, the passage where the narrator entertains the possibility of neglecting Bartleby altogether, simply pretending he was absent:

“What was to be done? or, if nothing could be done, was there any thing further that I could assume in the matter? Yes, as before I had prospectively assumed that Bartleby would depart, so now I might retrospectively assume that departed he was. In the legitimate carrying out of this assumption, I might enter my office in a great hurry, and pretending not to see Bartleby at all, walk straight against him as if he were air. Such a proceeding would in a singular degree have the appearance of a home-thrust. It was hardly possible that Bartleby could withstand such an application of the doctrine of assumptions. But upon second thoughts the success of the plan seemed rather dubious. I resolved to argue the matter over with him again.”

Having reached the limits of sociocognitive application, he proposes simply ignoring any subsequent failure in prediction, in effect, wishing the Bartlebian crash space away. The problem, of course, is that it ‘takes two to tango’: he has no choice but to ‘argue the matter again’ because the ‘doctrine of assumptions’ is interactional, ecological. What Melville has fastened upon here is the way the astronomical complexity of the sociocognitive (and metacognitive) systems involved holds us hostage, in effect, to their interactional reliability. Meaning depends on maddening sociocognitive intricacies.

The entirety of the story illustrates the fragility of this cognitive ecosystem despite its all-consuming power. Time and again Bartleby is characterized as an ecological casualty of the industrialization of social relations, be it the mass disposal of undelivered letters or the mass reproduction of legally binding documentation. Like ‘computer,’ ‘copier’ names something that was once human but has since become technology. But even as Bartleby’s breakdown expresses the system’s power to break the maladapted, it also reveals its boggling vulnerability, the ease with which it evaporates into like-minded conspiracies and ‘mere pretend.’ So long as everyone plays along—functions reliably—this interdependence remains occluded, and the irrationality (the discrepancy-generating stupidity) of the whole never need be confronted.

In other words, the lesson of Bartleby can be profound, as profound as human communication and cognition itself, without implying anything exceptional. Stupidity, blind, obdurate obliviousness, is all that is required. A minister’s black veil, a bit of crepe poised upon the right interactional interface, can throw whole interpretative communities from their pins. The obstruction, the blank wall, need not conceal anything magical to crash the gossamer ecologies of human life. It need only appear to be a window, or more cunning still, a window upon a wall. We need only be blind to the interactional machinery of looking to hallucinate absolute horizons. Blind to the meat of life.

And in this sense, we can accuse the negative exceptionalists such as Deleuze and Zizek not simply of ignoring life, the very topos of literature, but of concealing the threat that the technologization of life poses to life. Only in an ecology can we understand the way victims can at once be assailants absent aporia, how Bartleby, overthrown by the technosocial ecologies of his age, can in turn overthrow that technosocial ecology. Only understanding life for what we know it to be—biological—allows us to see the profound threat the endless technological rationalization of human sociocognitive ecologies poses to the viability of those ecologies. For Bartleby, in revealing the ecological fragility of human social cognition, how break begets break, also reveals the antithesis between ‘progress’ and ‘meaning,’ how the former can only carry the latter so far before crashing.

As Deleuze and Zizek have it, Bartleby holds open a space of essential resistance. As the reading here has it, Bartleby provides a grim warning regarding the ecological fragility of human social cognition. One can even look at him as a blueprint for the potential weaponization of anthropomorphic artificial intelligence, systems designed to strand individual decision-making upon thresholds, to command inaction via the strategic presentation of cues. Far from representing some messianic discrepancy, apophatic proof of transcendence, he represents the way we ourselves become cognitive pollutants when abandoned to polluted cognitive ecologies.

AI and the Coming Cognitive Ecological Collapse: A Reply to David Krakauer

by rsbakker


Thanks to Dirk and his tireless linking generosity, I caught “Will AI Harm Us?” in Nautilus by David Krakauer, the President of the Santa Fe Institute, on the potential dangers posed by AI on this side of the Singularity. According to Krakauer, the problem lies in the fact that AIs are competitive as opposed to complementary cognitive artifacts of the kind we have enjoyed until now. Complementary cognitive artifacts, devices ranging from mnemonics to astrolabes to mathematical notations, allow us to pull up the cognitive ladder behind us in some way—to somehow do without the tool. “In almost every use of an ancient cognitive artifact,” he writes, “after repeated practice and training, the artifact itself could be set aside and its mental simulacrum deployed in its place.”

Competitive cognitive artifacts, however, things like calculators, GPSes, and pretty much anything AI-ish, don’t let us kick away the ladder. We lose the artifact, and we lose the ability. As Krakauer writes:

In the case of competitive artifacts, when we are deprived of their use, we are no better than when we started. They are not coaches and teachers—they are serfs. We have created an artificial serf economy where incremental and competitive artificial intelligence both amplifies our productivity and threatens to diminish organic and complementary artificial intelligence…

So where complementary cognitive artifacts teach us how to fish, competitive cognitive artifacts simply deliver the fish, rendering us dependent. Krakauer’s complaint against AI, in other words, is the same as Plato’s complaint against writing, and I think fares just as well argumentatively. As Socrates famously claims in The Phaedrus,

For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.

The problem with writing is that it is competitive precisely in Krakauer’s sense: it’s a ladder we cannot kick away. What Plato could not foresee, of course, was the way writing would fundamentally transform human cognitive ecology. He was a relic of the preliterate age, just as Krakauer (like us) is a relic of the pre-AI age. The problem for Krakauer, then, is that the distinction between complementary and competitive cognitive artifacts—the difference between things like mnemonics and things like writing—possesses no reliable evaluative force. All tools involve trade-offs. Since Krakauer has no way of knowing how AI will transform our cognitive ecology, he has no way of evaluating the kinds of trade-offs it will force upon us.

This is the problem with all ‘excess dependency arguments’ against technology, I think: they have no convincing way of assessing the kind of cognitive ecology that will result, aside from the fact that it involves dependencies. No one likes dependencies, ergo…

But I like to think I’ve figured the naturalistic riddle of cognition out,* and as a result I think I can make a pretty compelling case why we should nevertheless accept that AI poses a very grave threat this side of the Singularity. The problem, in a nutshell, is that we are shallow information consumers, evolved to generate as much gene-promoting behaviour out of as little environmental information as possible. Human cognition relies on simple cues to draw very complex conclusions simply because it could always rely on adaptive correlations between those cues and the systems requiring solution: it could always depend on what might be called cognitive ecological stability.

Since our growing cognitive dependency on our technology always involves trade-offs, it should remain an important concern (as it clearly seems to be, given the endless stream of works devoted to the downside of this or that technology in this or that context). The dependency we really need to worry about, however, is our cognitive biological dependency on ancestral environmental correlations, simply because we have good reason to believe those cognitive ecologies will very soon cease to exist. Human cognition is thoroughly heuristic, which is to say, thoroughly dependent on cues reliably correlated to whatever environmental system requires solution. AI constitutes a particular threat because no form of human cognition is more heuristic, more cue dependent, than social cognition. Humans are very easily duped into anthropomorphizing given the barest cues, let alone processes possessing AI. It pays to remember the simplicity of the bots Ashley Madison used to gull male subscribers into thinking they were getting female nibbles.

And herein lies the rub: the environmental proliferation of AI means the fundamental transformation of our ancestral sociocognitive ecologies, from one where the cues we encounter are reliably correlated to systems we can in fact solve—namely, each other—into one where the cues we encounter are correlated to systems that cannot be fathomed, and the only soul solved is the consumer’s.

 

*  Bakker, R. Scott. “On Alien Philosophy,” Journal of Consciousness Studies, forthcoming.

Myth as Meth

by rsbakker

What is the lesson that Tolkien teaches us with Middle-earth? The grand moral, I think, is that the illusion of a world can be so easily cued. Tolkien reveals that meaning is cheap, easy to conjure, easy to believe, so long as we sit in our assigned seats. This is the way, at least, I thematically approach my own world-building. Like a form of cave-painting.

The idea here is to look at culture as a meaning machine, where ‘meaning’ is understood not as content, but in a post-intentional sense: various static and dynamic systems cuing various ‘folk’ forms of human cognition. Think of the wonder of the ‘artists’ in Chauvet, the amazement of discovering how to cue the cognition of worlds upon walls using only charcoal. Imagine that first hand, that first brain, tracking that reflex within itself, simply drawing a blacked finger down the wall.


Traditional accounts, of course, would emphasize the symbolic or representational significance of events such as Chauvet, thereby dragging the question of the genesis of human culture into the realm of endless philosophical disputation. On a post-intentional view, however, what Chauvet vividly demonstrates is how human cognition can be easily triggered out of school. Human cognition is so heuristic, in fact, that it has little difficulty simulating those cues once they have been discovered. Since human cognition also turns out to be wildly opportunistic, the endless socio-practical gerrymandering characterizing culture was all but inevitable. Where traditional views of the ‘human revolution’ focus on utterly mysterious modes of symbolic transmission and elaboration, the present account focuses on the processes of cue isolation and cognitive adaptation. What are isolated are material/behavioural means of simulating cues belonging to ancestral forms of cognition. What is adapted is the cognitive system so cued: the cave paintings at Chauvet amount to a socio-cognitive adaptation of visual cognition, a way to use visual cognitive cues ‘out of school’ to attenuate behaviour. Though meaning, understood intentionally, remains an important explanandum in this approach, ‘meaning’ understood post-intentionally simply refers to the isolation and adaptation of cue-based cognitive systems to achieve some systematic behavioural effect. The basic processes involved are no more mysterious than those underwriting camouflage in nature.*

A post-intentional theory of meaning focuses on the continuity of semantic practices and nature, and views any theoretical perspective entailing the discontinuity of those practices and nature as spurious artifacts of the application of heuristic modes of cognition to theoretical issues. A post-intentional theory of meaning, in other words, views culture as a natural phenomenon, and not some arcane artifact of something empirically inexplicable. Signification is wholly material on this account, with all the messiness that comes with it.

Cognitive systems optimize effectiveness by reaching out only as far into nature as they need to. If they can solve distal systems via proximal signals possessing reliable systematic relationships to those systems, they will do so. Humans, like all other species possessing nervous systems, are shallow information consumers in what might be called deep information environments.



Consider anthropomorphism, the reflexive application of radically heuristic socio-cognitive capacities dedicated to solving our fellow humans to nonhuman species and nature more generally. When we run afoul of anthropomorphism we ‘misattribute’ folk posits adapted to human problem-solving to nonhuman processes. As misapplications, anthropomorphisms tell us nothing about the systems they take as their putative targets. One does not solve a drought by making offerings to gods of rain. This is what makes anthropomorphic worldviews ‘fantastic’: the fact that they tell us very little, if anything, about the very nature they purport to describe and explain.

Now this, on the face of things, should prove maladaptive, since it amounts to squandering tremendous resources and behaviour effecting solutions to problems that do not exist. But of course, as is the case with so much human behaviour, it likely possesses ulterior functions serving the interests of individuals in ways utterly inaccessible to those individuals, at least in ancestral contexts.

The cognitive sophistication required to solve those deep information environments effectively rendered them inscrutable, impenetrable black boxes, short of the development of science. What we painted across the sides of those boxes, then, could only be fixed by our basic cognitive capacities and by whatever ulterior function they happened to discharge. Given the limits of human cognition, our ancestors could report whatever they wanted about the greater world (their deep information environments), so long as those reports came cheap and/or discharged some kind of implicit function. They enjoyed what might be called deep discursive impunity. All they needed was a capacity to identify cues belonging to social cognition in the natural world—to see, for instance, retribution in the random walk of weather—and the ulterior exploitation of anthropomorphism could get underway.

Given the ancestral inaccessibility of deep information, and given the evolutionary advantages of social coordination and cohesion, particularly in the context of violent intergroup competition, it becomes easy to see how the quasi-cognition of an otherwise impenetrable nature could become a resource. When veridicality has no impact one way or another, social and individual facilitation alone determines the selection of the mechanisms responsible. When anything can be believed, to revert to folk idioms, then only those beliefs that deliver matter. This, then, explains why different folk accounts of the greater world possess deep structural similarities despite their wild diversity. Their reliance on socio-cognitive systems assures deep commonalities in form, as do the common ulterior functions provided. The insolubility of the systems targeted, on the other hand, assures any answer meeting the above constraints will be as effective as any other.

Given the evolutionary provenance of this situation, we are now in a position to see why accurate deep information counts as a form of cognitive pollution, something alien that disrupts and degrades ancestrally stable, shallow information ecologies. Strangely enough, what allowed our ancestors to report the nature of nature was the out-and-out inscrutability of nature, the absence of any (deep) information to the contrary—and the discursive impunity this provides. Anthropomorphic quasi-cognition requires deep information neglect. The greater our scientifically mediated sensitivity to deep information becomes, the less tenable anthropomorphic quasi-cognition becomes, and the more fantastic folk worlds become. The worlds arising out of our evolutionary heritage find themselves relegated to fairy tales.

Fantasy worlds, then, can be seen as an ontological analogue to the cave paintings at Chauvet. They cue ancestral modes of cognition, simulating the kinds of worlds our ancestors reflexively reported, folk worlds rife with those posits they used to successfully solve one another in a wide variety of practical contexts, meaningful worlds possessing the kinds of anthropomorphic ontologies we find in myths and religions.

With the collapse of the cognitive ecology that made these worlds possible, comes the ineffectiveness of the tools our ancestors used to navigate them. We now find ourselves in deep information worlds, environments not only rife with information our ancestors had neglected, but also crammed with environments engineered to manipulate shallow information cues. We now find ourselves in a world overrun with crash spaces, regions where our ancestral tools consistently fail, and cheat spaces, regions where they are exploited for commercial gain.

This is a rather remarkable fact, even if it becomes entirely obvious upon reflection. Humans possess ideal cognitive ecologies, solve spaces, environments rewarding their capacities, just as humans possess crash spaces, environments punishing their capacities. This is the sense in which fantasy worlds can be seen as a compensatory mechanism, a kind of cognitive eco-preserve, a way to inhabit more effortless shallow information worlds, pseudo-solution spaces, hypothetical environments serving up largely unambiguous cues to generally reliable cognitive capacities. And like biological eco-preserves, perhaps they serve an important function. As we saw with anthropomorphism above, pseudo-solution spaces can be solvers (as opposed to crashers) in their own right—culture is nothing if not a testimony to this.


But fantasy worlds are also the playground of blind brains. The more we learn about ourselves, the more we learn how to cue different cognitive capacities out of school—how to cheat ourselves for good or ill. Our shallow information nature is presently the focus of a vast, industrial research program, one gradually providing the information, techniques, and technology required to utterly pre-empt our ancestral ecologies, which is to say, to perfectly simulate ‘reality.’ The reprieve from the cognitive pollution of actual environments itself potentially amounts to more cognitive pollution. We are, in some respect at least, a migratory species, one prone to gravitate toward greener pastures. Is the migration between realities any less inevitable than the migration across lands?

Via the direct and indirect deformation of existing socio-cognitive ecologies, deep information both drives the demand for and enables the high-dimensional cuing of fantastic cognition. In our day and age, a hunger for meaning is at once a predisposition to seek the fantastic. We should expect that hunger to explode with the pace of technological change. For all the Big Data ballyhoo, it pays to remember that we are bound up in an auto-adaptive macro-social system that is premised upon solving us, mastering our cognitive reflexes in ways that are either invisible or pleasing. We are presently living through the age in which it succeeds.

Fantasy is zombie scripture, the place where our ancient assumptions lurch in the semblance of life. The fantasy writer is the voodoo magician, imbuing dead meaning with fictional presence. This resurrection can either facilitate our relation to the actual world, or it can pre-empt it. Science and technology are the problem here. The mastery of deep information environments enables ever greater degrees of shallow information capture. The better our zombie natures are understood, the more effectively our reward systems are tuned, and the deeper our descent into this or that variety of fantasy becomes. This is the dystopic image of Akratic society, a civilization ever more divided between deep and shallow information consumers, between those managing the mechanisms, and those captured in some kind of semantic cheat space.

The Death of Wilson: How the Academic Left Created Donald Trump

by rsbakker

People need to understand that things aren’t going to snap back into magical shape once Trump becomes archive footage. The Economist had a recent piece on all the far-right demagoguery in the past, and though they stress the impact that politicians like Goldwater have had subsequent to their electoral losses, they imply that Trump is part of a cyclical process, essentially more of the same. Perhaps this might have been the case were this anything but the internet age. For all we know, things could skid madly out of control.

Society has been fundamentally rewired. This is a simple fact. Remember Home Improvement, how Tim would screw something up, then wander into the backyard to lay his notions and problems on his neighbour Wilson, who would only ever appear as a cap over the fence line? Tim was hands on, but interpersonally incompetent, while Wilson was bookish and wise to the ways of the human heart—as well as completely obscured save for his eyes and various caps by the fence between them.

This is a fantastic metaphor for the communication of ideas before the internet and its celebrated ability to ‘bring us together.’ Before, when you had chauvinist impulses, you had to fly them by whoever was available. Pre-internet, extreme views were far more likely to be vetted by more mainstream attitudes. Simple geography combined with the limitations of analogue technology had the effect of tamping the prevalence of such views down. But now Tim wouldn’t think of hassling Wilson over the fence, not when he could do a simple Google and find whatever he needed to confirm his asinine behaviour. Our chauvinistic impulses no longer need to run any geographically constrained social gauntlet to find articulation and rationalization. No matter how mad your beliefs, evidence of their sanity is only ever a few keystrokes away.

This has to have some kind of aggregate, long-term effect–perhaps a dramatic one. The Trump phenomenon isn’t the manifestation of an old horrific contagion following the same old linear social vectors; it’s the outbreak of an old horrific contagion following new nonlinear social vectors. Trump hasn’t changed anything, save identifying and exploiting an ecological niche that was already there. No one knows what happens next. Least of all him.

What’s worse, with the collapse of geography comes the collapse of fences. Phrases like “cretinization of the masses” are simply one Google search away as well. Before, Wilson would have been snickering behind that fence, hanging with his friends and talking about his moron neighbour, who really is a nice guy, you know, but needs help to think clearly all the same. Now the fence is gone, and Tim can finally see Wilson for the condescending, self-righteous bigot he has always been.

Did I just say ‘bigot’? Surely… But this is what Trump supporters genuinely think. They think ‘liberal cultural elites’ are bigoted against them. As implausible as his arguments are, Charles Murray is definitely tracking a real social phenomenon in Coming Apart. A good chunk of white America feels roundly put upon, attacked economically and culturally. No bonus this Christmas. No Christmas tree at school. Why should a minimum wage retail worker think they somehow immorally benefit by dint of blue eyes and pale skin? Why should they listen to some bohemian asshole who’s both morally and intellectually self-righteous? Why shouldn’t they feel aggrieved on all sides, economically and culturally disenfranchised?

Who celebrates them? Aside from Donald Trump.

You have been identified as an outgroup competitor.

Last week, Social Psychological and Personality Science published a large study conducted by William Chopik, a psychologist out of Michigan State University, showing the degree to which political views determine social affiliations: it turns out that conservatives generally don’t know any Clinton supporters and liberals generally don’t know any Trump supporters. Americans seem to be spontaneously segregating along political lines.

Now I’m Canadian, which, although it certainly undermines the credibility of my observations on the Trump phenomenon in some respects, actually does have its advantages. The whole thing is curiously academic, for Canadians, watching our cousins to the south play hysterical tug-o-war with their children’s future. What’s more, even though I’m about as academically institutionalized as a human can be, I’m not an academic, and I have steadfastly resisted the tendency of the highly educated to surround themselves with people who are every bit as institutionalized—or at least smitten—by academic culture.

I belong to no tribe, at least not clearly. Because of this, I have Canadian friends who are, indeed, Trump supporters. And I’ve been whaling on them, asking questions, posing arguments, and they have been whaling back. Precisely because we are Canadian, the whole thing is theatre for us, allowing, I like to think, for a brand of honesty that rancour and defensiveness would muzzle otherwise.

When I get together with my academic friends, however, something very curious happens whenever I begin reporting these attitudes: I get interrupted. “But-but, that’s just idiotic/wrong/racist/sexist!” And that’s when I begin whaling on them, not because I don’t agree with their estimation, but because, unlike my academic confreres, I don’t hold Trump supporters responsible. I blame the academics instead. Aren’t they the ‘critical thinkers’? What else did they think the ‘cretins’ would do? Magically seize upon their enlightened logic? Embrace the wisdom of those who openly call them fools?

Fact is, you’re the ones who jumped off the folk culture ship.

The Trump phenomenon falls into the wheelhouse of what has been an old concern of mine. For more than a decade now, I’ve been arguing that the social habitat of intellectual culture is collapsing, and that the persistence of the old institutional organisms is becoming more and more socially pernicious. Literature professors, visual artists, critical theorists, literary writers, cultural critics, intellectual historians and so on all continue acting and arguing as though this were the 20th century… as if they were actually solving something, instead of making matters worse.

See, before, when a good slice of media flushed through bottlenecks that they mostly controlled, the academic left could afford to indulge in the same kind of ingroup delusions that afflict all humans. The reason I’m always interrupted in the course of reporting the attitudes of my Trump-supporting friends is simply that, from an ingroup perspective, those friends do not matter.

More and more research is converging upon the notion that the origins of human cooperation lie in human enmity. Think Band of Brothers, only in an evolutionary context. In the endless ‘wars before civilization,’ one might expect that groups possessing members willing to sacrifice themselves for the good of their fellows would prevail in territorial conflicts against groups possessing members inclined to break and run. Morality has been cut from the hip of murder.

This thesis is supported by the radical differences in our ability to ‘think critically’ when interacting with ingroup confederates as opposed to outgroup competitors. We are all but incapable of listening, and therefore responding rationally, to those we perceive as threats. This is largely why I think literature, minimally understood as fiction that challenges assumptions, is all but dead. Ask yourself: Why is it so easy to predict that so very few Trump supporters have read Underworld? Because literary fiction caters to the likeminded, and now, thanks to the precision of the relationship between buyer and seller, it is only read by the likeminded.

But of course, whenever you make these kinds of arguments to academic liberals you are promptly identified as an outgroup competitor, and you are assumed to have some ideological or psychological defect preventing genuine critical self-appraisal. For all their rhetoric regarding ‘critical thinking,’ academic liberals are every bit as thin-skinned as Trump supporters. They too feel put upon, besieged. I gave up making this case because I realized that academic liberals would only be able to hear it coming from the lips of one of their own, and even then, only after something significant enough happened to rattle their faith in their flattering institutional assumptions. They know that institutions are self-regarding, they admit they are inevitably tarred by the same brush, but they think knowing this somehow makes them ‘self-critical’ and so less prone to ingroup dysrationalia. Like every other human on the planet, they agree with themselves in ways that flatter themselves. And they direct their communication accordingly.

I knew it was only a matter of time before something happened. Wilson was dead. My efforts to eke out a new model, to surmount cultural balkanization, motivated me to engage in ‘blog wars’ with two very different extremists on the web (both of whom would be kind enough to oblige my predictions). This experience vividly demonstrated to me how dramatically the academic left was losing the ‘culture wars.’ Conservative politicians, meanwhile, were becoming more aggressively regressive in their rhetoric, more willing to publicly espouse chauvinisms that I had assumed safely buried.

The academic left was losing the war for the hearts and minds of white America. But so long as enrollment remained steady and book sales remained strong, they remained convinced that nothing fundamental was wrong with their model of cultural engagement, even as technology assured a greater match between them and those largely approving of them. Only now, with Trump, are they beginning to realize the degree to which the technological transformation of their habitat has rendered them culturally ineffective. As George Saunders writes in “Who Are All These Trump Supporters?” in The New Yorker:

Intellectually and emotionally weakened by years of steadily degraded public discourse, we are now two separate ideological countries, LeftLand and RightLand, speaking different languages, the lines between us down. Not only do our two subcountries reason differently; they draw upon non-intersecting data sets and access entirely different mythological systems. You and I approach a castle. One of us has watched only “Monty Python and the Holy Grail,” the other only “Game of Thrones.” What is the meaning, to the collective “we,” of yon castle? We have no common basis from which to discuss it. You, the other knight, strike me as bafflingly ignorant, a little unmoored. In the old days, a liberal and a conservative (a “dove” and a “hawk,” say) got their data from one of three nightly news programs, a local paper, and a handful of national magazines, and were thus starting with the same basic facts (even if those facts were questionable, limited, or erroneous). Now each of us constructs a custom informational universe, wittingly (we choose to go to the sources that uphold our existing beliefs and thus flatter us) or unwittingly (our app algorithms do the driving for us). The data we get this way, pre-imprinted with spin and mythos, are intensely one-dimensional.

The first, most significant thing to realize about this passage is that it’s written by George Saunders for The New Yorker, a premier ingroup cultural authority on a premier ingroup cultural podium. On the view given here, Saunders pretty much epitomizes the dysfunction of literary culture, an academic at Syracuse University, the winner of countless literary awards (which is to say, better at impressing the likeminded than most), and, I think, clearly a genius of some description.

To provide some rudimentary context, Saunders attends a number of Trump rallies, making observations and engaging Trump supporters and protesters alike (but mostly the former), asking gentle questions and receiving, for the most part, gentle answers. What he describes, observation-wise, are instances of ingroup psychology at work: individuals, complete strangers in many cases, making forceful demonstrations of ingroup solidarity and resolve. He chronicles something countless humans have witnessed over countless years, and he fears for the same reasons all those generations have feared. If he is puzzled, he is unnerved more.

He isolates two culprits in the above passage, the ‘intellectual and emotional weakening brought about by degraded public discourse,’ and more significantly, the way the contemporary media landscape has allowed Americans to ideologically insulate themselves against the possibility of doubt and negotiation. He blames, essentially, the death of Wilson.

As a paradigmatic ‘critical thinker,’ he’s careful to throw his own ‘subject position’ into the mix, to frame the problem in a manner that distributes responsibility equally. It’s almost painful to read, at times, watching him walk the tightrope of hypocrisy, buffeted by gust after gust of ingroup outrage and piety, trying to exemplify the openness he mistakes for his creed, but sounding only lyrically paternalistic in the end–at least to ears not so likeminded. One can imagine the ideal New Yorker reader, pursing their lips in empathic concern, shaking their heads with wise sorrow, thinking…

But this is the question, isn’t it? What do all these aspirational gestures to openness and admissions of vague complicity mean when the thought is, inevitably, fools? Is this not the soul of bad faith? To offer up portraits of tender humanity in extremis as proof of insight and impartiality, then to end, as Saunders ends his account, suggesting that Trump has been “exploiting our recent dullness and aversion to calling stupidity stupidity, lest we seem too precious.”

Academics… averse to calling stupidity stupid? Trump taking advantage of this aversion? Lordy.

This article, as beautiful as it is, is nothing if not a small monument to being precious, to making faux self-critical gestures in the name of securing very real ingroup imperatives. We are the sensitive ones, Saunders is claiming. We are the light that lets others see. And these people are the night of American democracy.

He blames the death of Wilson and the excessive openness of his ingroup, the error of being too open, too critically minded…

Why not just say they’re jealous because he and his friends are better looking?

If Saunders were at all self-critical, anything but precious, he would be asking questions that hurt, that cut to the bone of his aggrandizing assumptions, questions that become obvious upon asking them. Why not, for instance, ask Trump supporters what they thought of CivilWarLand in Bad Decline? Well, because the chances of any of them reading any of his work aside from “CommComm” (and only then because it won the World Fantasy Award in 2006) were virtually nil.

So then why not ask why none of these people has read anything written by him or any of his friends or their friends? Well, he’s already given us a reason for that: the death of Wilson.

Okay, so Wilson is dead, effectively rendering your attempts to reach and challenge those who most need to be challenged with your fiction toothless. And so you… what? Shrug your shoulders? Continue merely entertaining those whom you find the least abrasive?

If I’m right, then what we’re witnessing is so much bigger than Trump. We are tender. We are beautiful. We are vicious. And we are capable of believing anything to secure what we perceive as our claim. What matters here is that we’ve just plugged billions of stone-age brains chiselled by hundreds of millions of years of geography into a world without any. We have tripped across our technology and now we find ourselves in crash space, a domain where the transformation of our problems has rendered our traditional solutions obsolete.

It doesn’t matter if you actually are on their side or not, whatever that might mean. What matters is that you have been identified as an outgroup competitor, and that none of the authority you think your expertise warrants will be conceded to you. All the bottlenecks that once secured your universal claims are melting away, and you need to find some other way to discharge your progressive, prosocial aspirations. Think of all the sensitive young talent sifting through your pedagogical fingers. What do you teach them? How to be wise? How to contribute to their community? Or how to play the game? How to secure the approval of those just like you—and so, how to systematically alienate them from their greater culture?

So. Much. Waste. So much beauty, wisdom, all of it aimed at nowhere… tossed, among other places, into the heap of crumpled Kleenexes called The New Yorker.

Who would have thunk it? The best way to pluck the wise from the heart of our culture was to simply afford them the means to associate almost exclusively with one another, then trust to human nature, our penchant for evolving dialects and values in isolation. The edumacated no longer have the luxury of speaking among themselves for the edification of those servile enough to listen of their own accord. The ancient imperative to actively engage, to have the courage to reach out to the unlikeminded, to write for someone else, has been thrust back upon the artist. In the days of Wilson, we could trust to argument, simply because extreme thoughts had to run a gauntlet of moderate souls. Not so anymore.

If not art, then argument. If not argument, then art. Invade folk culture. Glory in delighting those who make your life possible–and take pride in making them think.

Sometimes they’re the idiot and sometimes we’re the idiot–that seems to be the way this thing works. To witness so many people so tangled in instinctive chauvinisms and cartoon narratives is to witness a catastrophic failure of culture and education. This is what Trump is exploiting, not some insipid reluctance to call stupid stupid.

I was fairly bowled over a few weeks back when my neighbour told me he was getting his cousin in Florida to send him a Trump hat. I immediately asked him if he was crazy.

“Name one Donald Trump who has done right by history!” I demanded, attempting to play Wilson, albeit minus the decorum and the fence.

Shrug. Wild eyes and a genuine smile. “Then I hope he burns it down.”

“How could you mean that?”

“I dunno, brother. Can’t be any worse than this fucking shit.”

Nothing I could say could make him feel any different. He’s got the internet.*

 

*[Note to readers: This post is receiving a great deal of Facebook traffic, and relatively little critical comment, which tells me individuals are saving their comments for whatever ingroup they happen to belong to, thus illustrating the very dynamic critiqued in the piece. Sound off! Dare to dissent in ideologically mixed company, or demonstrate the degree to which you need others to agree before raising your voice.]

The Dim Future of Human Brilliance

by rsbakker

Moths to a flame

Humans are what might be called targeted shallow information consumers in otherwise unified deep information environments. We generally skim only what information we need—from our environments or ourselves—to effect reproduction, and nothing more. We neglect gamma radiation for good reason: ‘deep’ environmental information that makes no reproductive difference makes no cognitive difference. As the product of innumerable ancestral ecologies, human cognitive biology is ecological, adapted to specific, high-impact environments. As ecological, one might expect that human cognitive biology is every bit as vulnerable to ecological change as any other biological system.

Under the rubric of the Semantic Apocalypse, the ecological vulnerability of human cognitive biology has been my focus here for quite some time at Three Pound Brain. Blind to deep structures, human cognition largely turns on cues, sensitivity to information differentially related to the systems cognized. Sociocognition, where a mere handful of behavioural cues can trigger any number of predictive/explanatory assumptions, is paradigmatic of this. Think, for instance, of how easy it was for Ashley Madison to convince its predominantly male customers that living women were checking their profiles. This dependence on cues underscores a corresponding dependence on background invariance: sever the differential relations between the cues and the systems to be cognized (the way Ashley Madison did) and what should be sociocognition, the solution of some fellow human, becomes confusion (we find ourselves in ‘crash space’) or worse, exploitation (we find ourselves in instrumentalized crash space, or ‘cheat space’).

So the questions I think we need to be asking are:

What effect does deep information have on our cognitive ecologies? The so-called ‘data deluge’ is nothing but an explosion in the availability of deep or ancestrally inaccessible information. What happens when targeted shallow information consumers suddenly find themselves awash in different kinds of deep information? A myriad of potential examples come to mind. Think of the way medicalization drives accommodation creep, how instructors are gradually losing the ability to judge character in the classroom. Think of the ‘fear of crime’ phenomenon, how the assessment of ancestrally unavailable information against implicit, ancestral baselines skews general perceptions of criminal threat. For that matter, think of the free will debate, or the way mechanistic cognition scrambles intentional cognition more generally: these are paradigmatic instances of the way deep information, the primary deliverance of science, crashes the targeted and shallow cognitive capacities that comprise our evolutionary inheritance.

What effect does background variation have on targeted, shallow modes of cognition? What happens when cues become differentially detached, or ‘decoupled,’ from their ancestral targets? Where the first question deals with the way the availability of deep information (literally, not metaphorically) pollutes cognitive ecologies, the ways human cognition requires the absence of certain information, this question deals with the way human cognition requires the presence of certain environmental continuities. There’s actually been an enormous amount of research done on this question in a wide variety of topical guises. Nikolaas Tinbergen coined the term “supernormal stimuli” to designate ecologically variant cuing, particularly the way exaggerated stimuli can trigger misapplications of different heuristic regimes. He famously showed how gull chicks, for instance, could be fooled into pecking false “super beaks” for food given only a brighter-than-natural red spot. In point of fact, you see supernormal stimuli in dramatic action anytime you see artificial outdoor lighting surrounded by a haze of bugs: insects that use lunar transverse orientation to travel at night continually correct their course vis a vis streetlights, porch lights, and so on, causing them to spiral directly into them. What Tinbergen and subsequent ethology researchers have demonstrated is the ubiquity of cue-based cognition, the fact that all organisms are targeted, shallow information consumers in unified deep information environments.
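Tinbergen’s moths-and-streetlights example is concrete enough to sketch. The following is a minimal, hypothetical toy model, not anything drawn from Tinbergen or from this post: an agent that holds a constant bearing to a light source flies roughly straight when the source is effectively at infinity (the moon), but winds inward when the source is nearby (the porch light). The function name, starting position, bearing, and step sizes are all arbitrary illustrative choices.

```python
# Hypothetical sketch of transverse orientation, assuming a point-like agent
# that always keeps a fixed angle between its heading and the light source.

import math

def constant_bearing_flight(light_x, light_y, bearing_deg, steps=5000, speed=0.5):
    """Fly while holding a fixed angle between heading and the direction
    to the light; return the final distance to the light."""
    x, y = 100.0, 0.0                      # start 100 units from the origin
    bearing = math.radians(bearing_deg)
    for _ in range(steps):
        to_light = math.atan2(light_y - y, light_x - x)
        heading = to_light + bearing       # hold the bearing to the light constant
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
        if math.hypot(light_x - x, light_y - y) < speed:
            break                          # effectively flown into the light
    return math.hypot(light_x - x, light_y - y)

# A nearby light at the origin: a bearing under 90 degrees spirals the agent in.
print(constant_bearing_flight(0.0, 0.0, bearing_deg=80))
# The same rule aimed at a source a million units away yields a near-straight
# path; the agent ends up nowhere near the light.
print(constant_bearing_flight(1_000_000.0, 0.0, bearing_deg=80))
```

The point of the sketch is simply that the same heuristic which is perfectly reliable under the ancestral condition, a fixed and distant light, becomes a trap the moment that background invariance is violated.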

Deirdre Barrett has recently applied the idea to modern society, but lacking any theory of meaning, she finds herself limited to pointing out suggestive speculative parallels between ecological readings and phenomena that are semantically overdetermined otherwise. For me this question calves into a wide variety of domain-specific forms, but there’s an important distinction to be made between the decoupling of cues generally and strategic decoupling, between ‘crash space’ and ‘cheat space.’ Where the former involves incidental cognitive incapacity, human versions of transverse orientation, the latter involves engineered cognitive incapacity. The Ashley Madison case I referenced above provides an excellent example of simply how little information is needed to cue our sociocognitive systems in online environments. In one sense, this facility evidences the remarkable efficiency of human sociocognition, the fact that it can do so much with so little. But, as with specialization in evolution more generally, this efficiency comes at the cost of ecological dependency: you can only neglect information in problem-solving so long as the systems ignored remain relatively constant.

And this is basically the foundational premise of the Semantic Apocalypse: intentional cognition, as a radically specialized system, is especially vulnerable to both crashing and cheating. The very power of our sociocognitive systems is what makes them so liable to be duped (think religious anthropomorphism), as well as so easy to dupe. When Sherry Turkle, for instance, bemoans the ease with which various human-computer interfaces, or ‘HCIs,’ push our ‘Darwinian buttons’ she is talking about the vulnerability of sociocognitive cues to various cheats (but since she, like Barrett, lacks any theory of meaning, she finds herself in similar explanatory straits). In a variety of experimental contexts, for instance, people have been found to trust artificial interlocutors over human ones. Simple tweaks in the voices and appearance of HCIs have a dramatic impact on our perceptions of those encounters—we are in fact easily manipulated, cued to draw erroneous conclusions, given what are quite literally cartoonish stimuli. So the so-called ‘internet of things,’ the distribution of intelligence throughout our artifactual ecologies, takes on a far more sinister cast when viewed through the lens of human sociocognitive specialization. Populating our ecologies with gadgets designed to cue our sociocognitive capacities ‘out of school’ will only degrade the overall utility of those capacities. Since those capacities underwrite what we call meaning or ‘intentionality,’ the collapse of our ancestral sociocognitive ecologies signals the ‘death of meaning.’

The future of human cognition looks dim. We can say this because we know human cognition is heuristic, and that specific forms of heuristic cognition turn on specific forms of ecological stability, the very forms that our ongoing technological revolution promises to sweep away. Blind Brain Theory, in other words, offers a theory of meaning that not only explains away the hard problem, but can also leverage predictions regarding the fate of our civilization. It makes me dizzy thinking about it, and suspicious—the empty can, as they say, rattles the loudest. But this preposterous scope is precisely what we should expect from a genuinely naturalistic account of intentional phenomena. The power of mechanistic cognition lies in the way it scales with complexity, allowing us to build hierarchies of components and subcomponents. To naturalize meaning is to understand the soul in terms continuous with the cosmos.

This is precisely what we should expect from a theory delivering the Holy Grail, the naturalization of meaning.

You could even argue that the unsettling, even horrifying consequences evidence its veracity, given there are so many more ways for the world to contradict our parochial conceits than to appease them. We should expect things will end ugly.