
The Crash of Truth: A Critical Review of Post-Truth by Lee C. McIntyre

by rsbakker

Lee McIntyre is a philosopher of science at Boston University, and author of Dark Ages: The Case for a Science of Human Behavior. I read Post-Truth on the basis of Fareed Zakaria’s enthusiastic endorsement on CNN’s GPS, so I fully expected to like it more than I ultimately did. It does an admirable job scouting the cognitive ecology of post-truth, but because it fails to understand that ecology in ecological terms, the dynamic itself remains obscured. The best McIntyre can do is assemble and interrogate the usual suspects. As a result, his case ultimately devolves into what amounts to yet another ingroup appeal.

As, perhaps, we should expect, given the actual nature of the problem.

McIntyre begins with a transcript of an interview where CNN’s Alisyn Camerota presses Newt Gingrich at the 2016 Republican convention on Trump’s assertions regarding crime:

GINGRICH: No, but what I said is equally true. People feel more threatened.

CAMEROTA: Feel it, yes. They feel it, but the facts don’t support it.

GINGRICH: As a political candidate, I’ll go with how people feel and let you go with the theoreticians.

There’s a terror you feel in days like these. I felt that terror most recently, I think, watching Sarah Huckabee Sanders insisting that the outgoing National Security Advisor, General H. R. McMaster, had declared that no one had been tougher on Russia than Trump after a journalist had quoted him saying almost exactly otherwise. I had been walking through the living-room and the exchange stopped me in my tracks. Never in my life had I witnessed a White House official so fecklessly, so obviously, contradict what everyone in the room had just heard. It reminded me of the psychotic episodes I witnessed as a young man working tobacco with a friend who suffered schizophrenia—only this was a social psychosis. Nothing was wrong with Sarah Huckabee Sanders. Rather than lying in malfunctioning neural machinery, this discrepancy lay in malfunctioning social machinery. She could say what she said because she knew that statements appearing incoherent to those knowing what H. R. McMaster had actually said would not appear as such to those ignorant of or indifferent to what he had actually said. She knew, in other words, that even though the journalists in the room saw this:

[Image: Disney’s faux New York skyline viewed from the side, exposing the prop facades.]

given the information available to their perspective, the audience that really mattered would see this:

[Image: the same skyline viewed head-on, the facades appearing to be complete buildings.]

which is to say, something rendered coherent for neglecting that information.

The task McIntyre sets himself in this brief treatise is to explain how such a thing could have come to pass, to explain, not how a sitting President could lie, but how he could lie without consequences. When Sarah Huckabee Sanders asserts that H. R. McMaster’s claim that the Administration is not doing enough is actually the claim that no Administration has done more, she’s relying on innumerable background facts that simply did not obtain a mere generation ago. The social machinery of truth-telling has fundamentally changed. If we take the sideways picture of Disney’s faux New York skyline as the ‘deep information view,’ and the head-on picture as the ‘shallow information view,’ the question becomes one of how she could trust that her audience, despite the availability of deep information, would nevertheless affirm the illusion of coherence provided by the shallow information view. As McIntyre writes, “what is striking about the idea of post-truth is not just that truth is being challenged, but that it is being challenged as a mechanism for asserting political dominance.” Sanders, you could say, is availing herself of new mechanisms, ones antagonistic to the traditional mechanisms of communicating the semantic authority of deep information. Somehow, someway, the communication of deep information has ceased to command the kinds of general assent it once did. It’s almost preposterous on the face of it: in attributing Trump’s claims to McMaster, Sanders is gambling that somehow, whether by dint of corruption, delusion, or neglect, her false claim will discharge functions ideally belonging to truthful claims, such as informing subsequent behaviour. For whatever reason, the circumstances once preventing such mass dissociations of deep and shallow information ecologies have yielded to circumstances that no longer do.

McIntyre provides a chapter-by-chapter account of those new circumstances. For reasons that will become apparent, I’ll skip his initial chapter, which he devotes to defining ‘post-truth,’ and return to it in the end.

Science Denial

He provides clear, pithy outlines of the history of the tobacco industry’s seminal decision to argue the science, to wage what amounts to an organized disinformation campaign. He describes the ways resource companies adapted these tactics to scramble the message and undermine the authority of climate science. And by ‘disinformation,’ he means this literally, given “that even while ExxonMobil was spending money to obfuscate the facts about climate change, they were making plans to explore new drilling opportunities in the Arctic once the polar ice cap had melted.” This part of the story is pretty well-known, I think, but McIntyre tells the tale in a way that pricks the numbness of familiarity, reminding us of the boggling scale of what these campaigns achieved: generating a political/cultural alliance that is not simply bent on hastening, but actually hastening, untold misery and global economic loss in the name of short-term parochial economic gain.

Cognitive Bias

He gives a curiously (given his background) two-dimensional sketch of the role cognitive bias plays in the problem, focusing primarily on cognitive dissonance, our need to minimize cognitive discrepancies, and the backfire effect, the way counter-arguments actually strengthen, as opposed to mitigate, commitment to positions. (I would recommend Steven Sloman and Philip Fernbach’s The Knowledge Illusion for a more thorough consideration of the dynamics involved.) He discusses research showing the ways that social identification, even when cued by things as flimsy as coloured wristbands, profoundly transforms our moral determinations. But he underestimates, I think, the significance of what Dan Kahan and his colleagues call the “Tragedy of the Risk-Perception Commons,” the individual rationality of espousing irrational collective claims. There’s so much research directly pertinent to his thesis that he passes over in silence, especially that belonging to ecological rationality.

Traditional versus Social Media

If McIntyre’s consideration of the cognitive science left me dissatisfied, I thoroughly enjoyed his consideration of media’s contribution to the problem of post-truth. He reminds us that the existence of entities like Fox News, disguising advocacy as disinterested reporting, is the historical norm, not the exception. Disinterested journalistic reporting was more the result of how the AP, which served papers grinding different political axes, required stories expressing as little overt bias as possible. Rather than seize upon this ecological insight (more on this below), he narrates the gradual rise of television news from small, money-losing network endeavours to money-making enterprises culminating in CNN, Fox, MSNBC, and the return of ‘yellow journalism.’

He provides a sobering assessment of the eclipse of traditional media, and the historically unprecedented rise of social media. Here, more than anywhere else, we find McIntyre taking steps toward a genuine cognitive-ecological understanding of the problem:

“In the past, perhaps our cognitive biases were ameliorated by our interactions with others. It is ironic to think that in today’s media deluge, we could perhaps be more isolated from contrary opinion than when our ancestors were forced to live and work among other members of their tribe, village, or community, who had to interact with one another to get information.”

Since his understanding of the problem is primarily normative, however, he fails to see how cognitive reflexes that misfire in experimental contexts, and so strike observers as normative breakdowns, actually facilitate problem-solving in ancestral contexts. What he notes as ‘ironic’ should strike him (and everyone else) as astounding, as one of the doors that any adequate explanation of post-truth must kick down. But it is heartening, I have to say, to see these ideas begin to penetrate more and more brainpans. Despite the insufficiency of his theoretical tools, McIntyre glimpses something of the way cognitive technology has impacted human cognitive ecology: “Indeed,” he writes, “what a perfect storm for the exploitation of our ignorance and cognitive biases by those with an agenda to put forward.” But even if the ‘perfect storm’ metaphor captures the complex relational nature of what’s happened, it implies that we find ourselves suffering a spot of bad luck, and nothing more.

Postmodernism

At last he turns to the role postmodernism has played in all this: this is the only chapter where I smelled a ‘legacy effect,’ the sense that the author is trying to shoe-horn in some independently published material.

He acknowledges that ‘postmodernism’ is hopelessly overdetermined, but he thinks two theses consistently rise above the noise: the first is that “there is no such thing as objective truth,” and the second is “that any profession of truth is nothing more than a reflection of the political ideology of the person who is making it.”

To his credit, he’s quick to pile on the caveats, to acknowledge the need to critique both the possibility of absolute truth and the social power of scientific truth-claims. Because of this, it quickly becomes apparent that his target isn’t so much ‘postmodernism’ as it is social constructivism, the thesis that ‘truth-telling,’ far from connecting us to reality, bullies us into affirming interest-serving constructs. This, as it turns out, is the best way to think of post-truth “[i]n its purest form,” as “when one thinks that the crowd’s reaction actually does change the facts about a lie.”

In other words, for McIntyre, post-truth is the consequence of too many people believing in social constructivism—of presuming the wrong theory of truth. His approach to the question of post-truth is that of a traditional philosopher: if the failure is one of correspondence, then the blame has to lie with anti-correspondence theories of truth. The reason Sarah Huckabee Sanders could lie about McMaster’s final speech turns on (among other things) the widespread theoretical belief that there is ‘no such thing as objective truth,’ that it’s power plays all the way down.

Thus the (rather thick) irony of citing Daniel Dennett—an interpretivist!—stating that “what the postmodernists did was truly evil” so far as they bear responsibility “for the intellectual fad that made it respectable to be cynical about truth and facts.”

The sin of the postmodern left has very, very little to do with generating semantically irresponsible theories. Dennett’s own positions are actually a good deal more radical in this regard! When it comes to the competing narratives involving ‘meaning of’ questions and answers, Dennett knows we have no choice but to advert to the ‘dramatic idiom’ of intentionality. If the problem were one of providing theoretical ammunition, then Dennett would be as much a part of the problem as Baudrillard.

And yet McIntyre caps Dennett’s assertion by asking, “Is there more direct evidence than this?” Not a shining moment, dialectically speaking.

I agree with him that tools have been lifted from postmodernists, but they have been lifted from pragmatists (Dennett’s ilk) as well. Talk of ‘stances’ and ‘language games’ is also rife on the right! And I should know. What’s happening now is the consequence of a trend that I’ve been battling since the turn of the millennium. All my novels constitute self-conscious attempts to short-circuit the conditions responsible for ‘post-truth.’ And I’ve spent thousands of hours trolling the alt-Right (before they were called such) trying to figure out what was going on. The longest online debate I ever had was with a fundamentalist Christian who belonged to a group using Thomas Kuhn to justify their belief in the literal truth of Genesis.

Defining Post-truth

Which brings us, as promised, back to the book’s beginning, the chapter that I skipped, where, in the course of refining his definition of post-truth, McIntyre acknowledges that no one knows what the hell truth is:

“It is important at this point to give at least a minimal definition of truth. Perhaps the most famous is that of Aristotle, who said: ‘to say of what is that it is not, or of what is not that it is, is false, while to say of what is that it is, and of what is not that it is not, is true.’ Naturally, philosophers have fought for centuries over whether this sort of “correspondence” view is correct, whereby we judge the truth of a statement only by how well it fits reality. Other prominent conceptions of truth (coherentist, pragmatist, semantic) reflect a diversity of opinion among philosophers about the proper theory of truth, even while—as a value—there seems little dispute that truth is important.”

He provides a minimal definition with one hand—truth as correspondence—which he immediately admits is merely speculative! Truth, he’s admitting, is both indispensable and inscrutable. And yet this inscrutability, he thinks, need not hobble the attempt to understand post-truth: “For now, however, the question at hand is not whether we have the proper theory of truth, but how to make sense of the different ways that people subvert truth.”

In other words, we don’t need to know what is being subverted to agree that it is being subverted. But this goes without saying; the question is whether we need to know what is being subverted to explain what McIntyre is purporting to explain, namely, how truth is being subverted. How do we determine what’s gone wrong with truth when we don’t even know what truth is?

McIntyre begins Post-Truth, in other words, by admitting that no canonical formulation of his explanandum exists, that it remains a matter of mere speculation. Truth remains one of humanity’s confounding questions.

But if truth is in question, then shouldn’t the blame fall upon those who question truth? Perhaps the problem isn’t this or that philosophy so much as philosophy itself. We see as much at so many turns in McIntyre’s account:

“Why not doubt the mainstream news or embrace a conspiracy theory? Indeed, if news is just political expression, why not make it up? Whose facts should be dominant? Whose perspective is the right one? Thus is postmodernism the godfather of post-truth.”

Certainly, the latter two questions belong to philosophy as a whole, and not postmodernism in particular. To that extent, the former two questions—so far as they follow from the latter—have to be seen as falling out of philosophy in general, and not just some ‘philosophical bad apples.’

But does it make sense to blame philosophy, to suggest we should have never questioned the nature of truth? Of course not.

The real question, the one that I think any serious attempt to understand post-truth needs to reckon with, is the one McIntyre breezes by in the first chapter: Why do we find truth so difficult to understand?

On the one hand, truth seems to be crashing. On the other, we have yet to take a step beyond Aristotle when it comes to answering the question of the nature of truth. The latter is the primary obstacle, since the only way to truly understand the nature of the crash is to understand the nature of truth. Could the crash and the inscrutability of truth be related? Could post-truth somehow turn on our inability to explain truth?

Adaptive Anamorphosis

Truth lies murdered in the Calais Coach, and McIntyre has assembled all the suspects: denialism, cognitive biases, traditional and social media, and (though he knows it not) philosophy. He knows all of them had some part to play, either directly or as accessories, but the Calais Coach remains locked—his crime scene is a black box. He doesn’t even have a body!

For me, however, post-truth is a prediction come to pass—a manifestation of what I’ve long called the ‘semantic apocalypse.’ Far from a perfect storm of suspects coming together in unlikely ways to murder ‘all of factual reality,’ it is an inevitable consequence of our rapidly transforming cognitive ecologies.

Biologically speaking, human communication and cooperation represent astounding evolutionary achievements. Human cognition is the most complicated thing human cognition has ever encountered: only now are we beginning to reverse-engineer its nature, and to use that knowledge to engineer unprecedented cognitive artifacts. We know that cognition is structurally and dynamically composite, heavily reliant on heuristic specialization to solve its social and natural environments. The astronomical complexity of human cognition means that sociocognition and metacognition are especially reliant on composite, source-insensitive systems, devices turning on available cues that, so long as various hidden regularities obtain, correlate with specific outcomes. Despite being legion, we manage to synchronize with our fellows and our environments without the least awareness of the cognitive machinery responsible.

We suffer medial neglect, a systematic insensitivity to our own nature—a nature that includes this insensitivity. Like every other organism on this planet, we cognize without cognizing the concurrent act of cognition. Well, almost like every other organism. Where other species utterly depend on the reliability of their cognitive capacities, and have no way of repairing failures in various enabling—medial—systems, we do have recourse. Despite our blindness to the machinery of human cognition, we’ve developed a number of different ways to nudge that machinery—whack the TV set, you could say.

Truth-talk is one of those ways. Truth-talk allows us to minimize communicative discrepancies absent, once again, sensitivity to the complexities involved. Truth-talk provides a way to circumvent medial neglect, to resolve problems belonging to the enabling dimension of cognition despite our systematic insensitivity to the facts of that dimension. When medial issues—problems pertaining to cognitive function—arise, truth-talk allows for the metabolically inexpensive recovery of social and environmental synchronization. Incompatible claims can be sorted, at least so far as our ancestors required in prehistoric cognitive ecologies. The tribe can be healed, despite its profound ignorance of natures.

To say human cognition is heuristic is to say it is ecologically dependent, that it requires that the neglected regularities underwriting the utility of our cues remain intact. Overthrow those regularities, and you overthrow human cognition. So, where our ancestors could simply trust the systematic relationship between retinal signals and environments while hunting, we have to remove our VR goggles before raiding the fridge. Where our ancestors could simply trust the systematic relationship between the text on the page or the voice in our ear and the existence of a fellow human, we have to worry about chatbots and ‘conversational user interfaces.’ Where our ancestors could automatically depend on the systematic relationship between their ingroup peers and the environments they reported, we need to search Wikipedia—trust strangers. More generally, where our ancestors could trust the general reliability (and therefore general irrelevance) of their cognitive reflexes, we find ourselves confronted with an ever-growing and complicating set of circumstances where our reflexes can no longer be trusted to solve social problems.

The tribe, it seems, cannot be healed.

And, unfortunately, this is the very problem we should expect given the technical (tactical and technological) radicalization of human cognitive ecology.* Philosophy, and now cognitive science, provide the communicative tactics required to neutralize (or ‘threshold’) truth-talk. Cognitive technologies, meanwhile, continually complicate the once direct systematic relationships between our suites of cognitive reflexes and our social and natural environments. The internet doesn’t simply render the sum of human knowledge available; it renders the sum of human rationalization available as well. The curious and the informed, meanwhile, no longer need suffer the company of the incurious and the uninformed, and vice versa. The presumptive moral superiority of the former stands revealed, and in ever greater numbers the latter counter-identify, with a violence aggravated by phenomena such as the ‘online disinhibition effect.’ (One thing McIntyre never pauses to consider is the degree to which he and his ilk are hated, despised, so much so as to see partners in traditional foreign adversaries, and to think lies and slander simply redress lies and slander). Populations begin spontaneously self-selecting. Big data identifies the vulnerable, who are showered with sociocognitive cues—atrocity tales to threaten, caricatures to amuse—engineered to provoke ingroup identification and outgroup alienation. In addition to ‘backfiring,’ counter-arguments are perceived as weapons, evidence of outgroup contempt for you and your own. And as the cognitive tactics become ever more adept at manipulating our biases, ever more scientifically informed, and as the cognitive technology becomes ever more sophisticated, ever more destructive of our ancestral cognitive habitat, the break between the two groups, we should expect, will only become more, not less, profound.

None of this is intuitive, of course. Medial neglect means reflection is source-blind, and so inclined to conceive things in super-ecological terms. Thus the value of the prop-building analogy I posed at the beginning.

Disney’s massive Manhattan anamorph depends on the viewer’s perspectival position within the installation to assure the occlusion of incompatible information. The degrees of cognitive freedom this position possesses—basically, how far one can wander this way and that—depend on the size and sophistication of the anamorph. The stability of the illusion, in other words, entirely depends on the viewer: the deeper one investigates, the less stable the anamorph becomes. Its dependence on cognitive ‘sweet spots’ is its signature vulnerability.

The cognitive fragility of the anamorph, however, resides in the fact that we can move, while it cannot. Overcoming this fragility, then, requires either 1) de-animating observation, 2) complicating the anamorph, or 3) animating the anamorph. The problem we face can be understood as the problem of adaptive cognitive anamorphosis: the way cognitive science, in combination with cognitive technology, enables the de-animation of information consumers by gaming sociocognitive cues, while both complicating and animating the artifactual anamorphic information they consume.

Once a certain threshold is crossed, Sarah Huckabee Sanders can lie without shame or apology on national television. We don’t know what we don’t know. McIntyre references the notorious Dunning-Kruger effect, the way cognitive incompetence correlates with incompetent assessments of competence, but the underlying mechanism is more basic: cognitive systems lacking access to information function independent of that information. Medial neglect assures we take the sufficiency of our perspectives for granted absent information indicating insufficiency or ‘medial misalignment.’ Trusting our biology and community is automatic. Perhaps we refuse to move, to even consider the information belonging to:

[Image: the sideways, ‘deep information’ view of the prop skyline.]

But if we do move, the anamorph, thanks to cognitive technology, adapts, the prop-facades grow prop sides, and the deep (globally synchronized) information presented above has to compete with ‘faux deep’ information. The question becomes one of who has been systematically deceived—a question that ingroup biases have already answered in illusion’s favour. We can return to our less inquisitive peers and assure them they were right all along.

What is ‘post-truth’? Insofar as it names anything, it refers to the diminishing capacity of globally, versus locally, synchronized claims to drive public discourse. It’s almost as if, via technology, nature is retooling itself to conceal itself by creating adaptive ‘faux realities.’ It’s all artifactual, all biologically ‘constructed’: the question is whether our cognitive predicament facilitates global (or deep) synchronization geared to what happens to be the case, or facilitates local (or shallow) synchronization geared to ingroup expectations and hidden political and commercial interests.

There’s no contest between spooky correspondence and spooky construction. There’s no ‘assertion of ideological supremacy,’ just cognitive critters (us) stranded in a rapidly transforming cognitive ecology that has become too sophisticated to see, and too powerful to credit. Post-truth, in other words, is an inevitable consequence of scientific progress, particularly as it pertains to cognitive technologies.

Sarah Huckabee Sanders can lie without shame or apology on national television because Trump was able to lure millions of Americans across a radically transformed (and transforming) anamorphic threshold. And we should find this terrifying. Most doomed democracies elect their executioner. In The Death of Democracy: Hitler’s Rise to Power, Benjamin Carter Hett blames the success of Nazism on the “reality deficit” suffered by the German people. “Hostility to reality,” he writes, “translated into contempt for politics, or, rather, desire for a politics that was somehow not political: a thing that can never be” (14). But where Germany in the 1930s had every reason to despise the real, “a lost war that had cost the nation almost two million of her sons, a widely unpopular revolution, a seemingly unjust peace settlement, and economic chaos accompanied by huge social and technological change” (13), America finds itself suffering only the latter. The difference lies in the way the latter allows for the cultivation and exploitation of this hostility in an age of unparalleled peace and prosperity. In the German case, reality itself drove the populace to embrace atavistic political fantasies. Thanks to technology, we can now achieve the same effect using only human cognitive shortcomings and corporate greed.

Buckle up. No matter what happens to Trump, the social dysfunction he expresses belongs to the very structure of our civilization. Competition for the market he’s identified is only going to intensify.

 

Killing Bartleby (Before It’s Too Late)

by rsbakker

Why did I not die at birth,

come forth from the womb and expire?

Why did the knees receive me?

Or why the breasts, that I should suck?

For then I should have lain down and been quiet;

I should have slept; then I should have been at rest,

with kings and counselors of the earth

who rebuilt ruins for themselves…

—Job 3:11-14 (RSV)

 

“Bartleby, the Scrivener: A Story of Wall-Street”: I made the mistake of rereading this little gem a few weeks back. Section I, below, retells the story with an eye to heuristic neglect. Section II leverages this retelling into a critique of readings, like those belonging to the philosophers Gilles Deleuze and Slavoj Zizek, that fall into the narrator’s trap of exceptionalizing Bartleby. If you happen to know anyone interested in Bartleby criticism, by all means encourage them to defend their ‘doctrine of assumptions.’

 

I

The story begins with the unnamed narrator identifying two ignorances, one social and the other personal. The first involves Bartleby’s profession, that “somewhat singular set of men, of whom as yet nothing that I know of has ever been written.” Human scriveners, like human computers, hail from a time when social complexities demanded the undertaking of mechanical cognitive labours, the discharge of tasks too procedural to rest easy in the human soul. Copies are all the ‘system’ requires of them, pure documentary repetition. It isn’t so much that their individuality does not matter, but that it matters too much, perturbing (‘blotting’) the function of the whole. So far as social machinery is legal machinery, you could say law-copyists belong to the neglected innards of mid-19th century society. Bartleby belongs to what might be called the caste of the most invisible men.

What makes him worthy of literary visibility turns on a second manifestation of ignorance, this one belonging to the narrator. “What my own astonished eyes saw of Bartleby,” he tells us, “that is all I know of him, except, indeed, one vague report which will appear in the sequel.” And even though the narrator thinks this interpersonal inscrutability constitutes “an irreparable loss to literature,” it turns out to be the very fact upon which the literary obsession with “Bartleby, the Scrivener” hangs. Bartleby is so visible because he is the most hidden of the hidden men.

Since comprehending the dimensions of a black box buried within a black box is impossible, the narrator has no choice but to illuminate the latter, to provide an accounting of Bartleby’s ecology: “Ere introducing the scrivener, as he first appeared to me, it is fit I make some mention of myself, my employees, my business, my chambers, and general surroundings; because some such description is indispensable to an adequate understanding of the chief character about to be presented.” In a sense, Bartleby is nothing apart from his ultimately profound impact on this ecology, such is his mystery.

Aside from inklings of pettiness, the narrator’s primary attribute, we learn, is also invisibility, the degree to which he disappears into his social syntactic role. “I am one of those unambitious lawyers who never addresses a jury, or in any way draws down public applause; but in the cool tranquility of a snug retreat, do a snug business among rich men’s bonds and mortgages and title-deeds,” he tells us. “All who know me, consider me an eminently safe man.” He is, in other words, the part that does not break down, and so, like Heidegger’s famed hammer, never becomes something present-at-hand, an object of investigation in his own right.

His description of his two existing scriveners demonstrates that his ‘safety’ is to some extent rhetorical, consisting in his ability to explain away inconsistencies, real or imagined. Between Turkey’s afternoon drunkenness and Nippers’ foul morning temperament, you could say his office is perpetually compromised, but the narrator chooses to characterize it otherwise, in terms of each man mechanically cancelling out the incompetence of the other. “Their fits relieved each other like guards,” the narrator informs us, resulting in “a good natural arrangement under the circumstances.”

He depicts what might be called an economy of procedural and interpersonal reflexes, a deterministic ecology consisting of strictly legal or syntactic demands, all turning on the irrelevance of the discharging individual, the absence of ‘blots,’ and a stochastic ecology of sometimes conflicting personalities. Not only does he instinctively understand the insoluble nature of the latter, he also understands the importance of apology, the power of language to square those circles that refuse to be squared. When he comes “within an ace” of firing Turkey, the drunken scrivener need only bow and say what amounts to nothing to mollify his employer. As with bonds and mortgages and title-deeds, the content does not so much matter as does the syntax, the discharge of social procedure. Everyone in his office “up stairs at No.—Wall-street” is a misfit, and the narrator is a compulsive ‘fitter,’ forever searching for ways to rationalize, mythologize, and so normalize, the idiosyncrasies of his interpersonal circumstances.

And of course, he and his fellows are entombed by the walls of Wall Street, enjoying ‘unobstructed views’ of obstructions. Theirs is a subterranean ecology, every bit as “deficient in what landscape painters call ‘life’” as the labour that consumes them.

Enter Bartleby. “After a few words touching his qualifications,” the narrator informs us, “I engaged him, glad to have among my corps of copyists a man of so singularly sedate an aspect, which I thought might operate beneficially upon the flighty temper of Turkey, and the fiery one of Nippers.” Absent any superficial sign of idiosyncrasy, he seems the perfect ecological fit. The narrator gives the man a desk behind a screen in his own office, a corner possessing a window upon obstruction.

After three days, he calls out to Bartleby to examine the accuracy of a document, reflexively assuming the man would discharge the task without delay, only to hear Bartleby, obscure behind his green screen, say the fateful words that would confound, not only our narrator, but countless readers and critics for generations to come: “I would prefer not to.” The narrator is gobsmacked:

“I sat awhile in perfect silence, rallying my stunned faculties. Immediately it occurred to me that my ears had deceived me, or Bartleby had entirely misunderstood my meaning. I repeated my request in the clearest tone I could assume. But in quite as clear a one came the previous reply, “I would prefer not to.””

Given the “natural expectancy of instant compliance,” the narrator assumes the breakdown is communicative. When he realizes this isn’t the case, he confronts Bartleby directly, to the same effect:

“Not a wrinkle of agitation rippled him. Had there been the least uneasiness, anger, impatience or impertinence in his manner; in other words, had there been any thing ordinarily human about him, doubtless I should have violently dismissed him from the premises. But as it was, I should have as soon thought of turning my pale plaster-of-paris bust of Cicero out of doors.”

Realizing that he has been comprehended, the narrator assumes willful defiance, that Bartleby seeks to provoke him, and that, accordingly, the man will present the cues belonging to interpersonal power struggles more generally. When Bartleby manifests none of these signs, the hapless narrator lacks the social script he requires to solve the problem. Turning out the scrivener becomes as unthinkable as surrendering his bust of Cicero, which is to say, the very emblem of his legal vocation.

The next time Bartleby refuses to read, the narrator demands an explanation, asking, “Why do you refuse?” To which Bartleby replies, once again, “I would prefer not to.” When the narrator presses, resolved “to reason with him,” he realizes that dysrationalia is not the problem: “It seemed to me that while I had been addressing him, he carefully revolved every statement that I made; fully comprehended the meaning; could not gainsay the irresistible conclusions; but, at the same time, some paramount consideration prevailed with him to reply as he did.”

If Bartleby were non compos mentis, then he could be ‘medicalized,’ reduced to something the narrator would find intelligible—something providing some script for action. Instead, the scrivener understands, or manifests as much, leaving the narrator groping for evidence of his own rationality:

“It is not seldom the case that when a man is browbeaten in some unprecedented and violently unreasonable way, he begins to stagger in his own plainest faith. He begins, as it were, vaguely to surmise that, wonderful as it may be, all the justice and all the reason is on the other side. Accordingly, if any disinterested persons are present, he turns to them for some reinforcement for his own faltering mind.”

For a claim to be rational it must be rational to everyone. Each of us is stranded with our own perspective, and each of us possesses only the dimmest perspective on that perspective: rationality is something we can only assume. This is why ‘truth’ (especially in ‘normative’ matters such as politics) so often amounts to a ‘numbers game,’ a matter of tallying up guesses. Our blindness to our cognitive orientation—medial neglect—combined with the generativity of the human brain and the capriciousness of our environments, requires the communicative policing of cognitive idiosyncrasies. Whatever rationality consists in, minimally it functions to minimize discrepancies between individuals, sometimes vis-à-vis their environments and sometimes not. Reason, like the narrator, makes things fit.

The ‘disinterested persons’ the narrator turns to are themselves misfits, with “Nippers’ ugly mood on duty and Turkey’s off.” The irony here, and what critics are prone to find most interesting, is that the three are anything but disinterested. The more thought-provoking fact, however, lies in the way they agree with their employer despite the wild variance of their answers. For all the idiosyncrasies of its constituents, the office ecology automatically manages to conserve its ‘paramount consideration’: functionality.

Baffled unto inaction, the narrator suffers bouts of explaining away Bartleby’s discrepancies in terms of his material and moral utilities. The fact of his indulgences alternately congratulates and exasperates him: Bartleby becomes (and remains) a bi-stable sociocognitive figure, alternately aggressor and victim. “Nothing so aggravates an earnest person as a passive resistance,” the narrator explains. “If the individual so resisted be of a not inhumane temper, and the resisting one perfectly harmless in his passivity; then, in the better moods of the former, he will endeavor charitably to construe to his imagination what proves impossible to be solved by his judgment.” To be earnest is to be prone to minimize social discrepancies, to optimize via the integrations of others. The passivity of “I would prefer not to” poises Bartleby upon a predictive-processing threshold, one where the vicissitudes of mood are enough to transform him from a ‘penniless wight’ into a ‘brooding Marius’ and back again. The signals driving the charitable assessment are constantly interfering with the signals driving the uncharitable assessment, forcing the different neural hypotheses to alternate.

Via this dissonance, the scrivener begins to train him, with each “I would prefer not to” tending “to lessen the probability of [his] repeating the inadvertence.”

The ensuing narrative establishes two facts. First, we discover that Bartleby belongs to the office ecology, and in a manner more profound than even the narrator, let alone any one of his employees. Discovering Bartleby indisposed in his office on a Sunday, the narrator finds himself fleeing his own premises, alternately lost in “sad fancyings—chimeras, doubtless, of a sick and silly brain” and “[p]resentiments of strange discoveries”—strung between delusion and revelation.

Second, we learn that Bartleby, despite belonging to the office ecology, nevertheless signals its ruination:

“Somehow, of late I had got into the way of involuntarily using this word “prefer” upon all sorts of not exactly suitable occasions. And I trembled to think that my contact with the scrivener had already and seriously affected me in a mental way. And what further and deeper aberration might it not yet produce?”

When the narrator catches Turkey also saying “prefer,” he says, “So you have got the word too,” as if a verbal tic could be caught like a cold. Turkey manifests cryptomnesia. Nippers does the same not moments afterward—every bit as unconsciously as Turkey. Knowing nothing of the way humans have evolved to unconsciously copy linguistic behaviour, the narrator construes Bartleby as a kind of contagion—or pollutant, a threat to his delicately balanced office ecology. He once again determines he must rid his office of the scrivener’s insidious influence, but, under that influence, once again allows prudence—or the appearance of such—to dissuade immediate action.

Bartleby at last refuses to copy, irrevocably undoing the foundation of the narrator’s ersatz rationalizations. “And what is the reason?” the narrator demands to know. Staring at the brick wall just beyond his window, Bartleby finally offers a different explanation: “Do you not see the reason for yourself.” Though syntactically structured as a question, this statement possesses no question mark in Melville’s original version (as it does, for instance, in the version anthologized by Norton). And indeed, the narrator misses the very reason implied by his own narrative—the wall that occupied so many of Bartleby’s reveries—and confabulates an apology instead: work-induced ‘impaired vision.’

But this rationalization, like all the others, is quickly exhausted. The internal logic of the office ecology is entirely dependent on the logic of Wall-street: the text continually references the functional exigencies commanding the ebb and flow of their lives, the way “necessities connected with my business tyrannized over all other considerations.” The narrator, when all is said and done, is an instrument of the Law and the countless institutions dependent upon it. At long last he fires Bartleby rather than merely resolving to do so.

He celebrates his long-deferred decisiveness while walking home, only to once again confront the blank wall the scrivener has become:

“My procedure seemed as sagacious as ever—but only in theory. How it would prove in practice—there was the rub. It was truly a beautiful thought to have assumed Bartleby’s departure; but, after all, that assumption was simply my own, and none of Bartleby’s. The great point was, not whether I had assumed that he would quit me, but whether he would prefer so to do. He was more a man of preferences than assumptions.”

And so, the great philosophical debate, both within the text and its critical reception, is set into motion. Lost in rumination, the narrator overhears someone say, “I’ll take odds he doesn’t,” on the street, and angrily retorts, assuming the man was referring to Bartleby, and not, as was actually the case, an upcoming election. Bartleby’s ‘passive resistance’ has so transformed his cognitive ecology as to crash his ability to make sense of his fellow man. Meaning, at least so far as it exists in his small pocket of the world, has lost its traditional stability.

Of course, the stranger’s voice, though speaking of a different matter altogether, had spoken true. Bartleby prefers not to leave the office that has become his home.

“What was to be done? or, if nothing could be done, was there any thing further that I could assume in the matter? Yes, as before I had prospectively assumed that Bartleby would depart, so now I might retrospectively assume that departed he was. In the legitimate carrying out of this assumption, I might enter my office in a great hurry, and pretending not to see Bartleby at all, walk straight against him as if he were air. Such a proceeding would in a singular degree have the appearance of a home-thrust. It was hardly possible that Bartleby could withstand such an application of the doctrine of assumptions.”

The ‘home-thrust,’ in other words, is to simply pretend, to physically enact the assumption of Bartleby’s absence, to not only ignore him, but to neglect him altogether, to the point of walking through him if need be. “But upon second thoughts the success of the plan seemed rather dubious,” the narrator realizes. “I resolved to argue the matter over with him again,” even though argument, Sellars’ famed ‘game of giving and asking for reasons,’ is something Bartleby prefers not to recognize.

When the application of reason fails once again, the narrator at last entertains the thought of killing Bartleby, realizing “the circumstance of being alone in a solitary office, up stairs, of a building entirely unhallowed by humanizing domestic associations” is one tailor-made for the commission of murder. Even acts of evil have their ecological preconditions. But rather than seize Bartleby, he ‘grapples and throws’ the murderous temptation, recalling the Christian injunction to love his neighbour. As research suggests, imagination correlates with indecision, the ability to entertain (theorize) possible outcomes: the narrator is nothing if not an inspired social confabulator. For every action-demanding malignancy he ponders, his aversion to confrontation occasions another reason for exemption, which is all he needs to reduce the discrepancies posed.

He resigns himself to the man:

“Gradually I slid into the persuasion that these troubles of mine touching the scrivener, had been all predestinated from eternity, and Bartleby was billeted upon me for some mysterious purpose of an all-wise Providence, which it was not for a mere mortal like me to fathom. Yes, Bartleby, stay there behind your screen, thought I; I shall persecute you no more; you are harmless and noiseless as any of these old chairs; in short, I never feel so private as when I know you are here. At last I see it, I feel it; I penetrate to the predestinated purpose of my life. I am content. Others may have loftier parts to enact; but my mission in this world, Bartleby, is to furnish you with office-room for such period as you may see fit to remain.”

But this story, for all its grandiosity, likewise melts before the recalcitrant scrivener. The comical notion that furnishing Bartleby an office could have cosmic significance merely furnishes a means of ignoring what cannot be ignored: how the man compromises, in ways crude and subtle, the systems of assumptions, the network of rational reflexes, comprising the ecology of Wall-street. In other words, the narrator’s clients are noticing…

“Then something severe, something unusual must be done. What! surely you will not have him collared by a constable, and commit his innocent pallor to the common jail? And upon what ground could you procure such a thing to be done?—a vagrant, is he? What! he a vagrant, a wanderer, who refuses to budge? It is because he will not be a vagrant, then, that you seek to count him as a vagrant. That is too absurd. No visible means of support: there I have him. Wrong again: for indubitably he does support himself, and that is the only unanswerable proof that any man can show of his possessing the means so to do.”

At last invisibility must be sacrificed, and regularity undone. The narrator ratchets through the facts of the scrivener’s cognitive bi-stability. An innocent criminal. An immovable vagrant. Unsupported yet standing. Reason itself cracks about him. And what reason cannot touch only fight or flight can undo. If the ecology cannot survive Bartleby, and Bartleby is immovable, then the ecology must be torn down and reestablished elsewhere.

It’s tempting to read this story in ‘buddy terms,’ to think that the peculiarities of Bartleby only possess the power they do given the peculiarities of the narrator. (One of the interesting things about the yarn is the way it both congratulates and insults the neuroticism of the critic, who, having canonized Bartleby, cannot but flatter themselves both by thinking they would have endured Bartleby the way the narrator does, and by thinking that surely they wouldn’t be so disabled by the man). The narrator’s decision to relocate allows us to see the universality of his type, how others possessing far less history with the scrivener are themselves driven to apologize, to exhaust all ‘quiet’ means of minimizing discrepancies. “[S]ome fears are entertained of a mob,” his old landlord warns him, desperate to purge the scrivener from No.—Wall-street.

Threatened with exposure in the papers—visibility—the narrator once again confronts Bartleby the scrivener. This time he comes bearing possibilities of gainful employment, greener pastures, some earnest, some sarcastic, only to be told, “I would prefer not to,” with the addition of, “I am not particular.” And indeed, as Bartleby’s preference severs ever more ecological connections, he seems to become ever more super-ecological, something outside the human communicative habitat. Repulsed yet again, the narrator flees Wall-street altogether.

Bartleby, meanwhile, is imprisoned in the Tombs, the name given to the House of Detention in lower Manhattan. A walled street is replaced by a walled yard—which, the narrator will tell Bartleby, “is not so sad a place as one might think,” the irony being, of course, that with sky and grass the Tombs actually represent an improvement over Wall-street. Bartleby, for his part, only has eyes for the walls—his unobstructed view of obstruction. To assure his former scrivener is well fed, the narrator engages the prison cook, who asks him whether Bartleby is a forger, likening the man to Monroe Edwards, a famed slave-trader and counterfeiter in Melville’s day. Despite the criminal connotations of Nippers, the narrator assures the man he was “never socially acquainted with any forgers.”

On his next visit, he discovers that Bartleby’s metaphoric ‘dead wall reveries’ have become literal. The narrator finds him “huddled at the base of the wall, his knees drawn up, and lying on his side, his head touching the cold stones,” dead of starvation. Cutting the last, most fundamental ecological reflex of all—the consumption of food—Bartleby has finally touched the face of obstruction… oblivion.

The story proper ends with one last misinterpretation: the cook assuming that Bartleby sleeps. And even here, at this final juncture, the narrator apologizes rather than corrects, quoting Job 3:14, using the Holy Bible, perhaps, to “mason up his remains in the wall.” Melville, however, seems to be gesturing to the fundamental problem underwriting the whole of his tale, the problem of meaning, quoting a fragment of Job in extremis, asking God why he should have been born at all, if his lot was only desolation. What meaning resides in such a life? Why not die an innocent?

Like Bartleby.

What the narrator terms the “sequel” consists of no more than two paragraphs (set apart by a ‘wall’ of eight asterisks), the first divulging “one little item of rumor” which may or may not be more or less true, the second famously consisting in, “Ah Bartleby! Ah humanity!” The rumour occasioning these apostrophic cries suggests “that Bartleby had been a subordinate clerk in the Dead Letter Office at Washington, from which he had been suddenly removed by a change of administration.”

What moves the narrator to passions too complicated to scrutinize is nothing other than the ecology of such a prospect: “Conceive a man by nature and misfortune prone to a pallid hopelessness, can any business seem more fitted to heighten it than that of continually handling these dead letters, and assorting them for the flames?” Here at last, he thinks, we find some glimpse of the scrivener’s original habitat: dead letters potentially fund the reason the man forever pondered dead walls. Rather than a forger, one who cheats systems, Bartleby is an undertaker, one who presides over their crashing. The narrator paints his final rationalization, Bartleby mediating an ecology of fatal communicative interruptions:

“Sometimes from out the folded paper the pale clerk takes a ring:—the finger it was meant for, perhaps, moulders in the grave; a bank-note sent in swiftest charity:—he whom it would relieve, nor eats nor hungers any more; pardon for those who died despairing; hope for those who died unhoping; good tidings for those who died stifled by unrelieved calamities. On errands of life, these letters speed to death.”

An ecology, in other words, consisting of quotidian ecological failures, life lost for the interruption of some crucial material connection, be it ink or gold. Thus are Bartleby and humanity entangled in the failures falling out of neglect, the idiosyncratic, the addresses improperly copied, and the ill-timed, the words addressed to those already dead. A meta-ecology where discrepancies can never be healed, only consigned to oblivion.

But, of course, were Bartleby still living, this ‘sad fancying’ would likewise turn out to be a ‘chimera of a sick and silly brain.’ Just another way to brick over the questions. If the narrator finds consolation, the wreckage of his story remains.

 

II

I admit that I feel more like Ahab than Ishmael… most of the time. But I’m not so much obsessed by the White Whale as by what is obliterated when it’s revealed as yet another mere cetacean. Be it the wrecking of The Pequod, or the flight of the office at No.—Wall-street, the problem of meaning is my White Whale. “Bartleby, the Scrivener” is compelling, I think, to the degree it lends that problem the dimensionality of narrative.

Where in Moby-Dick the relation between the inscrutable and the human is presented via Ishmael, which is to say in the third person, in Bartleby the relation is presented in the first: the narrator is Ahab, every bit as obsessed with his own pale emblem of unaccountable discrepancy—every bit as maddened. The violence is merely sublimated in quotidian discursivity.

The labour of Ishmael falls to the critic. “Life is so short, and so ridiculous and irrational (from a certain point of view),” Melville writes to John C. Hoadley in 1877, “that one knows not what to make of it, unless—well, finish the sentence for yourself.” A great many critics have, spawning what Dan McCall termed (some time ago now) the ‘Bartleby Industry.’ There are so many interpretations, in fact, that the only determinate thing one can say regarding the text is that it systematically underdetermines every attempt to determine its ‘meaning.’

In the ecology of literary and philosophical critique, Bartleby remains a crucial watering hole in an ever-shrinking reservation of the humanities. A great number of these interpretations share the narrator’s founding assumption, that Bartleby—the character—represents something exceptional. Consider, for instance, Deleuze in “Bartleby; or, the Formula.”

“If Bartleby had refused, he could still be seen as a rebel or insurrectionary, and as such would still have a social role. But the formula stymies all speech acts, and at the same time, it makes Bartleby a pure outsider [exclu] to whom no social position can be attributed. This is what the attorney glimpses with dread: all his hopes of bringing Bartleby back to reason are dashed because they rest on a logic of presuppositions according to which an employer ‘expects’ to be obeyed, or a kind of friend listened to, whereas Bartleby has invented a new logic, a logic of preference, which is enough to undermine the presuppositions of language as a whole.” 73

Or consider Zizek, who uses Bartleby to conclude The Parallax View no less:

“In his refusal of the Master’s order, Bartleby does not negate the predicate; rather, he affirms a nonpredicate: he does not say that he doesn’t want to do it; he says that he prefers (wants) not to do it. This is how we pass from the politics of “resistance” or “protestation,” which parasitizes upon what it negates, to a politics which opens up a new space outside the hegemonic position and its negation.” 380-1

Bartleby begets ‘Bartleby politics,’ the possibility of a relation to what stands outside relationality, a “move from something to nothing, from the gap between two ‘somethings’ to the gap that separates a something from nothing, from the void of its own place” (381). Bartleby isn’t simply an outsider on this account, he’s a pure outsider, more limit than liminal. And this, of course, is the very assumption that the narrator himself carries away intact: that Bartleby constitutes something ontologically or logically exceptional.

I no longer share this assumption. Like Borges in his “Prologue to Herman Melville’s ‘Bartleby,’” I see “the symbol of the whale is less apt for suggesting the universe is vicious than for suggesting its vastness, its inhumanity, its bestial or enigmatic stupidity.” Melville, for all the wide-eyed grandiloquence of his prose, was a squinty-eyed skeptic. “These men are all cracked right across the brow,” he would write of philosophers such as Emerson. “And never will the pullers-down be able to cope with the builders-up.” For him, the interest always lies in the distances between lofty discourse and the bloody mundanities it purports to solve. As he writes to Hawthorne in 1851:

“And perhaps after all, there is no secret. We incline to think that the Problem of the Universe is like the Freemason’s mighty secret, so terrible to all children. It turns out, at last, to consist in a triangle, a mallet, and an apron—nothing more! We incline to think that God cannot explain His own secrets, and that He would like a little more information upon certain points Himself. We mortals astonish Him as much as He us.”

It’s an all too human reflex. Ignorance becomes justification for the stories we want to tell, and we are filled with “oracular gibberish” as a result.

So what if Bartleby holds no secrets outside the ‘contagion of nihilism’ that Borges ascribes to him?

As a novelist, I cannot but read the tale, with its manifest despair and gallows humour, as the expression of another novelist teetering on the edge of professional ruin. Melville conceived and wrote “Bartleby, the Scrivener” during a dark period of his life. Both Moby-Dick and Pierre had proved to be critical and commercial failures. As Melville would write to Hawthorne:

“What I feel most moved to write, that is banned—it will not pay. Yet, altogether write the other way I cannot. So the product is a final hash, and all my books are botches.”

Forgeries, neither artistic nor official. Two species of neuroticism plague full-time writers, particularly if they possess, as Melville most certainly did, a reflective bent. There’s the neuroticism that drives a writer to write, the compulsion to create, and there’s the neuroticism secondary to a writer’s consciousness of this prior incapacity, the neurotic compulsion to rationalize one’s neuroticism.

Why, for instance, am I writing this now? Am I a literary critic? No. Am I being paid to write this? No. Are there things I should be writing instead? Buddy, you have no idea. So why don’t I write as I should?

Well, quite simply, I would prefer not to.

And why is this? Is it because I have some glorious spark in me? Some essential secret? Am I, like Bartleby, a pure outsider?

Or am I just a fucking idiot? A failed copyist.

For critics, the latter is pretty much the only answer possible when it comes to living writers who genuinely fail to copy. No matter how hard we wave discrepancy’s flag, we remain discrepancy minimization machines—particularly where social cognition is concerned. Living literary dissenters cue reflexes devoted to living threats: the only good discrepancy is a dead discrepancy. As the narrator discovers, attributing something exceptional becomes far easier once the dissenter is dead. Once the source falls silent, the consequences possess the freedom to dispute things as they please.

Writers themselves, however, discover they are divided, that Ahab is not Ahab, but Ishmael as well, the spinner of tales about tales. A failed copyist. A hapless lawyer. Gazing at obstruction, chasing the whale, spinning rationalization after rationalization, confabulating as a human must, taking meagre heart in spasms of critical fantasy.

Endless interpretative self-deception. As much as I recognize Bartleby, I know the narrator only too well. This is why for me, “Bartleby, the Scrivener” is best seen as a prank on the literary establishment, a virus uploaded with each and every Introduction to American Literature class, one assuring that the critic forever bumbles as the narrator bumbles, waddling the easy way, the expected way, embodying more than applying the ‘doctrine of assumptions.’ Bartleby is the paradigmatic idiot, both in the ancient Greek sense of idios, private unto inscrutable, and idiosyncratic unto useless. But for the sake of vanity and cowardice, we make of him something vast, more than a metaphor for x. The character of Bartleby, on this reading, is not so much key to understanding something ‘absolute’ as he is key to understanding human conceit—which is to say, the confabulatory stupidity of the critic.

But explaining the prank, of course, amounts to falling for the prank (this is the key to its power). No matter how mundane one’s interpretation of Bartleby, as an authorial double, as a literary prank, it remains simply one more interpretation, further evidence of the narrative’s profound indeterminacy. ‘Negative exceptionalists’ like Deleuze or Zizek (or Agamben) need only point out this fact to rescue their case—don’t they? Even if Melville conceived Bartleby as his neurotic alter-ego, the word-crazed husband whose unaccountable preferences had reduced his family to penury (and so, charity), he nonetheless happened upon “a zone of indetermination or indiscernibility in which neither words nor characters can be distinguished” (“Bartleby; or, the Formula,” 76).

No matter how high one stacks these mundane interpretations of Bartleby—as an authorial alter-ego, a psycho-sociological casualty, an exemplar of passive resistance, and so on—his rationality-crashing function remains every bit as profound, as exceptional. Doesn’t it? After all, nothing essential binds the distal intent of the author (itself nothing but another narrative) to the proximate effect of the text, which is to “send language itself into flight” (76). Once we set aside the biographical, psychological, historical, economic, political, and so on, does not this formal function remain? And is it not irreducible, exceptional?

That depends whether you think,

[image: a Necker Cube]

is exceptional. What should we say about Necker Cubes? Do they mark the point where the visibility of the visible collapses, generating ‘a zone of indetermination or indiscernibility in which neither indents nor protrusions can be distinguished’? Are they ‘pure figures,’ efficacies that stand outside the possibility of intelligible geometry? Or do they merely present the visual cortex with the demand to distinguish between indents and protrusions absent the information required to settle that demand, thus stranding visual experience upon the predictive threshold of both? Are they bi-stable images?

The first explanation pretty clearly mistakes a heuristic breakdown in the cognition of visual information for an exceptional visual object, something intrinsically indeterminate—something super-geometrical, in fact. When we encounter something visually indeterminate, we immediately blame our vision, which is to say, the invisible, enabling dimension of visual cognition. Visual discrepancies had real reproductive consequences, evolutionarily speaking. Thanks to medial neglect, we had no way of cognizing the ecological nature of vision, so we could only blink, peer, squint, rub our eyes, or change our position. If the discrepancy persisted, we wondered at it, and if we could, transformed it into something useful—be it cuing environmental forms on cave or cathedral walls (‘visual representations’) or cuing wonder with kaleidoscopes at Victorian exhibitions.
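
The deflationary account is easy to make concrete. Here is a toy model—my own illustration with invented parameters, not anything drawn from vision science or this post—of a percept stranded at the predictive threshold: two hypotheses fit the unchanging stimulus equally well, and noise plus adaptation alone generate the famous reversals.

```python
# Toy bistability model (my own sketch, invented parameters): two hypotheses
# explain the same stimulus equally well; noisy competition plus adaptation
# of the current winner yields spontaneous perceptual reversals.
import random

random.seed(1)

adapt = {"indent": 0.0, "protrusion": 0.0}   # fatigue for each interpretation
percepts = []

for t in range(300):
    # Evidence is identical (1.0) for both; only noise and fatigue differ.
    score = {h: 1.0 - adapt[h] + random.gauss(0, 0.1) for h in adapt}
    winner = max(score, key=score.get)
    percepts.append(winner)
    for h in adapt:  # the winning percept fatigues, the loser recovers
        adapt[h] = min(1.0, adapt[h] + 0.02) if h == winner else max(0.0, adapt[h] - 0.02)

flips = sum(a != b for a, b in zip(percepts, percepts[1:]))
print(f"{flips} reversals over 300 timesteps, stimulus unchanged throughout")
```

Nothing ‘super-geometrical’ is required: underdetermination plus a tie-breaking mechanism suffices.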

Likewise, Deleuze and Zizek (and many, many others) are mistaking a heuristic breakdown in the cognition of social information for an exceptional social entity, something intrinsically indeterminate—something super-social. Imagine encountering a Bartleby in your own place of employ. Imagine your employer not simply tolerating him, but enabling him, allowing him to drift ever deeper into anorexic catatonia. Initially, when we encounter something socially indeterminate in vivo, we typically blame communication—as does the narrator with Bartleby. Social discrepancies, one might imagine, had profound reproductive consequences (given that reproduction is itself social). The narrator’s sensitivity to such discrepancies is the sensitivity that all of us share. Given medial neglect, however, we have no way of cognizing the ecological nature of social cognition. So we check with our colleagues just to be sure (‘Am I losing my mind here?’), then we blame the breakdown in rational reflexes on the man himself. We gossip, test out this or that pet theory, pester spouses who, insensitive to potential micropolitical discrepancies, urge us to file a complaint with someone somewhere. Eventually, we either quit the place, get the poor sod some help, or transform him into something useful, like “Bartleby politics” or what have you. This is the prank that Melville lays out with the narrator—the prank that all post-modern appropriations of this tale trip into headlong…

The ecological nature of cognition entails the blindness of cognition to its ecological nature. We are distributed systems: we evolved to take as much of our environments for granted as we possibly could, accessing as little as possible to solve as many problems as possible. Experience and cognition turn on shallow information ecologies, blind systems turning on reliable (because reliably generated) environmental frequencies to solve problems—especially communicative problems. Absent the requisite systems and environments, these ecologies crash, resulting in the application of cognitive systems to situations they cannot hope to solve. Those who have dealt with addicted or mentally-ill loved ones know the profundity of these crashes first-hand, the way the unseen reflexes (‘preferences’) governing everyday interactions cast you into dismay and confusion time and again, all for want of applicability. There’s the face, the eyes, all the cues signaling them as them, and then… everything collapses into mealy alarm and confusion. Bartleby, with his dissenting preference, does precisely the same: Melville provides exquisite experiential descriptions of the dumbfounding characteristic of sociocognitive crashes.

Bartleby need not be a ‘pure outsider’ to do this. He just needs to provide enough information to demand disambiguation, but not enough information to provide it. “I would prefer not to”—Bartleby’s ‘formula,’ according to Deleuze—is anything but ‘minimal’: its performance functions the way it does because of the intricate communicative ecology it belongs to. But given medial neglect, our blindness to ecology, the formula is prone to strike us as something quite different, as something possessing no ecology.

It certainly strikes Deleuze as such:

“The formula is devastating because it eliminates the preferable just as mercilessly as any nonpreferred. It not only abolishes the term it refers to, and that it rejects, but also abolishes the other term it seemed to preserve, and that becomes impossible. In fact, it renders them indistinct: it hollows out an ever expanding zone of indiscernibility or indetermination between some nonpreferred activities and a preferable activity. All particularity, all reference is abolished.” 71

Since preferences affirm, ‘preferring not to’ (expressed in the subjunctive no less) can be read as an affirmative negation: it affirms the negation of the narrator’s request. Since nothing else is affirmed, there’s a peculiar sense in which ‘preferring not to’ possesses no reference whatsoever. Medial neglect assures that reflection on the formula occludes the enabling ecology, that asking what the formula does will result in fetishization, the attribution of efficacy in an explanatory vacuum. Suddenly ‘preferring not to’ appears to be a ‘semantic disintegration grenade,’ something essentially disruptive.

In point of natural fact, however, human sociocognition is fundamentally interactive, consisting in the synchronization of radically heuristic systems given only the most superficial information. Understanding one another is a radically interdependent affair. Bartleby presents all the information cuing social reliability, thereby consistently cuing predictions of reliability that turn out to be faulty. The narrator subsequently rummages through the various tools we possess to solve harmless acts of unreliability given medial neglect—tools which have no applicability in Bartleby’s case. Not only does Bartleby crash the network of predictive reflexes constituting the office ecology, he crashes the sociocognitive hacks that humans in general use to troubleshoot such breakdowns. He does so, not because of some arcane semantic power belonging to the ‘formula,’ but because he manifests as a sociocognitive Necker-Cube, cuing noncoercive troubleshooting routines that have no application given whatever his malfunction happens to be.

This is the profound human fact that Melville’s skeptical imagination fastened upon, as well as the reason Bartleby is ‘nothing in particular’: all human social cognition is fundamentally ecological. Consider, once again, the passage where the narrator entertains the possibility of neglecting Bartleby altogether, simply pretending he was absent:

“What was to be done? or, if nothing could be done, was there any thing further that I could assume in the matter? Yes, as before I had prospectively assumed that Bartleby would depart, so now I might retrospectively assume that departed he was. In the legitimate carrying out of this assumption, I might enter my office in a great hurry, and pretending not to see Bartleby at all, walk straight against him as if he were air. Such a proceeding would in a singular degree have the appearance of a home-thrust. It was hardly possible that Bartleby could withstand such an application of the doctrine of assumptions. But upon second thoughts the success of the plan seemed rather dubious. I resolved to argue the matter over with him again.”

Having reached the limits of sociocognitive application, he proposes simply ignoring any subsequent failure in prediction, in effect, wishing the Bartlebian crash space away. The problem, of course, is that it ‘takes two to tango’: he has no choice but to ‘argue the matter again’ because the ‘doctrine of assumptions’ is interactional, ecological. What Melville has fastened upon here is the way the astronomical complexity of the sociocognitive (and metacognitive) systems involved holds us hostage, in effect, to their interactional reliability. Meaning depends on maddening sociocognitive intricacies.

The entirety of the story illustrates the fragility of this cognitive ecosystem despite its all-consuming power. Time and again Bartleby is characterized as an ecological casualty of the industrialization of social relations, be it the mass disposal of undelivered letters or the mass reproduction of legally binding documentation. Like ‘computer,’ ‘copier’ names something that was once human but has since become technology. But even as Bartleby’s breakdown expresses the system’s power to break the maladapted, it also reveals its boggling vulnerability, the ease with which it evaporates into like-minded conspiracies and ‘mere pretend.’ So long as everyone plays along—functions reliably—this interdependence remains occluded, and the irrationality (the discrepancy-generating stupidity) of the whole never need be confronted.

In other words, the lesson of Bartleby can be profound, as profound as human communication and cognition itself, without implying anything exceptional. Stupidity, blind, obdurate obliviousness, is all that is required. A minister’s black veil, a bit of crepe poised upon the right interactional interface, can throw whole interpretative communities from their pins. The obstruction, the blank wall, need not conceal anything magical to crash the gossamer ecologies of human life. It need only appear to be a window, or more cunning still, a window upon a wall. We need only be blind to the interactional machinery of looking to hallucinate absolute horizons. Blind to the meat of life.

And in this sense, we can accuse negative exceptionalists such as Deleuze and Zizek not simply of ignoring life, the very topos of literature, but of concealing the threat that the technologization of life poses to life. Only in an ecology can we understand the way victims can at once be assailants absent aporia, how Bartleby, overthrown by the technosocial ecologies of his age, can in turn overthrow those ecologies. Only understanding life for what we know it to be—biological—allows us to see the profound threat the endless technological rationalization of human sociocognitive ecologies poses to the viability of those ecologies. For Bartleby, in revealing the ecological fragility of human social cognition, how break begets break, also reveals the antithesis between ‘progress’ and ‘meaning,’ how the former can only carry the latter so far before crashing.

As Deleuze and Zizek have it, Bartleby holds open a space of essential resistance. As the reading here has it, Bartleby provides a grim warning regarding the ecological fragility of human social cognition. One can even look at him as a blueprint for the potential weaponization of anthropomorphic artificial intelligence, systems designed to strand individual decision-making upon thresholds, to command inaction via the strategic presentation of cues. Far from representing some messianic discrepancy, apophatic proof of transcendence, he represents the way we ourselves become cognitive pollutants when abandoned to polluted cognitive ecologies.

Notes Toward a Cognitive Biology of Theoretical Physics

by rsbakker

My favourite example of what I’ve been calling the ‘scandal of self-understanding’ is the remarkable—even gobsmacking—fact that we can explain the origins of the universe itself while remaining utterly unable to explain this explanation. You could say the great, grand blindspot in physics is physics itself. Imagine raising a gothic cathedral absent anything but the murkiest consciousness of hands! What’s more, imagine thinking this incapacity entirely natural, to raise roofs, not only blind to lifting, but blind to this blindness as well. Small wonder so many think knowledge an irreducible miracle.

This blindness to cognitive means reveals a quite odd condition on progress in physics: that it need not understand itself to understand nature. So far, that is.

Certainly, this fact is one worth considering in its own right. Since heuristic neglect leverages a general, thoroughly naturalistic theory of cognition, its relevance should apply to all of our cognitive endeavours, including the very hinge of Pandora’s Box, physics. Since I have no skin in any academic game, I need not allow ingroup expectations to pin my commitments to any institutional blind alley. I’m free to take original assumptions to problems invulnerable to existing assumptions. And even though I lack the technical expertise to make the least dent in the science, I can perhaps suggest novel points of departure for those who do.

Physics is far from alone in suffering this second-order blindness. Biologically speaking, almost all problems are solved absent access to the conditions of problem-solving. Motor cortices ‘know’ as much about themselves as the fingers they control. Cognition is almost always utterly oblivious to the contemporaneous act of cognizing.

Call this trivial fact medial neglect: the congenital insensitivity of cognition to contemporaneous cognizing. A number of dramatic consequences fall out of this empirical platitude. How does human cognition overcome medial neglect? Our brains are, as a matter of fact, utterly insensitive to their own biological constitution. They cannot immediately cognize themselves for what they are. So then how do they cognize their own cognitive capacities?

Obviously, otherwise. In ways that are useful rather than true. In ways that circumvent medial neglect. Heuristically.

Given medial neglect, it simply follows that we must cognize problematic systems assuming what might be called meta-irrelevance, that no substantial knowledge of our knowing is required to leverage knowing. For instance, this present act of communication on my part requires that countless facts obtain, not the least of which is a tremendous amount of biological and historical similarity, that you and I share roughly the same physiology and educational background. If I were suffering psychosis, or you were raised by wolves, then this communicative exchange could only happen if we could somehow repair these discrepancies. Absent such second-order capacity, our communication depends on the absence of such second-order problems, and therefore on the irrelevance of second-order knowledge to achieve whatever it is we want to achieve.

Medial neglect entails meta-irrelevance, the capacity to solve problems absent the capacity to solve for that capacity. We can distinguish between the meta-irrelevance of our frame, the absence of defeating circumstances, and the meta-irrelevance of our constitution, the absence of cognitive incapacities. One of the fascinating things about this distinction is the way the two great theoretical edifices of physics, general relativity and the standard model of particle physics, required overcoming each form of meta-irrelevance. With general relativity, Einstein had to overcome a form of frame neglect to see space and time as part and parcel of the machinery of the universe. With quantum mechanics, Bohr and others had to overcome a form of constitutive neglect and invent a new rationality. When cognizing the universe on the greatest scales, your frame of reference makes a tremendous difference to what you see. When cognizing reality at infinitesimal scales, your cognitive biology makes a tremendous difference to what you see. In each case, you cannot understand the fundamentals short of understanding yourself as part of the system cognized.

Our cognitive biology, in other words, is only irrelevant to cognitive determinations in classical (ancestral) problem ecologies. This explains why general relativity was more ‘insight’ driven, while the standard model was much more experimentally driven. General relativity, which belongs to classical mechanics, only strains meta-irrelevance (forces us to consider our cognitive capacities) at its extremes. Quantum mechanics snaps it from the outset. Resolving meta-irrelevance required conceding both methodology and intuition before physicists could report, with numerous provisos, the ‘quantum world.’ Understanding which classical questions can and cannot be asked of quantum mechanics amounts to charting the extent of meta-irrelevance, the degree to which our cognitive biology (in addition to our cognitive history) can be neglected. The limits of classical interrogation are the limits of our cognitive biology vis-à-vis the microscopic, the point where many (but not all) of our physical intuitions trip into crash space.

The notorious debate between Einstein and Bohr regarding whether quantum mechanics is complete and so reveals an exceptional (classically inconsistent) nature, or incomplete, and so reveals the existence of hidden variables, bears some striking similarities to debates regarding the nature of experience and cognition. If quantum mechanics is complete, as Bohr maintained, then our basic cognitive biology is relevant to our understanding of the microscopic. If quantum mechanics is incomplete, as Einstein maintained, then our basic cognitive biology is irrelevant to our understanding of the microscopic—the problem lies in our cognitive history, which is to say, the kinds of theories we bring to bear. The central issue, in other words, is the same issue structuring debates regarding the nature of knowledge and experience: whether the apparently exceptional nature of the quantum, like the exceptional nature of experience and cognition, isn’t an artifact of some incapacity on our part. The primary question, in other words, is whether our position or constitution is relevant to understanding the conundrums posed, on the one hand, by quantum mechanics, and on the other hand, by knowledge and experience.

(It’s worth noting, here, that this comparison seems to contradict the way I normally use quantum mechanics to argue the need to abandon biologically entrenched intuitions. But if quantum mechanics is both exceptional (insofar as it violates classical mechanics) and scientifically warranted, cannot the intentionalist claim the same? Where intentionalists use the empirical power of operationalizations of intentional posits (such as beliefs) to argue their objectivity, quantum realists use the empirical power of quantum mechanical postulates (such as wave-functions) to argue their objectivity. But there are two key differences undermining this apparently happy analogy: first, where intentionalism is nothing if not intuitive, quantum mechanics is, to put it mildly, anything but. And second, quantum mechanics is the most powerful, most applicable theory in the history of science, whereas intentionalism is plagued both by issues of reproducibility within experimental contexts and issues of generalization beyond those contexts.)

With quantum mechanics, the collapse of meta-irrelevance, the need to identify and suspend cognitive reflexes (sort between questions), is compelled by the deep information cognitive ecologies devised by physicists. The more elementary things get, the less applicable the machinery of human cognition becomes. The meta-irrelevance of human cognition, you could say, maps out our ‘scalar neglect-structure,’ the degree to which knowledge and experience are geared to solve the proximate and granular. Science provided the prostheses required to extend our humble capacities to solve the macroscopic. Despite our ancestral neglect-structure, our basic cognitive capacities possessed cosmic applicability—we wanted only for the genius of Einstein to discover how. But when it came to the microscopic, the intuitive became a liability. “We are all agreed that your theory is crazy,” Bohr told Wolfgang Pauli once. “The question which divides us is whether it is crazy enough to have a chance of being correct.”

On the view sketched here, the fundamental divide between general relativity and quantum mechanics lies in the latter’s cognitive biological relevance. This suggests that quantum mechanics, if not the more fundamental theory, functions in a problem-ecology where general relativity simply has no application. Most physicists see quantum mechanics as more fundamental, but their arguments tend to be formal and ontological as opposed to ecological. As we saw above, the independence heuristic, the presumption of meta-irrelevance, is the default, core to all our cognitive orientations—and this is as true of physicists as it is of anyone. Physicists understand the debate, in other words, with a tendency to overlook the relevance of their cognitive biology, and so presume the gap between general relativity and quantum mechanics is merely mathematical or conceptual. The failure of biological irrelevance, however, exposes the physical dimensions of the problem, how the issue lies in the constitution of human cognition.

Theoretical physics has always understood that humans are physical systems, entropic conduits, like all things living. But appreciating the fact of cognitive biology is one thing and appreciating the activity of cognitive biology is quite another. When we sweep away all the second-order clutter, quantum mechanics is something we organisms do, a behavioural product of the very nature quantum mechanics reveals. Our cognitive nature, the ancestral defaults geared to optimize ancestral circumstances, systematically confounds our attempts to cognize nature. Quantum mechanics shows we are natural in such a way as to stymie our attempts to understand nature, short of theoretical gerrymandering via robust experimental feedback.

This raises the spectre that human cognition is constitutionally incapable of unifying general relativity and quantum mechanics. It could be the case that a nonclassical macroscopic theory could supplant general relativity and subsume quantum mechanics, but short of the kinds of experimental data available to the pioneers of quantum mechanics, we simply have no way of isolating the questions that apply from the questions that don’t, and so sorting signal from noise. The truth could be ‘out there,’ lying somewhere beyond our biological capacities, occupying a space that only our machines can hope to fathom. If the quantum theorization of gravity fails, and it becomes clear that quantum mechanics is only heuristically applicable to classical contexts, then the cognitive biological position outlined here suggests we might have to become something other than what we are to fathom the universe as a whole. Re-engineering neural configurations via learning alone (theory formation) may no longer be enough.

The failure of cognitive biological relevance in quantum mechanics underscores what might be called the problem of diminishing applicability, how the further our constitution is pushed from our ancestral, ecological sweet spots, the systems we evolved to take for granted, the less we can presume meta-irrelevance, the more we should expect our cognitive biological inheritance to require remediation, lest it crash.

After Yesterday: Review and Commentary of Catherine Malabou’s Before Tomorrow: Epigenesis and Rationality

by rsbakker

Experiments like the Wason Selection Task dramatically demonstrate the fractionate, heuristically specialized nature of human cognition. Dress the same logical confound in social garb and it suddenly becomes effortless. We are legion, both with reference to our environments and to ourselves. The great bulk of human cognition neglects the general nature of things, targeting cues instead, information correlated to subsequent events. We metacognize none of this.
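
The task’s logical core is easily made explicit. The sketch below—my own toy illustration, with invented card and patron values, not anything from the post—enumerates which cards can falsify a rule of the form ‘if P then Q’; the selection most subjects botch in the abstract version, and ace in the social one, is the same:

```python
# Toy model of the Wason Selection Task (illustrative values, my own).
# A rule "if P then Q" is falsified only by a P-and-not-Q case, so the
# cards worth flipping are exactly those showing P or showing not-Q.

def must_flip(visible, is_p, is_not_q):
    """Return the visible faces that could falsify 'if P then Q'."""
    return [card for card in visible if is_p(card) or is_not_q(card)]

# Abstract version: "If a card shows a vowel, its other side is even."
cards = ["A", "K", "4", "7"]
print(must_flip(cards,
                is_p=lambda c: c in "AEIOU",                          # vowel (P)
                is_not_q=lambda c: c.isdigit() and int(c) % 2 == 1))  # odd (not-Q)
# -> ['A', '7']; most subjects wrongly select 'A' and '4'

# Social version: "If someone drinks beer, they must be over 18."
patrons = ["beer", "cola", "25", "16"]
print(must_flip(patrons,
                is_p=lambda c: c == "beer",                           # drinking (P)
                is_not_q=lambda c: c.isdigit() and int(c) < 18))      # minor (not-Q)
# -> ['beer', '16']; framed socially, the same check becomes effortless
```

Identical logical form, wildly different human performance—evidence that specialized social machinery, not a general logic module, is doing the work.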

Insofar as Catherine Malabou concedes the facts of neurobiology, she concedes these facts.

In Before Tomorrow: Epigenesis and Rationality, she attempts to rescue the transcendental via a conception of ‘transcendental epigenesis.’ The book orbits about section 27 (pp. 173-175 in my beaten Kemp-Smith translation) of the Transcendental Deduction in the second edition of The Critique of Pure Reason, where Kant considers the vexed question of the source of the agreement of the transcendental and the empirical, conceptuality and experience. Kant considers three possibilities: the agreement is empirically sourced, transcendentally sourced, or fundamentally (divinely) given. Since the first and the third contradict the necessity of the transcendental, he opts for the second, which he cryptically describes as “the epigenesis of pure reason” (174), a phrase which has perplexed Kant scholars ever since.

She examines a cluster of different theories on Kant’s meaning, each pressing Kant toward either empirical or theological contingency, and thus the very contradiction he attempts to avoid with his invocation of ‘epigenesis.’ Malabou undertakes a defense of Kantian transcendental epigenesis in the context of contemporary neurobiology, transforming Kant’s dilemma into a diagnosis of the dilemma she sees confronting Continental philosophy as a whole.

Via Foucault, she argues the historicity of transcendence as epigenesis understood as the invention of meaning (which she thinks is irreducible). “[N]o biologist,” she writes, “examines the relation between genetics and epigenetics in terms of meaning.” Via Heidegger (“who is no doubt the deepest of all of Kant’s readers”) she argues that the ecstatic temporality of transcendence reveals the derivative nature of empirical and theological appropriations, which both cover over primordial time (time before time). She ultimately parts with Heidegger on the issue of primordiality, but she takes away the phenomenological interpolation of past, present, and future, building toward the argument that epigenesis is never simply archaeological, but aimed as well—teleological.

Meillassoux seems to overthrow the primordial via reference to the ancestral, the time before the time before time, but he ultimately fails to deliver on the project of contingency. For all the initial praise Malabou expresses for his project, he ultimately provides her with a critical foil, an example of how not to reach beyond the Kantian tradition. (I especially enjoyed her Heideggerean critique of his time before the time before time as being, quite obviously (I think), the time after the time before time).

She ultimately alights on the Critique of Judgment, with a particular emphasis on section 81, which contains another notorious reference to epigenesis. The problem, once again, was that reading ‘the epigenesis of pure reason’ empirically—neurobiologically—obliterates the transcendental. Reading it formally, on the other hand, renders it static and inexplicable. What Malabou requires is some way of squaring the transcendental with the cognitive scientific revolution, lest Continental philosophy dwindle into a museum relic. She uses the mingling of causal and teleological efficacy Kant describes in the Third Critique as her ‘contact point’ between the transcendental and the empirical, since it is in the purposiveness of life that contingency and necessity are brought together.

Combining this with ecstatic temporality on the one hand and neurobiological life on the other reveals an epigenesis that bridges the divide between life and thought in the course of explaining the adaptivity of reason without short-circuiting transcendence: “insofar as its movement is also the movement of the reason that thinks it, insofar as there is no rationality without epigenesis, without self-adjustment, without the modification of the old by the new, the natural and objective time of epigenesis may also be considered to be the subjective and pure time of the formation of horizon by and for thought.”

And so is the place of cognitive science made clear: “what neurobiology makes possible today through its increasingly refined description of brain mechanisms and its use of increasingly effective imaging techniques is the actual taking into account, by thought, of its own life.” The epigenetic ratchet now includes the cognitive sciences; philosophical meaning can now be generated on the basis of the biology of life. “What the neurobiological perspective lacks fundamentally,” she writes, “is the theoretical accounting for the new type of reflexivity that it enables and in which all of its philosophical interest lies.” Transcendental epigenesis, Malabou thinks, allows neurobiologically informed philosophy, one attuned to the “adventure of subjectivity,” to inform neurobiology.

She concludes, interestingly, with a defense of her analogical methodology, something I’ve criticized her for previously (and actually asked her about at a public lecture she gave in 2015). I agree that we’re all compelled to resort to cartoons when discussing these matters, but the problem is that we have no way of arbitrating whether our analogies render some dynamic tractable, or merely express some coincidental formal homology, short of their abductive power, their ability to render domains scrutable. It is the power of a metaphor to clarify, and not merely match, that provides the yardstick of theoretical analogical adequacy.

In some ways, I genuinely loved this book, especially for the way it reads like a metaphysical whodunnit, constantly tying varied interpretations to the same source material, continually interrogating different suspects, dismissing them with a handful of crucial clues in hand. This is the kind of book I once adored: an extended meditation on a decisive philosophical issue anchored by close readings of genuinely perplexing texts.

Unfortunately, I’m pretty sure Malabou’s approach completely misconstrues the nature of the problem the cognitive sciences pose to Continental philosophy. As a result, I fear she obscures the disaster about to befall, not simply her tradition, but arguably the whole of humanity.

When viewed from a merely neurobiological perspective, cognitive systems and environments form cognitive ecologies—their ‘epigenetic’ interdependence comes baked in. Insofar as Malabou agrees with this, she agrees that the real question has nothing to do with ‘correlation,’ the intentional agreement of concept and object, but rather with the question of how experience and cognition as they appear to philosophical reflection can be reconciled with the facts of our cognitive ecologies as scientifically reported. The problem, in other words, is the biology of metacognition. To put it into Kantian terms, the cognitive sciences amount to a metacritique of reason, a multibillion dollar colonization of Kant’s traditional domain. Like so much life, metacognition turns out to be a fractionate, radically heuristic affair, ancestrally geared to practical problem-solving. Not only does this imperil Kant’s account of cognition, it signals the disenchantment of the human soul. The fate of the transcendental is a secondary concern at best, one that illustrates rather than isolates the problem. The sciences have overthrown the traditional discourses of every single domain they have colonized. The burning question is why the Continental philosophical discourse on the human soul should prove an exception.

The only ‘argument’ that Malabou makes in this regard, the claim upon which all of her arguments hang, also comes from Kant:

“In the Critique of Pure Reason, when discussing the schema of the triangle, Kant asserts that there are realities that “can never exist anywhere except in thought.” If we share this view, as I do, then the validity of the transcendental is upheld. Yes, there are realities that exist nowhere but in thought.”

So long as we believe in ‘realities of thought,’ Continental philosophy is assured its domain. But are these ‘realities’ what they seem? Remember Hume: “It is remarkable concerning the operations of the mind that, though most intimately present to us, yet, whenever they become the object of reflection, they seem involved in obscurity; nor can the eye readily find those lines and boundaries, which discriminate and distinguish them” (Enquiry Concerning Human Understanding, 7). The information available to traditional speculative reflection is less than ideal. Given this evidential insecurity, how will the tradition cope with the increasing amounts of cognitive scientific information flooding society?

The problem, in other words, is both epistemic and social. Epistemically, the reality of thought need not satisfy our traditional conceptions, which suggests, all things being equal, that it will very likely contradict them. And socially, no matter how one sets about ontologically out-fundamentalizing the sciences, the fact remains that ‘ontologically out-fundamentalizing’ is the very discursive game that is being marginalized—disenchanted.

Regarding the epistemic problem. For all the attention Malabou pays to section 81 of the Third Critique, she overlooks the way Kant begins by remarking on the limits of cognition. The fact is, he’s dumbfounded: “It is beyond our reason’s grasp how this reconciliation of two wholly different kinds of causality is possible: the causality of nature in its universal lawfulness, with [the causality of] an idea that confines nature to a particular form for which nature itself contains no basis whatsoever.” Our cognition of efficacy is divided between what can be sourced in nature and what cannot be sourced, between causes and purposes, and somehow, someway, they conspire to render living systems intelligible. The evidence of this basic fractionation lies plain in experience, but the nature of its origin and activity remain occluded: it belongs to “the being in itself of which we know merely the appearance.”

In one swoop, Kant metacognizes the complexity of cognition (two wholly different forms), the limits of metacognizing that complexity (inscrutable to reflection), and the efficacy of that complexity (enabling cognition of animate things). Thanks to the expansion of the cognitive scientific domain, all three of these insights now possess empirical analogues. As far as complexity is concerned, we know that humans possess a myriad of specialized cognitive systems. Kant’s ‘two kinds of causality’ correlates with two families of cognitive systems observed in infants, the one geared to the inanimate world, mechanical troubleshooting, the other to the animate world, biological troubleshooting. The cognitive pathologies belonging to Williams Syndrome and Autism Spectrum Disorder demonstrate profound cleavages between physical and psychological cognition. The existence of metacognitive limits is also a matter of established empirical fact, operative in any number of phenomena explored by the ecological rationality and cognitive heuristics and biases research programs. In fact, the mere existence of cognitive science, which is invested in discovering those aspects of experience and cognition we are utterly insensitive to, demonstrates the profundity of human medial neglect, our utter blindness to the enabling machinery of cognition as such.

And recent research is also revealing the degree to which humans are hardwired to posit opportunistic efficacies. Given the enormity and complexity of endogenous and exogenous environments, organisms have no hope of sourcing the information constituting their cognitive ecologies. No surprise, neural networks (like the machine learning systems they inspired) are exquisitely adapted to the isolation of systematic correlations—patterns. Neglecting the nature of the systems involved, they focus on correlations between availabilities, isolating those observable precursors allowing the prediction of subsequent, reproductively significant observables such as behaviour. Confusing correlation with causation may be the bane of scientists, but for the rest of us, the reliance on ‘proxies’ often pays real cognitive dividends.
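
That dividend, and its fragility, can be caricatured in a few lines. The sketch below is my own toy illustration (the reliability numbers are invented): a cue that merely correlates with the difference-maker predicts beautifully while the background regularity holds, then collapses to chance the moment the ecology shifts.

```python
# Toy illustration (my own, invented numbers) of proxy-based prediction:
# profitable while the cue-cause correlation holds, worthless once it breaks.
import random

random.seed(0)

def make_data(n, proxy_reliability):
    """Generate (proxy, outcome) pairs; the proxy tracks the true cause
    only as reliably as the background ecology allows."""
    data = []
    for _ in range(n):
        cause = random.random() < 0.5                # the actual difference-maker
        proxy = cause if random.random() < proxy_reliability else not cause
        data.append((proxy, cause))
    return data

stable = make_data(1000, proxy_reliability=0.95)     # ancestral regularity intact
shifted = make_data(1000, proxy_reliability=0.50)    # regularity broken

# 'Learn' the cheapest rule available: predict the outcome from the proxy.
accuracy = lambda data: sum(p == c for p, c in data) / len(data)
print(f"stable ecology:  {accuracy(stable):.2f}")    # ~0.95: the proxy pays
print(f"shifted ecology: {accuracy(shifted):.2f}")   # ~0.50: chance; crash
```

The proxy never ‘knew’ anything about sources; it simply paid while the ecology cooperated.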

Humans are hardwired both to neglect their own cognitive complexity and to fetishize their environments, to impute efficacies serving local, practical cognitive determinations. Stranded in the most complicated system ever encountered, human metacognition cannot but comprise a congeries of source-insensitive systems geared to the adventitious solution of practical problems—like holding one’s tongue, or having second thoughts, or dwelling on the past, and so on. In everyday contexts, it never occurs to us to question the sources of these activities. Given neglect of the actual sources, we intuit spontaneity whenever we retask our metacognitive motley with reporting the source of these or any other cognitive activities.

We have very good empirical reasons to believe that the above is true. So, what do we do with transcendental speculation à la Kant? Do we ignore what cognitive science has learned about the fractionation, limits, and default propensities of human metacognition? Do we assume he was onto something distinct, a second, physically inexplicable order enabling cognition of the empirical in addition to the physically explicable (because empirical) order that we know (thanks to strokes, etc.) enables cognition of the empirical? Or do we assume that Kant was dimly onto something, which, given his ignorance of cognitive science, he construed dogmatically as distinct? Do we recognize the a priori as a fetishization of medial neglect, as a way to make sense of the fractionate, heuristic nature of cognition absent any knowledge of that nature?

The problem with defending the first, transcendental thesis is that the evidence supporting the second empirical hypothesis will simply continue to accumulate. This is where the social problem rears its head, why the kind of domain overlap demonstrated above almost certainly signals the doom of Malabou’s discursive tradition. Continental philosophers need to understand how disenchantment works, how the mere juxtaposition of traditional and scientific claims socially delegitimizes the former. The more cognitive science learns about experience and cognition, the less relevant and less credible traditional philosophical discourses on the nature of experience and cognition will become.

The cognitive scientific metacritique of reason, you could say, reveals the transcendental as an artifact of our immaturity, of an age when we hearkened to the a priori as our speculative authority. Malabou not only believes in this speculative authority, she believes that science itself must answer to it. Rather than understanding the discursive tools of science epigenetically, refined and organized via scientific practice, she understands them presuppositionally, as beholden to this or that (perpetually underdetermined) traditional philosophical interpretation of conditions, hidden implicatures that must be unpacked to assure cognitive legitimacy—implicatures that clearly seem to stand outside ecology, thus requiring more philosophical interpretation to provide cognitive legitimacy. The great irony, of course, is that scientists eschew her brand of presuppositional ‘legitimacy’ to conserve their own legitimacy. Stomping around in semantic puddles is generally a counterproductive way to achieve operational clarity—a priori exercises in conceptual definition are notoriously futile. Science turns on finding answerable questions in questions answered. If gerrymandering definitions geared to local experimental contexts does the trick, then so be it. The philosophical groping and fumbling involved is valuable only so far as it serves this end. Is this problematic? Certainly. Is this a problem speculative ontological interpretation can solve? Not at all.

Something new is needed. Something radical, not in the sense of discursive novelty, but in a way that existentially threatens the tradition—and offends accordingly.

I agree entirely when Malabou writes:

“Clearly, it is of the utmost necessity today to rethink relations between the biological and the transcendental, even if it is to the detriment of the latter. But who’s doing so? And why do continental philosophers reject the neurobiological approach to the problem from the outset?”

This was the revelation I had in 1999, attempting to reconcile fundamental ontology and neuroscience for the final chapter of my dissertation. I felt the selfsame exhaustion, the nagging sense that it was all just a venal game, a discursive ingroup ruse. I turned my back on philosophy, began writing fiction, not realizing I was far from alone in my defection. When I returned, ‘correlation’ had replaced ‘presence’ as the new ‘ontologically problematic presupposition.’ At long last, I thought, Continental philosophy had recognized that intentionality—meaning—was the problem. But rather than turn to cognitive science to “search for the origin of thinking outside of consciousness and will,” the Speculative Realists I encountered (with the exception of thinkers like David Roden) embraced traditional vocabularies. Their break with traditional Kantian philosophy, I realized, did not amount to a break with traditional intentional philosophy. Far from calling attention to the problem, ‘correlation’ merely focused intellectual animus toward an effigy, an institutional emblem, stranding the 21st century Speculative Realists in the very interpretative mire they used to impugn 20th century Continental philosophy. Correlation was a hopeful, but ultimately misleading diagnosis. The problem isn’t that cognitive systems and environments are interdependent, the problem is that this interdependence is conceived intentionally. Think about it. Why do we find the intentional interdependence of cognition and experience so vexing when the ecological interdependence of cognitive systems and environments is simply given in biology? What is it about intentionality?

Be it dogmatically or critically conceived, what we call ‘intentionality’ is a metacognitive artifact of the way source-insensitive modes of cognition, like intentional cognition, systematically defer the question of sources. A transcendental source is a sourceless source—an ‘originary repetition’ admitting an epigenetic gloss—because intentional cognition, whether applied to thought or the world, is source-insensitive cognition. To apply intentional cognition to the question of the nature of intentional cognition, as the tradition does compulsively, is to trip into metacognitive crash space, a point where intuitions, like those Malabou so elegantly tracks in Before Tomorrow, can only confound the question they purport to solve.

Derrida understood, at least as far as his (or perhaps any) intentional vocabulary could take him. He understood that cognition as cognized is a ‘cut-out,’ an amnesiac intermediary, appearing sourceless, fully present, something outside ecology, and as such doomed to be overthrown by ecology. He, more so than Kant, hesitates upon the metacognitive limit, understanding full well the futility of transgressing it. But since he presumed the default application of intentional cognition to the problem of cognition necessary, he presumed the inevitability of tripping into crash space as well, believing that reflection could not but transgress its limits and succumb to the metaphysics of presence. Thus his ‘quasi-transcendentals,’ his own sideways concession to the Kantian quagmire. And thus deconstruction, the crashing of super-ecological claims by adducing what must be neglected—ecology—to maintain the illusion of presence.

And so, you could say the most surprising absence in Malabou’s text is her teacher, who whispers merely from various turns in her discourse.

“No one,” she writes, “has yet thought to ask what continental philosophy might become after this ‘break.’” Not true. I’ve spent years now prospecting the desert of the real, the post-intentional landscape that, if I’m right, humanity is doomed to wander into and evaporate. I too was a Derridean once, so I know a path exists between her understanding and mine. I urge her to set aside the institutional defense mechanisms as I once did: charges of scientism or performative contradiction simply beg the question against the worst-case scenario. I invite her to come see what philosophy and the future look like after the death of transcendence, if only to understand the monstrosity of her discursive other. I challenge her to think post-human thoughts—to understand cognition materially, rather than what traditional authority has made of it. I implore her to see how the combination of science and capital is driving our native cognitive ecologies to extinction on an exponential curve.

And I encourage everyone to ask why, when it comes to the topic of meaning, we insist on believing in happy endings? We evolved to neglect our fundamental ecological nature, to strategically hallucinate spontaneities to better ignore the astronomical complexities beneath. Subreption has always been our mandatory baseline. As the cognitive ecologies underwriting those subreptive functions undergo ever more profound transformations, the more dysfunctional our ancestral baseline will become. With the dawning of AI and enhancement, the abstract problem of meaning has become a civilizational crisis.

Best we prepare for the worst and leave what was human to hope.

Exploding the Manifest and Scientific Images of Man

by rsbakker


This is how one pictures the angel of history. His face is turned toward the past. Where we perceive a chain of events, he sees one single catastrophe which keeps piling wreckage upon wreckage and hurls it in front of his feet. The angel would like to stay, awaken the dead, and make whole what has been smashed. But a storm is blowing from Paradise; it has got caught in his wings with such violence that the angel can no longer close them. The storm irresistibly propels him into the future to which his back is turned, while the pile of debris before him grows skyward. This storm is what we call progress. –Benjamin, Theses on the Philosophy of History


What I would like to do is show how Sellars’ manifest and scientific images of humanity are best understood in terms of shallow cognitive ecologies and deep information environments. Expressed in Sellars’ own terms, you could say the primary problem with his characterization is that it is a manifest, rather than scientific, understanding of the distinction. It generates the problems it does (for example, in Brassier or Dennett) because it inherits the very cognitive limitations it purports to explain. At best, Sellars’ take is too granular, and ultimately too deceptive, to function as much more than a stop-sign when it comes to questions regarding the constitution and interrelation of different human cognitive modes. Far from a way to categorize and escape the conundrums of traditional philosophy, it provides yet one more way to bake them in.


Cognitive Images

Things begin, for Sellars, in the original image, our prehistorical self-understanding. The manifest image consists in the ‘correlational and categorial refinement’ of this self-understanding. And the scientific image consists in everything discovered about man beyond the limits of correlational and categorial refinement (while relying on these refinements all the same). The manifest image, in other words, is an attenuation of the original image, whereas the scientific image is an addition to the manifest image (one that problematizes the manifest image). Importantly, all three are understood as kinds of ‘conceptual frameworks’ (though he sometimes refers to the original image as ‘preconceptual’).

The original framework, Sellars tells us, conceptualizes all objects as ways of being persons—it personalizes its environments. The manifest image, then, can be seen as “the modification of an image in which all the objects are capable of the full range of personal activity” (12). The correlational and categorial refinement consists in ‘pruning’ the degree to which they are personalized. The accumulation of correlational inductions (patterns of appearance) undermined the plausibility of environmental agencies and so drove categorial innovation, creating a nature consisting of ‘truncated persons,’ a world that was habitual as opposed to mechanical. This new image of man, Sellars claims, is “the framework in terms of which man came to be aware of himself as man-in-the-world” (6). As such, the manifest image is the image interrogated by the philosophical tradition, which given the limited correlational and categorial resources available to it, remained blind to the communicative—social—conditions of conceptual frameworks, and so, the manifest image of man. Apprehending this would require the scientific image, the conceptual complex “derived from the fruits of postulational theory construction,” yet still turning on the conceptual resources of the manifest image.

For Sellars, the distinction between the two images turns not so much on what we commonly regard to be ‘scientific’ or not (which is why he thinks the manifest image is scientific in certain respects), but on the primary cognitive strategies utilized. “The contrast I have in mind,” he writes, “is not that between an unscientific conception of man-in-the-world and a scientific one, but between that conception which limits itself to what correlational techniques can tell us about perceptible and introspectable events and that which postulates imperceptible objects and events for the purpose of explaining correlations among perceptibles” (19). This distinction, as it turns out, only captures part of what we typically think of as ‘scientific.’ A great deal of scientific work is correlational, bent on describing patterns in sets of perceptibles as opposed to postulating imperceptibles to explain those sets. This is why he suggests that terming the scientific image the ‘theoretical image’ might prove more accurate, if less rhetorically satisfying. The scientific image is postulational because it posits what isn’t manifest—what wasn’t available to our historical or prehistorical ancestors, namely, knowledge of man as “a complex physical system” (25).

The key to overcoming the antipathy between the two images, Sellars thinks, lies in the indispensability of the communally grounded conceptual framework of the manifest image to both images. The reason we should yield ontological priority to the scientific image derives from the conceptual priority of the manifest image. Their domains need not overlap. “[T]he conceptual framework of persons,” he writes, “is not something that needs to be reconciled with the scientific image, but rather something to be joined to it” (40). To do this, we need to “directly relate the world as conceived by scientific theory to our purposes and make it our world and no longer an alien appendage to the world in which we do our living” (40).

Being in the ‘logical space of reasons,’ or playing the ‘game of giving and asking for reasons,’ requires social competence, which requires sensitivity to norms and purposes. The entities and relations populating Sellars’ normative metaphysics exist only in social contexts, only so far as they discharge pragmatic functions. The reliance of the scientific image on these pragmatic functions renders them indispensable, forcing us to adopt ‘stereoscopic vision,’ to acknowledge the conceptual priority of the manifest even as we yield ontological priority to the scientific.


Cognitive Ecologies

The interactional sum of organisms and their environments constitutes an ecology. A ‘cognitive ecology,’ then, can be understood as the interactional sum of organisms and their environments as it pertains to the selection of behaviours.

A deep information environment is simply the sum of difference-making differences available for possible human cognition. We could, given the proper neurobiology, perceive radio waves, but we don’t. We could, given the proper neurobiology, hear dog whistles, but we don’t. We could, given the proper neurobiology, see paramecia, but we don’t. Of course, we now possess instrumentation allowing us to do all these things, but this just testifies to the way science accesses deep information environments. As finite, our cognitive ecology, though embedded in deep information environments, engages only a select fraction of them. As biologically finite, in other words, human cognitive ecology is insensitive to almost all deep information. When a magician tricks you, for instance, they’re exploiting your neglect-structure, ‘forcing’ your attention toward ephemera while they manipulate behind the scenes.

Given the complexity of biology, cognizing the structure of our cognitive ecology lies beyond the capacities of that very ecology. Human cognitive ecology cannot but neglect the high-dimensional facts of human cognitive ecology. Our intractability imposes inscrutability. This means that human metacognition and sociocognition are radically heuristic, systems adapted to solving systems they otherwise neglect.

Human cognition possesses two basic modes, one source-insensitive, or heuristic, relying on cues to predict behaviour, and one source-sensitive, or mechanical, relying on causal contexts to predict behaviour. The radical economies provided by the former are offset by narrow ranges of applicability and dependence on background regularities. The general applicability of the latter is offset by its cost. Human cognitive ecology can be said to be shallow to the extent it turns on source-insensitive modes of cognition, and deep to the extent it turns on source-sensitive modes. Given the radical intractability of human cognition, we should expect metacognition and sociocognition to be radically shallow, utterly dependent on cues and contexts. Not only are we blind to the enabling dimension of experience and cognition, we are blind to this blindness. We suffer medial neglect.

This provides a parsimonious alternative account of the structure and development of human self-understanding. We began in an age of what might be called ‘medial innocence,’ when our cognitive ecologies were almost exclusively shallow, incorporating causal determinations only to cognize local events. Given their ignorance of nature, our ancestors could not but cognize it via source-insensitive modes. They did not so much ‘personalize’ the world, as Sellars claims, as use source-insensitive modes opportunistically. They understood each other and themselves only so far as they needed to resolve practical issues. They understood argument only so far as they needed to troubleshoot their reports. Aside from these specialized ways of surmounting their intractability, they were utterly ignorant of their nature.

Our ancestral medial innocence began eroding as soon as humanity began gaming various heuristic systems out of school, spoofing their visual and auditory systems, knapping them into cultural inheritances, slowly expanding and multiplying potential problem-ecologies within the constraints of oral culture. Writing, as a cognitive technology, had a tremendous impact on human cognitive ecology. Literacy allowed speech to be visually frozen and carved up for interrogation. The gaming of our heuristics began in earnest, the knapping of countless cognitive tools. As did the questions. Our ancient medial innocence bloomed into a myriad of medial confusions.

Confusions. Not, as Sellars would have it, a manifest image. Sellars calls it ‘manifest’ because it’s correlational, source-insensitive, bound to the information available. The fact that it’s manifest means that it’s available—nothing more. Given medial innocence, that availability was geared to practical ancestral applications. The shallowness of our cognitive ecology was adapted to the specificity of the problems faced by our ancestors. Retasking those shallow resources to solve for their own nature, not surprisingly, generated endless disputation. Combined with the efficiencies provided by coinage and domestication during the ‘axial age,’ literacy did not so much trigger ‘man’s encounter with man,’ as Sellars suggests, as occasion humanity’s encounter with the question of humanity, and the kinds of cognitive illusions secondary to the application of metacognitive and sociocognitive heuristics to the theoretical question of experience and cognition.

The birth of philosophy is the birth of discursive crash space. We have no problem reflecting on thoughts or experiences, but as soon as we reflect on the nature of thoughts and experiences, we find ourselves stymied, piling guesses upon guesses. Despite our genius for metacognitive innovation, what’s manifest in our shallow cognitive ecologies is woefully incapable of solving for the nature of human cognitive ecology. Precisely because reflecting on the nature of thoughts and experiences is a metacognitive innovation, something without evolutionary precedent, we neglect the insufficiency of the resources available. Artifacts of the lack of information are systematically mistaken for positive features. The systematicity of these crashes licenses the intuition that some common structure lurks ‘beneath’ the disputation—that for all their disagreements, the disputants are ‘onto something.’ The neglect-structure belonging to human metacognitive ecology gradually forms the ontological canon of the ‘first-person’ (see “On Alien Philosophy” for a more full-blooded account). And so, we persisted, generation after generation, insisting on the sufficiency of those resources. Since sociocognitive terms cue sociocognitive modes of cognition, the application of these modes to the theoretical problem of human experience and cognition struck us as intuitive. Since the specialization of these modes renders them incompatible with source-sensitive modes, some, like Wittgenstein and Sellars, went so far as to insist on the exclusive applicability of those resources to the problem of human experience and cognition.

Despite the profundity of metacognitive traps like these, the development of our source-sensitive cognitive modes continued reckoning more and more of our deep environment. At first this process was informal, but as time passed and the optimal form and application of these modes resolved from the folk clutter, we began cognizing more and more of the world in deep environmental terms. The collective behavioural nexuses of science took shape. Time and again, traditions funded by source-insensitive speculation on the nature of some domain found themselves outcompeted and ultimately displaced. The world was ‘disenchanted’; more and more of the grand machinery of the natural universe was revealed. But as powerful as these individual and collective source-sensitive modes of cognition proved, the complexity of human cognitive ecology ensured that we would, for the interim, remain beyond their reach. Though an artifactual consequence of shallow ecological neglect-structures, the ‘first-person’ retained cognitive legitimacy. Despite the paradoxes, the conundrums, the interminable disputation, the immediacy of our faulty metacognitive intuitions convinced us that we alone were exempt, that we were the lone exception in the desert landscape of the real. So long as science lacked the resources to reveal the deep environmental facts of our nature, we could continue rationalizing our conceit.

 

Ecology versus Image

As should be clear, Sellars’ characterization of the images of man falls squarely within this tradition of rationalization, the attempt to salvage our exceptionalism. One of the stranger claims Sellars makes in this celebrated essay involves the scientific status of his own discursive exposition of the images and their interrelation. The problem, he writes, is that the social sources of the manifest image are not themselves manifest. As a result, the manifest image lacks the resources to explain its own structure and dynamics: “It is in the scientific image of man in the world that we begin to see the main outlines of the way in which man came to have an image of himself-in-the-world” (17). Understanding our self-understanding requires reaching beyond the manifest and postulating the social axis of human conceptuality, something, he implies, that only becomes available when we can see group phenomena as ‘evolutionary developments.’

Remember Sellars’ caveats regarding ‘correlational science’ and the sense in which the manifest image can be construed as scientific? (7) Here we see how that leaky demarcation of the manifest (as correlational) and the scientific (as theoretical) serves his downstream conflation of his own manifest discourse with scientific discourse. If science is correlational, as he admits, then philosophy is also postulational—as he well knows. But if each image helps itself to the cognitive modes belonging to the other, then Sellars’ assertion that the distinction lies between a conception limited to ‘correlational techniques’ and one committed to the ‘postulation of imperceptibles’ (19) is either mistaken or incomplete. Traditional philosophy is nothing if not theoretical, which is to say, in the business of postulating ontologies.

Suppressing this fact allows him to pose his own traditional philosophical posits as (somehow) belonging to the scientific image of man-in-the-world. What are ‘spaces of reasons’ or ‘conceptual frameworks’ if not postulates used to explain the manifest phenomena of cognition? But then how do these posits contribute to the image of man as a ‘complex physical system’? Sellars acknowledges the difficulty here, at least “as long as the ultimate constituents of the scientific image are particles forming ever more complex systems of particles” (37). This is what ultimately motivates the structure of his ‘stereoscopic view,’ where ontological precedence is conceded to the scientific image, while cognition itself remains safely in the humanistic hands of the manifest image…

Which is to say, lost to crash space.

Are human neuroheuristic systems welded into ‘conceptual frameworks’ forming an ‘irreducible’ and ‘autonomous’ inferential regime? Obviously not. But we can now see why, given the confounds secondary to metacognitive neglect, they might report as such in philosophical reflection. Our ancestors bickered. In other words, our capacity to collectively resolve communicative and behavioural discrepancies belongs to our medial innocence: intentional idioms antedate our attempts to theoretically understand intentionality. Uttering them, not surprisingly, activates intentional cognitive systems, because, ancestrally speaking, intentional idioms always belonged to problem-ecologies requiring these systems to solve. It was all but inevitable that questioning the nature of intentional idioms would trigger the theoretical application of intentional cognition. Given the degree to which intentional cognition turns on neglect, our millennial inability to collectively make sense of ourselves, our medial confusion, was all but inevitable as well. Intentional cognition cannot explain the nature of anything, insofar as natures are general and the problem-ecology of intentional cognition is specific. This is why, far from decisively resolving our cognitive straits, Sellars’ normative metaphysics merely complicates them, using the same overdetermined posits to make new(ish) guesses that can only serve as grist for more disputation.

But if his approach is ultimately hopeless, how is he able to track the development in human self-understanding at all? For one, he understands the centrality of behaviour. But rather than understand behaviour naturalistically, in terms of systems of dispositions and regularities, he understands it intentionally, via modes adapted to neglect physical super-complexities. Guesses regarding hidden systems of physically inexplicable efficacies—’conceptual frameworks’—are offered as basic explanations of human behaviour construed as ‘action.’

He also understands that distinct cognitive modes are at play. But rather than see this distinction biologically, as the difference between complex physical systems, he conceives it conceptually, which is to say, via source-insensitive systems incapable of charting, let alone explaining, our cognitive complexity. Thus his confounding reliance on what might be called manifest postulation: deep environmental explanation via shallow ecological (intentional) posits.

And he understands the centrality of information availability. But rather than see this availability biologically, as the play of physically interdependent capacities and resources, he conceives it, once again, conceptually. All differences make differences somehow, but information consists only of those differences selected (neurally or evolutionarily) via the production of prior behaviours—differences prone to make select systematic differences, which is to say, to feed the function of various complex physical systems. Medial neglect assures that the general interdependence of information and cognitive system appears nowhere in experience or cognition. Once humanity began retasking its metacognitive capacities, it was bound to hallucinate countless ‘givens.’ Sellars is at pains to stress the medial (enabling) dimension of experience and cognition, the inability of manifest deliverances to account for the form of thought (16). Suffering medial neglect, cued to misapply heuristics belonging to intentional cognition, he posits ‘conceptual frameworks’ as a means of accommodating the general interdependence of information and cognitive system. The naturalistic inscrutability of conceptual frameworks renders them local cognitive prime movers (after all, source-insensitive posits can only come first), assuring the ‘conceptual priority’ of the manifest image.

The issue of information availability, for him, is always conceptual, which is to say, always heuristically conditioned, which is to say, always bound to systematically distort what is the case. Where the enabling dimension of cognition belongs to deep environments on a cognitive ecological account, it belongs to communities on Sellars’ inferentialist account. As a result, he has no clear way of seeing how the increasingly technologically mediated accumulation of ancestrally unavailable information drives the development of human self-understanding.

The contrast between shallow (source-insensitive) cognitive ecologies and deep information environments opens the question of the development of human self-understanding to the high-dimensional messiness of life. The long migratory path from the medial innocence of our preliterate past to the medial chaos of our ongoing cognitive technological revolution has nothing to do with the “projection of man-in-the-world on the human understanding” (5) given the development of ‘conceptual frameworks.’ It has to do with blind medial adaptation to transforming cognitive ecologies. What complicates this adaptation, what delivers us from medial innocence to chaos, is the heuristic nature of source-insensitive cognitive modes. Their specificity, their inscrutability, not to mention their hypersensitivity (the ease with which problems outside their competence cue their application), all but doomed us to perpetual discursive disarray.

Images. Games. Conceptual frameworks. None of these shallow ecological posits is required to make sense of our path from ancestral ignorance to present conundrum. And we must discard them if we hope to finally turn and face our future, to gaze upon the universe with the universe’s own eyes.