Three Pound Brain

No bells, just whistling in the dark…

Flies, Frogs, and Fishhooks*

by rsbakker

[Revisited this the other day after reading Gallagher’s account of lizard catching in Enactivist Interventions (recommended to me by Dirk a ways back) and it struck me as worth reposting. But where Gallagher thinks the neglect characteristic of lizard catching implies only the inapplicability of neurobiology to the question of free-will, I think that neglect can be used to resolve a great number of mysteries regarding intentionality and cognition. I hope he finds this piece.]

 

So, me and my buddies occasionally went frog hunting when we were kids. We’d knot strings on fishhooks, swing the lines over the pond’s edge, and bam! frogs would strike at them. Up, up they were hauled, nude for being amphibian, hoots and hollers measuring their relative size. Then they were dumped in a bucket.

We were just kids. We knew nothing about biology or evolution, let alone cognition. Despite this ignorance, we had no difficulty whatsoever explaining why it was so easy to catch the frogs: they were too stupid to tell the difference between fishhooks and flies.

Contrast this with the biological view I have available now. Given the capacity of Anuran visual cognition and the information sampled, frogs exhibit systematic insensitivities to the difference between fishhooks and flies. Anuran visual cognition not only evolved to catch flies, it evolved to catch flies as cheaply as possible. Without fishhooks to filter the less fishhook-sensitive from the more fishhook-sensitive, frogs had no way of evolving the capacity to distinguish flies from fishhooks.

Our old childhood theory is pretty clearly a normative one, explaining the frogs’ failure in terms of what they ought to do (the dumb buggers). The frogs were mistaking fishhooks for flies. But if you look closely, you’ll notice how the latter theory communicates a similar normative component only in biological guise. Adducing evolutionary history pretty clearly allows us to say the proper function of Anuran cognition is to catch flies.

Ruth Millikan famously used this intentional crack in the empirical explanatory door to develop her influential version of teleosemantics, the attempt to derive semantic normativity from the biological normativity evident in proper functions. Eyes are for seeing, tongues for talking or catching flies; everything has been evolutionarily filtered to accomplish ends. So long as biological phenomena possess functions, it seems obvious functions are objectively real. So far as functions entail ‘satisfaction conditions,’ we can argue that normativity is objectively real. Given this anchor, the trick then becomes one of explaining normativity more generally.

The controversy caused by Language, Thought, and Other Biological Categories was immediate. But for all the principled problems that have since beleaguered teleosemantic approaches, the real problem is that they remain as underdetermined as the day they were born. Debates, rather than striking out in various empirical directions, remain perpetually mired in ‘mere philosophy.’ After decades of pursuit, the naturalization of intentionality project, Uriah Kriegel notes, “bears all the hallmarks of a degenerating research program” (Sources of Normativity, 5).

Now the easy way to explain this failure is to point out that finding, as Millikan does, right-wrong talk buried in the heart of biological explanation does not amount to finding right and wrong buried in the heart of biology. It seems far less extravagant to suppose ‘proper function’ provides us with a short cut, a way to communicate/troubleshoot this or that actionable upshot of Anuran evolutionary history absent any knowledge of that history.

Recall my boyhood theory that frogs were simply too stupid to distinguish flies from fishhooks. Absent all knowledge of evolution and biomechanics, my friends and I found a way to communicate something lethal regarding frogs. We knew what frog eyes and frog tongues and frog brains and so on were for. Just like that. The theory possessed a rather narrow range of application, to be sure, but it was nothing if not cheap, and potentially invaluable if one were, say, starving. Anuran physiology, ethology, and evolutionary history simply did not exist for us, and yet we were able to pluck the unfortunate amphibians from the pond at will. As naïve children, we lived in a shallow information environment, one absent the great bulk of deep information provided by the sciences. And as far as frog catching was concerned, this made no difference whatsoever, simply because we were the evolutionary products of numberless such environments. Like fishhooks with frogs, theories of evolution had no impact on the human genome. Animal behavior and the communication of animal behavior, on the other hand, possessed a tremendous impact—they were the flies.

Which brings us back to the easy answer posed above, the idea that teleosemantics fails for confusing a cognitive short-cut for a natural phenomenon. Absent any way of cognizing our deep information environments, our ancestors evolved countless ways to solve various, specific problems absent such cognition. Rather than track all the regularities engulfing us, we take them for granted—just like a frog.

The easy answer, in other words, is to assume that theoretical applications of normative subsystems are themselves ecological (as is this very instant of cognition). After all, my childhood theory was nothing if not heuristic, which is to say, geared to the solution of complex physical systems absent complex physical knowledge of them. Terms like ‘about’ or ‘for,’ you could say, belong to systems dedicated to solving systems absent biomechanical cognition.

Which is why kids can use them.

Small wonder then, that attempts to naturalize ‘aboutness’ or ‘forness’—or any other apparent intentional phenomena—cause the theoretical fits they do. Such attempts amount to human versions of confusing flies for fishhooks! They are shallow information terms geared to the solution of shallow information problems. They ‘solve’—filter behaviors via feedback—by playing on otherwise neglected regularities in our deep environments, relying on causal correlations to the systems requiring solution, rather than cognizing those systems in physical terms. That is their naturalization—their deep information story.

‘Function,’ on the other hand, is a shallow information tool geared to the solution of deep information problems. What makes a bit of the world specifically ‘functional’ is its relation to our capacity to cognize consequences in a source-neglecting yet source-compatible way. As my childhood example shows, functions can be known independent of biology. The constitutive story, like the developmental one, can be filled in afterward. Functional cognition lets us neglect an astronomical number of biological details. To say what a mechanism is for is to know what a mechanism will do without saying what makes a mechanism tick. But unlike intentional cognition more generally, functional cognition remains entirely compatible with causality. This potent combination of high-dimensional compatibility and neglect is what renders it invaluable, providing the degrees of cognitive freedom required to tackle complexities across scales.

The intuition underwriting teleosemantics hits upon what is in fact a crucial crossroads between cognitive systems, where the amnesiac power of should facilitates, rather than circumvents, causal cognition. But rather than interrogate the prospect of theoretically retasking a child’s explanatory tool, Millikan, like everyone else, presumes felicity, that intuitions secondary to such retasking are genuinely cognitive. Because they neglect the neglect-structure of their inquiry, they flatter cunning children with objectivity, so sparing their own (coincidentally) perpetually underdetermined intuitions. Time and again they apply systems selected for brushed-sun afternoons along the pond’s edge to the theoretical problem of their own nature. The lures dangle in their reflection. They strike at fishhook after fishhook, and find themselves hauled skyward, manhandled by shadows before being dropped into buckets on the shore.

*Originally posted January 23rd, 2018

On the Death of Meaning

by rsbakker

My copy of New Directions In Philosophy and Literature arrived yesterday…


The anthology features an introduction by Claire Colebrook, as well as papers by Graham Harman, Graham Priest, Charlie Blake, and more. A prepub version of my contribution, “On the Death of Meaning,” can be found here.

Exploding the Manifest and Scientific Images of Man*

by rsbakker

 

This is how one pictures the angel of history. His face is turned toward the past. Where we perceive a chain of events, he sees one single catastrophe which keeps piling wreckage upon wreckage and hurls it in front of his feet. The angel would like to stay, awaken the dead, and make whole what has been smashed. But a storm is blowing from Paradise; it has got caught in his wings with such violence that the angel can no longer close them. The storm irresistibly propels him into the future to which his back is turned, while the pile of debris before him grows skyward. This storm is what we call progress. –Benjamin, Theses on the Philosophy of History

 

What I would like to do is show how Sellars’ manifest and scientific images of humanity are best understood in terms of shallow cognitive ecologies and deep information environments. Expressed in Sellars’ own terms, you could say the primary problem with his characterization is that it is a manifest, rather than scientific, understanding of the distinction. It generates the problems it does (for example, in Brassier or Dennett) because it inherits the very cognitive limitations it purports to explain. At best, Sellars’ take is too granular, and ultimately too deceptive to function as much more than a stop-sign when it comes to questions regarding the constitution and interrelation of different human cognitive modes. Far from a way to categorize and escape the conundrums of traditional philosophy, it provides yet one more way to bake them in.

 

Cognitive Images

Things begin, for Sellars, in the original image, our prehistorical self-understanding. The manifest image consists in the ‘correlational and categorial refinement’ of this self-understanding. And the scientific image consists in everything discovered about man beyond the limits of correlational and categorial refinement (while relying on these refinements all the same). The manifest image, in other words, is an attenuation of the original image, whereas the scientific image is an addition to the manifest image (that problematizes the manifest image). Importantly, all three are understood as kinds of ‘conceptual frameworks’ (though he sometimes refers to the original image as ‘preconceptual’).

The original framework, Sellars tells us, conceptualizes all objects as ways of being persons—it personalizes its environments. The manifest image, then, can be seen as “the modification of an image in which all the objects are capable of the full range of personal activity” (12). The correlational and categorial refinement consists in ‘pruning’ the degree to which they are personalized. The accumulation of correlational inductions (patterns of appearance) undermined the plausibility of environmental agencies and so drove categorial innovation, creating a nature consisting of ‘truncated persons,’ a world that was habitual as opposed to mechanical. This new image of man, Sellars claims, is “the framework in terms of which man came to be aware of himself as man-in-the-world” (6). As such, the manifest image is the image interrogated by the philosophical tradition, which given the limited correlational and categorial resources available to it, remained blind to the communicative—social—conditions of conceptual frameworks, and so, the manifest image of man. Apprehending this would require the scientific image, the conceptual complex “derived from the fruits of postulational theory construction,” yet still turning on the conceptual resources of the manifest image.

For Sellars, the distinction between the two images turns not so much on what we commonly regard to be ‘scientific’ or not (which is why he thinks the manifest image is scientific in certain respects), but on the primary cognitive strategies utilized. “The contrast I have in mind,” he writes, “is not that between an unscientific conception of man-in-the-world and a scientific one, but between that conception which limits itself to what correlational techniques can tell us about perceptible and introspectable events and that which postulates imperceptible objects and events for the purpose of explaining correlations among perceptibles” (19). This distinction, as it turns out, only captures part of what we typically think of as ‘scientific.’ A great deal of scientific work is correlational, bent on describing patterns in sets of perceptibles as opposed to postulating imperceptibles to explain those sets. This is why he suggests that terming the scientific image the ‘theoretical image’ might prove more accurate, if less rhetorically satisfying. The scientific image is postulational because it posits what isn’t manifest—what wasn’t available to our historical or prehistorical ancestors, namely, knowledge of man as “a complex physical system” (25).

The key to overcoming the antipathy between the two images, Sellars thinks, lies in the indispensability of the communally grounded conceptual framework of the manifest image to both images. The reason we should yield ontological priority to the scientific image derives from the conceptual priority of the manifest image. Their domains need not overlap. “[T]he conceptual framework of persons,” he writes, “is not something that needs to be reconciled with the scientific image, but rather something to be joined to it” (40). To do this, we need to “directly relate the world as conceived by scientific theory to our purposes and make it our world and no longer an alien appendage to the world in which we do our living” (40).

Being in the ‘logical space of reasons,’ or playing the ‘game of giving and asking for reasons,’ requires social competence, which requires sensitivity to norms and purposes. The entities and relations populating Sellars’ normative metaphysics exist only in social contexts, only so far as they discharge pragmatic functions. The reliance of the scientific image on these pragmatic functions renders them indispensable, forcing us to adopt ‘stereoscopic vision,’ to acknowledge the conceptual priority of the manifest even as we yield ontological priority to the scientific.

 

Cognitive Ecologies

The interactional sum of organisms and their environments constitutes an ecology. A ‘cognitive ecology,’ then, can be understood as the interactional sum of organisms and their environments as it pertains to the selection of behaviours.

A deep information environment is simply the sum of difference-making differences available for possible human cognition. We could, given the proper neurobiology, perceive radio waves, but we don’t. We could, given the proper neurobiology, hear dog whistles, but we don’t. We could, given the proper neurobiology, see paramecia, but we don’t. Of course, we now possess instrumentation allowing us to do all these things, but this just testifies to the way science accesses deep information environments. As finite, our cognitive ecology, though embedded in deep information environments, engages only a select fraction of them. As biologically finite, in other words, human cognitive ecology is insensitive to almost all deep information. When a magician tricks you, for instance, they’re exploiting your neglect-structure, ‘forcing’ your attention toward ephemera while they manipulate behind the scenes.

Given the complexity of biology, the structure of our cognitive ecology lies outside the capacity of our cognitive ecology. Human cognitive ecology cannot but neglect the high dimensional facts of human cognitive ecology. Our intractability imposes inscrutability. This means that human metacognition and sociocognition are radically heuristic, systems adapted to solving systems they otherwise neglect.

Human cognition possesses two basic modes, one that is source-insensitive, or heuristic, relying on cues to predict behaviour, and one that is source-sensitive, or mechanical, relying on causal contexts to predict behaviour. The radical economies provided by the former are offset by narrow ranges of applicability and dependence on background regularities. The general applicability of the latter is offset by its cost. Human cognitive ecology can be said to be shallow to the extent it turns on source-insensitive modes of cognition, and deep to the extent it turns on source-sensitive modes. Given the radical intractability of human cognition, we should expect metacognition and sociocognition to be radically shallow, utterly dependent on cues and contexts. Not only are we blind to the enabling dimension of experience and cognition, we are blind to this blindness. We suffer medial neglect.

This provides a parsimonious alternative for understanding the structure and development of human self-understanding. We began in an age of what might be called ‘medial innocence,’ when our cognitive ecologies were almost exclusively shallow, incorporating causal determinations only to cognize local events. Given their ignorance of nature, our ancestors could not but cognize it via source-insensitive modes. They did not so much ‘personalize’ the world, as Sellars claims, as use source-insensitive modes opportunistically. They understood each other and themselves as far as they needed to resolve practical issues. They understood argument as far as they needed to troubleshoot their reports. Aside from these specialized ways of surmounting their intractability, they were utterly ignorant of their nature.

Our ancestral medial innocence began eroding as soon as humanity began gaming various heuristic systems out of school, spoofing their visual and auditory systems, knapping them into cultural inheritances, slowly expanding and multiplying potential problem-ecologies within the constraints of oral culture. Writing, as a cognitive technology, had a tremendous impact on human cognitive ecology. Literacy allowed speech to be visually frozen and carved up for interrogation. The gaming of our heuristics began in earnest, the knapping of countless cognitive tools. As did the questions. Our ancient medial innocence bloomed into a myriad of medial confusions.

Confusions. Not, as Sellars would have it, a manifest image. Sellars calls it ‘manifest’ because it’s correlational, source-insensitive, bound to the information available. The fact that it’s manifest means that it’s available—nothing more. Given medial innocence, that availability was geared to practical ancestral applications. The shallowness of our cognitive ecology was adapted to the specificity of the problems faced by our ancestors. Retasking those shallow resources to solve for their own nature, not surprisingly, generated endless disputation. Combined with the efficiencies provided by coinage and domestication during the ‘axial age,’ literacy did not so much trigger ‘man’s encounter with man,’ as Sellars suggests, as occasion humanity’s encounter with the question of humanity, and the kinds of cognitive illusions secondary to the application of metacognitive and sociocognitive heuristics to the theoretical question of experience and cognition.

The birth of philosophy is the birth of discursive crash space. We have no problem reflecting on thoughts or experiences, but as soon as we reflect on the nature of thoughts and experiences, we find ourselves stymied, piling guesses upon guesses. Despite our genius for metacognitive innovation, what’s manifest in our shallow cognitive ecologies is woefully incapable of solving for the nature of human cognitive ecology. Precisely because reflecting on the nature of thoughts and experiences is a metacognitive innovation, something without evolutionary precedent, we neglect the insufficiency of the resources available. Artifacts of the lack of information are systematically mistaken for positive features. The systematicity of these crashes licenses the intuition that some common structure lurks ‘beneath’ the disputation—that for all their disagreements, the disputants are ‘onto something.’ The neglect-structure belonging to human metacognitive ecology gradually forms the ontological canon of the ‘first-person’ (see “On Alien Philosophy” for a more full-blooded account). And so, we persisted, generation after generation, insisting on the sufficiency of those resources. Since sociocognitive terms cue sociocognitive modes of cognition, the application of these modes to the theoretical problem of human experience and cognition struck us as intuitive. Since the specialization of these modes renders them incompatible with source-sensitive modes, some, like Wittgenstein and Sellars, went so far as to insist on the exclusive applicability of those resources to the problem of human experience and cognition.

Despite the profundity of metacognitive traps like these, the development of our source-sensitive cognitive modes continued reckoning more and more of our deep environment. At first this process was informal, but as time passed and the optimal form and application of these modes resolved from the folk clutter, we began cognizing more and more of the world in deep environmental terms. The collective behavioural nexuses of science took shape. Time and again, traditions funded by source-insensitive speculation on the nature of some domain found themselves outcompeted and ultimately displaced. The world was ‘disenchanted’; more and more of the grand machinery of the natural universe was revealed. But as powerful as these individual and collective source-sensitive modes of cognition proved, the complexity of human cognitive ecology ensured that we would, for the interim, remain beyond their reach. Though an artifactual consequence of shallow ecological neglect-structures, the ‘first-person’ retained cognitive legitimacy. Despite the paradoxes, the conundrums, the interminable disputation, the immediacy of our faulty metacognitive intuitions convinced us that we alone were exempt, that we were the lone exception in the desert landscape of the real. So long as science lacked the resources to reveal the deep environmental facts of our nature, we could continue rationalizing our conceit.

 

Ecology versus Image

As should be clear, Sellars’ characterization of the images of man falls squarely within this tradition of rationalization, the attempt to explain away our exceptionalism. One of the stranger claims Sellars makes in this celebrated essay involves the scientific status of his own discursive exposition of the images and their interrelation. The problem, he writes, is that the social sources of the manifest image are not themselves manifest. As a result, the manifest image lacks the resources to explain its own structure and dynamics: “It is in the scientific image of man in the world that we begin to see the main outlines of the way in which man came to have an image of himself-in-the-world” (17). Understanding our self-understanding requires reaching beyond the manifest and postulating the social axis of human conceptuality, something, he implies, that only becomes available when we can see group phenomena as ‘evolutionary developments.’

Remember Sellars’ caveats regarding ‘correlational science’ and the sense in which the manifest image can be construed as scientific? (7) Here, we see how that leaky demarcation of the manifest (as correlational) and the scientific (as theoretical) serves his downstream equivocation of his manifest discourse with scientific discourse. If science is correlational, as he admits, then philosophy is also postulational—as he well knows. But if each image helps itself to the cognitive modes belonging to the other, then Sellars’ assertion that the distinction lies between a conception limited to ‘correlational techniques’ and one committed to the ‘postulation of imperceptibles’ (19) is either mistaken or incomplete. Traditional philosophy is nothing if not theoretical, which is to say, in the business of postulating ontologies.

Suppressing this fact allows him to pose his own traditional philosophical posits as (somehow) belonging to the scientific image of man-in-the-world. What are ‘spaces of reasons’ or ‘conceptual frameworks’ if not postulates used to explain the manifest phenomena of cognition? But then how do these posits contribute to the image of man as a ‘complex physical system’? Sellars understands the difficulty here “as long as the ultimate constituents of the scientific image are particles forming ever more complex systems of particles” (37). This is what ultimately motivates the structure of his ‘stereoscopic view,’ where ontological precedence is conceded to the scientific image, while cognition itself remains safely in the humanistic hands of the manifest image…

Which is to say, lost to crash space.

Are human neuroheuristic systems welded into ‘conceptual frameworks’ forming an ‘irreducible’ and ‘autonomous’ inferential regime? Obviously not. But we can now see why, given the confounds secondary to metacognitive neglect, they might report as such in philosophical reflection. Our ancestors bickered. In other words, our capacity to collectively resolve communicative and behavioural discrepancies belongs to our medial innocence: intentional idioms antedate our attempts to theoretically understand intentionality. Uttering them, not surprisingly, activates intentional cognitive systems, because, ancestrally speaking, intentional idioms always belonged to problem-ecologies requiring these systems to solve. It was all but inevitable that questioning the nature of intentional idioms would trigger the theoretical application of intentional cognition. Given the degree to which intentional cognition turns on neglect, our millennial inability to collectively make sense of ourselves, medial confusion, was all but inevitable as well. Intentional cognition cannot explain the nature of anything, insofar as natures are general, and the problem ecology of intentional cognition is specific. This is why, far from decisively resolving our cognitive straits, Sellars’ normative metaphysics merely complicates them, using the same overdetermined posits to make new(ish) guesses that can only serve as grist for more disputation.

But if his approach is ultimately hopeless, how is he able to track the development in human self-understanding at all? For one, he understands the centrality of behaviour. But rather than understand behaviour naturalistically, in terms of systems of dispositions and regularities, he understands it intentionally, via modes adapted to neglect physical super-complexities. Guesses regarding hidden systems of physically inexplicable efficacies—’conceptual frameworks’—are offered as basic explanations of human behaviour construed as ‘action.’

He also understands that distinct cognitive modes are at play. But rather than see this distinction biologically, as the difference between complex physical systems, he conceives it conceptually, which is to say, via source-insensitive systems incapable of charting, let alone explaining our cognitive complexity. Thus, his confounding reliance on what might be called manifest postulation, deep environmental explanation via shallow ecological (intentional) posits.

And he understands the centrality of information availability. But rather than see this availability biologically, as the play of physically interdependent capacities and resources, he conceives it, once again, conceptually. All differences make differences somehow. Information consists of differences selected (neurally or evolutionarily) by the production of prior behaviours. Information consists in those differences prone to make select systematic differences, which is to say, feed the function of various complex physical systems. Medial neglect assures that the general interdependence of information and cognitive system appears nowhere in experience or cognition. Once humanity began retasking its metacognitive capacities, it was bound to hallucinate a countless array of ‘givens.’ Sellars is at pains to stress the medial (enabling) dimension of experience and cognition, the inability of manifest deliverances to account for the form of thought (16). Suffering medial neglect, cued to misapply heuristics belonging to intentional cognition, he posits ‘conceptual frameworks’ as a means of accommodating the general interdependence of information and cognitive system. The naturalistic inscrutability of conceptual frameworks renders them local cognitive prime movers (after all, source-insensitive posits can only come first), assuring the ‘conceptual priority’ of the manifest image.

The issue of information availability, for him, is always conceptual, which is to say, always heuristically conditioned, which is to say, always bound to systematically distort what is the case. Where the enabling dimension of cognition belongs to the deep environments on a cognitive ecological account, it belongs to communities on Sellars’ inferentialist account. As a result, he has no clear way of seeing how the increasingly technologically mediated accumulation of ancestrally unavailable information drives the development of human self-understanding.

The contrast between shallow (source-insensitive) cognitive ecologies and deep information environments opens the question of the development of human self-understanding to the high-dimensional messiness of life. The long migratory path from the medial innocence of our preliterate past to the medial chaos of our ongoing cognitive technological revolution has nothing to do with the “projection of man-in-the-world on the human understanding” (5) given the development of ‘conceptual frameworks.’ It has to do with blind medial adaptation to transforming cognitive ecologies. What complicates this adaptation, what delivers us from medial innocence to chaos, is the heuristic nature of source-insensitive cognitive modes. Their specificity, their inscrutability, not to mention their hypersensitivity (the ease with which problems outside their ability cue their application) all but doomed us to perpetual, discursive disarray.

Images. Games. Conceptual frameworks. None of these shallow ecological posits are required to make sense of our path from ancestral ignorance to present conundrum. And we must discard them, if we hope to finally turn and face our future, gaze upon the universe with the universe’s own eyes.

 

*Originally posted, April 2nd, 2018.

Postcards from Planet Analogue

by rsbakker


So, I’m slowly emerging from my analogue cocoon. Imagine no internet interaction for almost a year… In quick succession, I turned 50, concluded my 33-year narrative obsession with the publication of The Unholy Consult, and achieved my 20-year theoretical goal with the publication of “On Alien Philosophy.” On the down side, my arthritis had worsened to the point where mowing the lawn became something I could only accomplish on ‘good days’—where taking four ibuprofens at a time was the rule, not the exception.

Change was upon me, whether I liked it or not. Only the form was in question.

At first, I started working on The End of Meaning, a non-fiction book attempting to sum the abstruse matters we’ve covered here in a manner that would be generally accessible. But my house is over 130 years old, so I also had a long list of renovation projects I wanted to complete. My arthritis lent a ‘now or never’ urgency to these projects—so I forced myself to persist despite the pain and my lifelong aversion to renovations. I grew up encircled by gutted walls. I’ve demolished. I’ve roofed. I’ve framed. I’ve spent entire afternoons straightening bent nails!

I was convinced that my appetite for construction would quickly peter out, and that my hunger to write would consume all—the way it always has. I replaced my rear screen door with a gorgeous glass one I got on clearance. Since parts were missing, I was forced to cut and hammer an old eavestrough nail into a spindle. So, there I was, pounding nails once again! The thing is, my youthful alienation was nowhere to be found. The feeling of accomplishment I got installing that door was nothing short of ridiculous.


Next on the list was repairing the roof of my 130-year-old barn. Certainly, that would send me scampering back to the computer screen!

No such luck. The job sucked ass, to be sure, but I felt… invigorated, I guess. Renewed. Taking four ibuprofens had become the exception once again.

I began rethinking things. All the time I’ve spent pondering ancestral neglect structures had made me nostalgic for the analogue cognitive ecologies of my youth. But were they so idyllic as I remembered?

So, every morning after delivering my wife to work and my daughter to school I set to work rebuilding my old barn from the inside out. I accessed the web only via my phone, and then only to do those things I could do in the analogue days: buy books, research how-to, check the news and weather. I neglected everything else—to my professional and interpersonal detriment, I’m sure! There’s no way to sort the effects of physical labour from the effects of an analogue neglect structure, I know, but I’ll be damned if they didn’t seem to be of a piece. Working with your hands means working with brute matter. After a lifetime spent sculpting smoke, continually arguing the reality of my creations, the determinacy and permanence of my work, to say nothing of the immediate understanding it evoked in others, were blessings indeed. Nothing need be questioned. Nothing need be defended. For once, it was what it fucking was.


Matter has no voice. The tools we evolved to manage it run as deep as life itself, whereas the tools we evolved to manage one another only run as deep as we do. And man-o-man, does it show.


Now, I have a swank office in the loft of an antique barn. More importantly, I’m down to one or two ibuprofen a day—if I remember to take them at all. I feel ten years younger.

So, forgive me my absence, or my awkwardness crawling back into my old digital cockpit. Sometimes you need to go missing for a while, lest you go missing for good.

 

 

Enlightenment How? Pinker’s Tutelary Natures*

by rsbakker

 

The fate of civilization, Steven Pinker thinks, hangs upon our commitment to enlightenment values. Enlightenment Now: The Case for Reason, Science, Humanism, and Progress constitutes his attempt to shore up those commitments in a culture grown antagonistic to them. This is a great book, well worth the read for the examples and quotations Pinker endlessly adduces, but even though I found myself nodding far more often than not, one glaring fact continually leaks through: Enlightenment Now is a book about a process, namely ‘progress,’ that as yet remains mired in ‘tutelary natures.’ As Kevin Williamson puts it in the National Review, Pinker “leaps, without warrant, from physical science to metaphysical certitude.”

Where is his naturalization of meaning? Or morality? Or cognition—especially cognition! How does one assess the cognitive revolution that is the Enlightenment short of understanding the nature of cognition? How does one prognosticate something one does not scientifically understand?

At one point he offers that “[t]he principles of information, computation, and control bridge the chasm between the physical world of cause and effect and the mental world of knowledge, intelligence, and purpose” (22). Granted, he’s a psychologist: operationalizations of information, computation, and control are his empirical bread and butter. But operationalizing intentional concepts in experimental contexts is a far cry from naturalizing intentional concepts. He entirely neglects to mention that his ‘bridge’ is merely a pragmatic, institutional one, that cognitive science remains, despite decades of research and billions of dollars in resources, unable to formulate its explananda, let alone explain them. He mentions a great number of philosophers, but he fails to mention what the presence of those philosophers in his thetic wheelhouse means.

All he ultimately has, on the one hand, is a kind of ‘ta-da’ argument, the exhaustive statistical inventory of the bounty of reason, science, and humanism, and on the other hand (which he largely keeps hidden behind his back), he has the ‘tu quoque,’ the question-begging presumption that one can only argue against reason (as it is traditionally understood) by presupposing reason (as it is traditionally understood). “We don’t believe in reason,” he writes, “we use reason” (352). Pending any scientific verdict on the nature of ‘reason,’ however, these kinds of transcendental arguments amount to little more than fancy foot-stomping.

This is one of those books that make me wish I could travel back in time to catch the author drafting notes. So much brilliance, so much erudition, all devoted to beating straw—at least as far as ‘Second Culture’ Enlightenment critiques are concerned. Nietzsche is the most glaring example. Ignoring Nietzsche the physiologist, the empirically-minded skeptic, and reducing him to his subsequent misappropriation by fascist, existential, and postmodernist thought, Pinker writes:

Disdaining the commitment to truth-seeking among scientists and Enlightenment thinkers, Nietzsche asserted that “there are no facts, only interpretations,” and that “truth is a kind of error without which a certain species of life could not live.” (Of course, this left him unable to explain why we should believe that those statements are true.) 446

Although it’s true that Nietzsche (like Pinker) lacked any scientifically compelling theory of cognition, what he did understand was its relation to power, the fact that “when you face an adversary alone, your best weapon may be an ax, but when you face an adversary in front of a throng of bystanders, your best weapon may be an argument” (415). To argue that all knowledge is contextual isn’t to argue that all knowledge is fundamentally equal (and therefore not knowledge at all), only that it is bound to its time and place, a creature possessing its own ecology, its own conditions of failure and flourishing. The Nietzschean thought experiment is actually quite a simple one: What happens when we turn Enlightenment skepticism loose upon Enlightenment values? For Nietzsche, Enlightenment Now, though it regularly pays lip service to the ramshackle, reversal-prone nature of progress, serves to conceal the empirical fact of cognitive ecology, that we remain, for all our enlightened noise-making to the contrary, animals bent on minimizing discrepancies. The Enlightenment only survives its own skepticism, Nietzsche thought, in the transvaluation of value, which he conceived—unfortunately—in atavistic or morally regressive terms.

This underwrites the subsequent critique of the Enlightenment we find in Adorno—another thinker whom Pinker grossly underestimates. Though science is able to determine the more—to provide more food, shelter, security, etc.—it has the social consequence of underdetermining (and so undermining) the better, stranding civilization with a nihilistic consumerism, where ‘meaningfulness’ becomes just another commodity, which is to say, nothing meaningful at all. Adorno’s whole diagnosis turns on the way science monopolizes rationality, the way it renders moral discourses like Pinker’s mere conjectural exercises (regarding the value of certain values), turning on leaps of faith (on the nature of cognition, etc.), bound to dissolve into disputation. Although both Nietzsche and Adorno believed science needed to be understood as a living, high dimensional entity, neither harboured any delusions as to where they stood in the cognitive pecking order. Unlike Pinker.

Whatever their failings, Nietzsche and Adorno glimpsed a profound truth regarding ‘reason, science, humanism, and progress,’ one that lurks throughout Pinker’s entire account. Both understood that cognition, whatever it amounts to, is ecological. Steven Pinker’s claim to fame, of course, lies in the cognitive ecological analysis of different cultural phenomena—this was the whole reason I was so keen to read this book. (In How the Mind Works, for instance, he famously calls music ‘auditory cheesecake.’) Nevertheless, I think both Nietzsche and Adorno understood the ecological upshot of the Enlightenment in a way that Pinker, as an avowed humanist, simply cannot. In fact, Pinker need only follow through on his modus operandi to see how and why the Enlightenment is not what he thinks it is—as well as why we have good reason to fear that Trumpism is no ‘blip.’

Time and again Pinker casts the process of Enlightenment, the movement away from our tutelary natures, as a conflict between ancestral cognitive predilections and scientifically and culturally revolutionized environments. “Humans today,” he writes, “rely on cognitive faculties that worked well enough in traditional societies, but which we now see are infested with bugs” (25). And the number of bugs that Pinker references in the course of the book is nothing short of prodigious. We tend to estimate frequencies according to ease of retrieval. We tend to fear losses more than we hope for gains. We tend to believe as our group believes. We’re prone to tribalism. We tend to forget past misfortune, and to succumb to nostalgia. The list goes on and on.

What redeems us, Pinker argues, is the human capacity for abstraction and combinatorial recursion, which allows us to endlessly optimize our behaviour. We are a self-correcting species:

So for all the flaws in human nature, it contains the seeds of its own improvement, as long as it comes up with norms and institutions that channel parochial interests into universal benefits. Among those norms are free speech, nonviolence, cooperation, cosmopolitanism, human rights, and an acknowledgment of human fallibility, and among the institutions are science, education, media, democratic government, international organizations, and markets. Not coincidentally, these were the major brainchildren of the Enlightenment. 28

We are the products of ancestral cognitive ecologies, yes, but our capacity for optimizing our capacities allows us to overcome our ‘flawed natures,’ become something better than what we were. “The challenge for us today,” Pinker writes, “is to design an informational environment in which that ability prevails over the ones that lead us into folly” (355).

And here we encounter the paradox that Enlightenment Now never considers, even though Pinker presupposes it continually. The challenge for us today is to construct an informational environment that mitigates the problems arising out of our previous environmental constructions. The ‘bugs’ in human nature that need to be fixed were once ancestral features. What has rendered these adaptations ‘buggy’ is nothing other than the ‘march of progress.’ A central premise of Enlightenment Now is that human cognitive ecology, the complex formed by our capacities and our environments, has fallen out of whack in this way or that, cuing us to apply atavistic modes of problem-solving out of school. The paradox is that the very bugs Pinker thinks only the Enlightenment can solve are the very bugs the Enlightenment has created.

What Nietzsche and Adorno glimpsed, each in their own murky way, was a recursive flaw in Enlightenment logic, the way the rationalization of everything meant the rationalization of rationalization, and how this has to short-circuit human meaning. Both saw the problem in the implementation, in the physiology of thought and community, not in the abstract. So where Pinker seeks to “to restate the ideals of the Enlightenment in the language and concepts of the 21st century” (5), we can likewise restate Nietzsche and Adorno’s critiques of the Enlightenment in Pinker’s own biological idiom.

The problem with the Enlightenment is a cognitive ecological problem. The technical (rational and technological) remediation of our cognitive ecologies transforms those ecologies, generating the need for further technical remediation. Our technical cognitive ecologies are thus drifting ever further from our ancestral cognitive ecologies. Human sociocognition and metacognition in particular are radically heuristic, and as such dependent on countless environmental invariants. Before even considering more, smarter intervention as a solution to the ambient consequences of prior interventions, the big question has to be how far—and how fast—can humanity go? At what point (or what velocity) does a recognizably human cognitive ecology cease to exist?

This question has nothing to do with nostalgia or declinism, no more than any question of ecological viability in times of environmental transformation. It also clearly follows from Pinker’s own empirical commitments.

 

The Death of Progress (at the Hand of Progress)

The formula is simple. Enlightenment reason solves natures, allowing the development of technology, generally relieving humanity of countless ancestral afflictions. But Enlightenment reason is only now solving its own nature. Pinker, in the absence of that solution, is arguing that the formula remains reliable if not quite as simple. And if all things were equal, his optimistic induction would carry the day—at least for me. As it stands, I’m with Nietzsche and Adorno. All things are not equal… and we would see this clearly, I think, were it not for the intentional obscurities comprising humanism. Far from the latest, greatest hope that Pinker makes it out to be, I fear humanism constitutes yet another nexus of traditional intuitions that must be overcome. The last stand of ancestral authority.

I agree this conclusion is catastrophic, “the greatest intellectual collapse in the history of our species” (vii), as an old polemical foe of Pinker’s, Jerry Fodor (1987), calls it. Nevertheless, short of grasping this conclusion, I fear we court a disaster far greater still.

Hitherto, the light cast by the Enlightenment left us largely in the dark, guessing at the lay of interior shadows. We can mathematically model the first instants of creation, and yet we remain thoroughly baffled by our ability to do so. So far, the march of moral progress has turned on revolutionizing our material environments: we need only renovate our self-understanding enough to accommodate this revolution. Humanism can be seen as the ‘good enough’ product of this renovation, a retooling of folk vocabularies and folk reports to accommodate the radical environmental and interpersonal transformations occurring around them. The discourses are myriad, the definitions are endlessly disputed, nevertheless humanism provisioned us with the cognitive flexibility required to flourish in an age of environmental disenchantment and transformation. Once we understand the pertinent facts of human cognitive ecology, its status as an ad hoc ‘tutelary nature’ becomes plain.

Just what are these pertinent facts? First, there is a profound distinction between natural or causal cognition, and intentional cognition. Developmental research shows that infants begin exhibiting distinct physical versus psychological cognitive capacities within the first year of life. Research into Asperger Syndrome (Baron-Cohen et al 2001) and Autism Spectrum Disorder (Binnie and Williams 2003) consistently reveals a cleavage between intuitive social cognitive capacities, ‘theory-of-mind’ or ‘folk psychology,’ and intuitive mechanical cognitive capacities, or ‘folk physics.’ Intuitive social cognitive capacities demonstrate significant heritability (Ebstein et al 2010, Scourfield et al 1999) in twin and family studies. Adults suffering Williams Syndrome (a genetic developmental disorder affecting spatial cognition) demonstrate profound impairments on intuitive physics tasks, but not intuitive psychology tasks (Kamps et al 2017). The distinction between intentional and natural cognition, in other words, is not merely a philosophical assertion, but a matter of established scientific fact.

Second, cognitive systems are mechanically intractable. From the standpoint of cognition, the most significant property of cognitive systems is their astronomical complexity: to solve for cognitive systems is to solve for what are perhaps the most complicated systems in the known universe. The industrial scale of the cognitive sciences provides dramatic evidence of this complexity: the scientific investigation of the human brain arguably constitutes the most massive cognitive endeavor in human history. (In the past six fiscal years, from 2012 to 2017, the National Institutes of Health [21/01/2017] alone will have spent more than 113 billion dollars funding research bent on solving some corner of the human soul. This includes, in addition to the neurosciences proper, research into Basic Behavioral and Social Science (8.597 billion), Behavioral and Social Science (22.515 billion), Brain Disorders (23.702 billion), Mental Health (13.699 billion), and Neurodegeneration (10.183 billion)).

Despite this intractability, however, our cognitive systems solve for cognitive systems all the time. And they do so, moreover, expending imperceptible resources and absent any access to the astronomical complexities responsible—which is to say, given very little information. Which delivers us to our third pertinent fact: the capacity of cognitive systems to solve for cognitive systems is radically heuristic. It consists of ‘fast and frugal’ tools, not so much sacrificing accuracy as applicability in problem-solving (Todd and Gigerenzer 2012). When one cognitive system solves for another it relies on available cues, granular information made available via behaviour, utterly neglecting the biomechanical information that is the stock-in-trade of the cognitive sciences. This radically limits their domain of applicability.
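Todd and Gigerenzer’s ‘take-the-best’ is one canonical fast-and-frugal tool, and a minimal sketch shows the logic at work: cues are consulted one at a time, in order of validity, and the first cue that discriminates decides, with everything else neglected. (All cue names and data below are hypothetical illustrations, not anything from the literature.) It also shows how such a tool goes blind when two different things present identical cues—the frog’s fishhook problem in miniature.

```python
# A minimal sketch of a 'fast and frugal' heuristic in the spirit of
# take-the-best: consult binary cues in order of validity, and let the
# first discriminating cue decide. All cues and values are hypothetical.

def take_the_best(option_a, option_b, cues):
    """Pick between two options using ordered binary cues.

    option_a, option_b: dicts mapping cue name -> 0 or 1
    cues: cue names sorted from most to least valid
    Returns 'a', 'b', or 'guess' if no cue discriminates.
    """
    for cue in cues:
        a, b = option_a.get(cue, 0), option_b.get(cue, 0)
        if a != b:                        # first discriminating cue wins;
            return 'a' if a > b else 'b'  # all remaining cues are neglected
    return 'guess'

# Hypothetical cues a frog might sample for 'fly versus fishhook':
cues_by_validity = ['moves', 'small', 'airborne']
fly      = {'moves': 1, 'small': 1, 'airborne': 1}
fishhook = {'moves': 1, 'small': 1, 'airborne': 1}  # same cues sampled!

# Identical cues leave the heuristic unable to discriminate:
print(take_the_best(fly, fishhook, cues_by_validity))  # -> 'guess'
```

The cheapness is the point: the heuristic never consults more than the first discriminating cue, which works wonderfully so long as the environment guarantees that cues and outcomes covary—and fails systematically the moment something (a fishhook, a counterfeit cue) breaks that guarantee.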

The heuristic nature of intentional cognition is evidenced by the ease with which it is cued. Thus, the fourth pertinent fact: intentional cognition is hypersensitive. Anthropomorphism, the attribution of human cognitive characteristics to systems possessing none, evidences the promiscuous application of human intentional cognition to intentional cues, our tendency to run afoul of what might be called intentional pareidolia, the disposition to cognize minds where no minds exist (Waytz et al 2014). The Heider-Simmel illusion, an animation consisting of no more than shapes moving about a screen, dramatically evidences this hypersensitivity, insofar as viewers invariably see versions of a romantic drama (Heider and Simmel 1944). Research in Human-Computer Interaction continues to explore this hypersensitivity in a wide variety of contexts involving artificial systems (Nass and Moon 2000, Appel et al 2012). The identification and exploitation of our intentional reflexes have become a massive commercial research project (so-called ‘affective computing’) in its own right (Yonck 2017).

Intentional pareidolia underscores the fact that intentional cognition, as heuristic, is geared to solve a specific range of problems. In this sense, it closely parallels facial pareidolia, the tendency to cognize faces where no faces exist. Intentional cognition, in other words, is both domain-specific, and readily misapplied.

The incompatibility between intentional and mechanical cognitive systems, then, is precisely what we should expect, given the radically heuristic nature of the former. Humanity evolved in shallow cognitive ecologies, mechanically inscrutable environments. Only the most immediate and granular causes could be cognized, so we evolved a plethora of ways to do without deep environmental information, to isolate saliencies correlated with various outcomes (much as machine learning does).

Human intentional cognition neglects the intractable task of cognizing natural facts, leaping to conclusions on the basis of whatever information it can scrounge. In this sense it’s constantly gambling that certain invariant backgrounds obtain, or conversely, that what it sees is all that matters. This is just another way to say that intentional cognition is ecological, which in turn is just another way to say that it can degrade, even collapse, given the loss of certain background invariants.

The important thing to note here, of course, is how Enlightenment progress appears to be ultimately inimical to human intentional cognition. We can only assume that, over time, the unrestricted rationalization of our environments will gradually degrade, then eventually overthrow, the invariances sustaining intentional cognition. The argument is straightforward:

1) Intentional cognition depends on cognitive ecological invariances.

2) Scientific progress entails the continual transformation of cognitive ecological invariances.

Thus, 3) scientific progress entails the collapse of intentional cognition.

But this argument oversimplifies matters. To see as much one need only consider the way a semantic apocalypse—the collapse of intentional cognition—differs from, say, a nuclear or zombie apocalypse. The Walking Dead, for instance, abounds with savvy applications of intentional cognition. The physical systems underwriting meaning, in other words, are not the same as the physical systems underwriting modern civilization. So long as some few of us linger, meaning lingers.

Intentional cognition, you might think, is only as weak or as hardy as we are. No matter what the apocalyptic scenario, if humans survive it survives. But as autistic spectrum disorder demonstrates, this is plainly not the case. Intentional cognition possesses profound constitutive dependencies (as those suffering the misfortune of watching a loved one succumb to strokes or neurodegenerative disease know first-hand). Research into the psychological effects of solitary confinement, on the other hand, shows that intentional cognition possesses profound environmental dependencies as well. Starve the brain of intentional cues, and it will eventually begin to invent them.

The viability of intentional cognition, in other words, depends not on us, but on a particular cognitive ecology peculiar to us. The question of the threshold of a semantic apocalypse becomes the question of the stability of certain onboard biological invariances correlated to a background of certain environmental invariances. Change the constitutive or environmental invariances underwriting intentional cognition too much, and you can expect it will crash, generate more problems than solutions.

The hypersensitivity of intentional cognition, whether evinced by solitary confinement or more generally by anthropomorphism, demonstrates the threat of systematic misapplication, the mode’s dependence on cue authenticity. (Sherry Turkle’s (2007) concerns regarding ‘Darwinian buttons,’ or Deidre Barrett’s (2010) with ‘supernormal stimuli,’ touch on this issue). So, one way of inducing semantic apocalypse, we might surmise, lies in the proliferation of counterfeit cues, information that triggers intentional determinations that confound, rather than solve, any problems. One way to degrade cognitive ecologies, in other words, is to populate environments with artifacts cuing intentional cognition ‘out of school,’ which is to say, circumstances cheating or crashing them.

The morbidity of intentional cognition demonstrates the mode’s dependence on its own physiology. What makes this more than platitudinal is the way this physiology is attuned to the greater, enabling cognitive ecology. Since environments always vary while cognitive systems remain the same, changing the physiology of intentional cognition impacts every intentional cognitive ecology—not only for oneself, but for the rest of humanity as well. Just as our moral cognitive ecology is complicated by the existence of psychopaths, individuals possessing systematically different ways of solving social problems, the existence of ‘augmented’ moral cognizers complicates our moral cognitive ecology as well. This is important because you often find it claimed in transhumanist circles (see, for example, Buchanan 2011), that ‘enhancement,’ the technological upgrading of human cognitive capacities, is what guarantees perpetual Enlightenment. What better way to optimize our values than by reengineering the biology of valuation?

Here, at last, we encounter Nietzsche’s question cloaked in 21st century garb.

And here we can also see where the above argument falls short: it overlooks the inevitability of engineering intentional cognition to accommodate constitutive and environmental transformations. The dependence upon cognitive ecologies asserted in (1) is actually contingent upon the ecological transformation asserted in (2).

1) Intentional cognition depends on constitutive and environmental cognitive ecological invariances.

2) Scientific progress entails the continual transformation of constitutive and environmental cognitive ecological invariances.

Thus, 3) scientific progress entails the collapse of intentional cognition short of remedial constitutive transformations.

What Pinker would insist is that enhancement will allow us to overcome our Pleistocene shortcomings, and that our hitherto inexhaustible capacity to adapt will see us through. Even granting the technical capacity to so remediate, the problem with this reformulation is that transforming intentional cognition to account for transforming social environments automatically amounts to a further transformation of social environments. The problem, in other words, is that Enlightenment entails the end of invariances, the end of shared humanity, in fact. Yuval Harari (2017) puts it with characteristic brilliance in Homo Deus:

What then, will happen once we realize that customers and voters never make free choices, and once we have the technology to calculate, design, or outsmart their feelings? If the whole universe is pegged to the human experience, what will happen once the human experience becomes just another designable product, no different in essence from any other item in the supermarket? 277

The former dilemma is presently dominating the headlines and is set to be astronomically complicated by the explosion of AI. The latter we can see rising out of literature, clawing its way out of Hollywood, seizing us with video game consoles, engulfing ever more experiential bandwidth. And as I like to remind people, 100 years separates the Blu-Ray from the wax phonograph.

The key to blocking the possibility that the transformative potential of (2) can ameliorate the dependency in (1) lies in underscoring the continual nature of the changes asserted in (2). A cognitive ecology where basic constitutive and environmental facts are in play is no longer recognizable as a human one.

Scientific progress entails the collapse of intentional cognition.

On this view, the coupling of scientific and moral progress is a temporary affair, one doomed to last only so long as cognition itself remained outside the purview of Enlightenment cognition. So long as astronomical complexity assured that the ancestral invariances underwriting cognition remained intact, the revolution of our environments could proceed apace. Our ancestral cognitive equilibria need not be overthrown. In place of materially actionable knowledge regarding ourselves, we developed ‘humanism,’ a sop for rare stipulation and ambient disputation.

But now that our ancestral cognitive equilibria are being overthrown, we should expect scientific and moral progress will become decoupled. And I would argue that the evidence of this is becoming plainer with the passing of every year. Next week, we’ll take a look at several examples.

I fear Donald Trump may be just the beginning.


References

Appel, Jana, von der Putten, Astrid, Kramer, Nicole C. and Gratch, Jonathan 2012, ‘Does Humanity Matter? Analyzing the Importance of Social Cues and Perceived Agency of a Computer System for the Emergence of Social Reactions during Human-Computer Interaction’, in Advances in Human-Computer Interaction 2012 <https://www.hindawi.com/journals/ahci/2012/324694/ref/>

Barrett, Deidre 2010, Supernormal Stimuli: How Primal Urges Overran Their Original Evolutionary Purpose (New York: W.W. Norton)

Binnie, Lynne and Williams, Joanne 2003, ‘Intuitive Psychology and Physics Among Children with Autism and Typically Developing Children’, Autism 7

Buchanan, Allen 2011, Better than Human: The Promise and Perils of Enhancing Ourselves (New York: Oxford University Press)

Ebstein, R.P., Israel, S, Chew, S.H., Zhong, S., and Knafo, A. 2010, ‘Genetics of human social behavior’, in Neuron 65

Fodor, Jerry A. 1987, Psychosemantics: The Problem of Meaning in the Philosophy of Mind (Cambridge, MA: The MIT Press)

Harari, Yuval 2017, Homo Deus: A Brief History of Tomorrow (New York: HarperCollins)

Heider, Fritz and Simmel, Marianne 1944, ‘An Experimental Study of Apparent Behaviour,’ in The American Journal of Psychology 57

Kamps, Frederik S., Julian, Joshua B., Battaglia, Peter, Landau, Barbara, Kanwisher, Nancy and Dilks Daniel D 2017, ‘Dissociating intuitive physics from intuitive psychology: Evidence from Williams syndrome’, in Cognition 168

Nass, Clifford and Moon, Youngme 2000, ‘Machines and Mindlessness: Social Responses to Computers’, Journal of Social Issues 56

Pinker, Steven 1997, How the Mind Works (New York: W.W. Norton)

—. 2018, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (New York: Viking)

Scourfield J., Martin N., Lewis G. and McGuffin P. 1999, ‘Heritability of social cognitive skills in children and adolescents’, British Journal of Psychiatry 175

Todd, P. and Gigerenzer, G. 2012, ‘What is ecological rationality?’, in Todd, P. and Gigerenzer, G. (eds.) Ecological Rationality: Intelligence in the World (Oxford: Oxford University Press) 3–30

Turkle, Sherry 2007, ‘Authenticity in the age of digital companions’, Interaction Studies 501-517

Waytz, Adam, Cacioppo, John, and Epley, Nicholas 2014, ‘Who Sees Human? The Stability and Importance of Individual Differences in Anthropomorphism’, Perspectives on Psychological Science 5

Yonck, Richard 2017, Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence (New York, NY: Arcade Publishing)

 

*Originally posted 20/03/2018

Division By Zero

by rsbakker

 

If we want to know what truth consists in, perhaps we should ask what it is we are building up and tearing down when we make cases for and against the truth.

Like so many others, I found myself riveted by the Kavanaugh confirmation hearings. (My money is on Ford, not simply because I found her testimony compelling, but because her story implicates someone doomed to corroborate Kavanaugh—not the kind of detail you would expect to find in a partisan hit job). Aside from the unsettling realization that mainstream Senate Republicans—as well as Kavanaugh himself!—had adopted Trump’s ‘post-truth’ playbook, what struck me was the precarious way Rachel Mitchell’s questions were poised between ‘victim blaming’ and simple ‘fact finding.’ Had Brett Kavanaugh sexually assaulted her? Right from the beginning, Mitchell began asking questions regarding the provenance and circumstances of her accusation, the implication being that she had been coached by partisan handlers. (As it turns out, she wasn’t). But she was also careful to map the limits of Ford’s memory of the event, the insinuation being that her cognitive capacities could not be trusted. (The problem with this approach, as it turns out, was that Ford, as a psychologist, knows quite a bit about the cognitive capacities at issue, and so was able to identify those limits as precisely the kind of limits one should expect in cases such as hers).

Victim blaming is so instinctive, so common, that we often have difficulty recognizing it as such. Accusing our accusers is a go-to human strategy for managing interpersonal conflict. People are credulous. In the absence of information to the contrary, ‘warning flags,’ we simply take assertions for granted; we trust that everything neglected, everything from cognitive capacity to motivation to circumstances, is irrelevant to the reliability of the claim. Human cognitive reliability, it turns out, depends on a tremendous number of physical factors, which is why impugning the reliability of claims is so dreadfully easy. At one point, Mitchell even insinuates (citing Geiselman and Fisher) that Ford compromised her story by communicating it absent specially trained trauma interviewers. Mitchell goes so far, in other words, as to suggest the very format of the ongoing Senate hearing had impacted the reliability of her account. (This is where I thought her downright insidious (especially given her use of humour at this turn), but as it turns out, she was probably being too subtle given that many see this as Mitchell criticizing the Senate proceedings).

When the Republicans finally ditched Mitchell’s plane somewhere in the Atlantic, the attacks ranged the whole of constitutive and circumstantial relevance space (apropos the semantic apocalypse, we are fast approaching the point where crude topographies of this space can be mapped and algorithms developed to exploit it), a Quixotic charge of old white men that had to raise the hackles of even the most conservative women. Cognition requires we neglect countless constitutive and circumstantial factors. Neglect ensures that more information is always required to flag potential constitutive and circumstantial confounds. Thus, the spectacle of old men competing for Fox News clips, each of them insisting on the relevance of something pertaining to the production of her claims. We’re not disputing something happened, but how do you know it was Brett? 36 years! Multiple denials!

From the outset, the Republicans had made a calculation: to cue moral outrage at the Democrats, and thus ingroup solidarity among conservatives, regardless of gender. From the outset they understood the peril of cuing outrage against male politicians and ingroup solidarity among women. Having Rachel Mitchell question her prevented cuing competing identifications, not to mention the politically disastrous scripts falling out of them. The Democratic strategy, of course, was to cue both channels, lending them, I think, an intrinsic advantage. (The Republican charge that the Democrats are engineering these accusations for the purposes of political advantage is false, but there’s little doubt that they are gaming them, and as the semantic apocalypse deepens, I think we should expect the production of reputation-destroying realities to become big business). If ‘trust’ is understood as the degree to which we do not, blindly or otherwise, interrogate constitutive and circumstantial factors relevant to the claims of others, the enormous importance of group affiliation becomes obvious. Think of the amount of energy expended these past days, all bent on preventing or protecting the default: that Christine Blasey Ford speaks true. Group identity cues trust, which is to say, spares us the expense of such interrogations.

Think of truth as merely the degree to which we can take constitutive and circumstantial factors for granted relative to behavioural feedback. Truth is where neglect, brute insensitivity to otherwise relevant constitutive and circumstantial factors, does not matter. Christine Blasey Ford ‘speaks true,’ therefore, when she speaks as one who endured the violence described, nothing more or less. (The disquotational parallel is no coincidence here, I think: what disquotation captures is the primary function of truth talk, to troubleshoot issues involving constitutive and circumstantial factors). If we can take constitutive and circumstantial factors for granted, then third-party investigations of her claims should raise no flags. Our trust should be vindicated.

But there’s a catch. Even when we investigate constitutive and circumstantial factors, we continue to neglect a great many of them as such, relying instead on a variety of heuristic work-arounds. The inaccessibility of the constitutive and circumstantial means we have to troubleshoot constitutive and circumstantial problems absent any reference to their high-dimensional reality. The question of truth, far from a question regarding what can be taken for granted relative to behavioural feedback, becomes a question of whatever happens to be available for deliberative troubleshooting: typically, the claim-maker, the claim, and the world. As a result, we have no idea just what we’re doing when embroiled in spectacles such as Kavanaugh’s Senate confirmation hearing. Everyone is left guessing, groping. The nature of the breakdowns eludes us entirely.

If a claim regards something existent, an undiscovered species of possum, say, the easiest way to verify the truth of the claim is to simply go out and ‘see for yourself’: so far as our capacities and circumstances remain irrelevant and we see the possum, the claim is true. The absence of empirical discrepancies between cognitive systems allows those cognitive systems to continue neglecting their constitution and circumstances, to rely upon other brains the way we rely upon our own: blindly. Call this ‘default synchronization’: the constitutive and circumstantial coincidence required for cooperative behaviour regarding things like new species of possum. Seeing, as the saying goes, is believing.

This, as it turns out, is one of the few ways truth can overcome trust.

If, however, a claim regards something only indirectly accessible, an ‘alleged event’ or a ‘scientific theory,’ say, we have to rely on its consistency with whatever is relevant and accessible, ‘evidence.’ And when that evidence consists of reports, more claims, then the threat is always that our original problem will simply metastasize, and the interrogation of constitutive and circumstantial factors will be multiplied to more and more claims. Both sides frame the claims of the other side as artifacts, manipulations, while they view their own claims as windows, glimpses of truth (or failing that, self-defensive artifacts in service of that truth). The claims of both are equally artifactual, of course, both equally the product of biology and environment. The difference consists only in that behaviour can remain entirely insensitive to the artifactuality of the true claim without running aground. Just as with vision. The window works so well as a figure for truth because visual cognition likewise neglects its constitutive dimension. Visual cognition provides experience with a tremendous amount of information, going so far as to index its reliability (with blur, darkness, glare, and so on), while providing nary a whiff of the machinations responsible. (You could say the so-called ‘view from nowhere’ is literal to the extent ‘nowhere’ references neglect of the constitutive and circumstantial conditions of our view.)

To call attention to constitutive and circumstantial problems is to ‘muddy the waters,’ to scotch the illusion of transparency, and so conserve in-group solidarity. We evolved to manipulate the orientations of isomorphic systems, to husband and herd the constitutive and circumstantial coincidence of those we trust according to how far we trust them. (Representationalism merely adapts and schematizes this basic capacity, thus saddling the whole of cognition with, among other things, the problem of ‘transparency,’ which is to say, an ontologization of constitutive and circumstantial neglect). We reason with one another. Neglect assures that we do so blindly, without the least second-order inkling of what is actually going on. If ‘reason’ is a lesser tool, a neurolinguistic means of policing discrepancies—effecting ‘noise reduction’—within ingroups, as it pretty clearly seems to be in instances such as these, then the ‘rationality’ of something like the Kavanaugh confirmation hearings requires some minimal coincidence, some tendency to identify with as opposed to against, and so to either neglect or overlook the same things. A spontaneous ‘kumbaya’ moment, or something… something information technology is rendering all but impossible.

Either that or some kind of ‘transparency event,’ a Burning of the Reichstag, only in the context of Kavanaugh’s or Ford’s life, something powerful enough to cue trans-group identification.

Or what amounts to the same thing: a common truth.

We’re Fucked. So (Now) What?

by rsbakker

“Conscious self-creation.” This is the nostrum Roy Scranton offers at the end of his now notorious piece, “We’re Doomed. Now What?” Conscious self-creation is the ‘now what,’ the imperative that we must carry across the threshold of apocalypse. After spending several weeks in the company of children I very nearly wept reading this in his latest collection of essays. I laughed instead.

I understand the logic well enough. Social coordination turns on trust, which turns on shared values, which turns on shared narratives. As Scranton writes, “Humans have survived and thrived in some of the most inhospitable environments on Earth, from the deserts of Arabia to the ice fields of the Arctic, because of this ability to organize collective life around symbolic constellations of meaning.” If our imminent self-destruction is the consequence of our traditional narratives, then we, quite obviously, need to come up with better narratives. “We need to work together to transform a global order of meaning focused on accumulation into a new order of meaning that knows the value of limits, transience, and restraint.”

If I laughed, it was because Scranton’s thesis is nowhere near so radical as his title might imply. It consists, on the one hand, in the truism that human survival depends on engineering an environmentally responsible culture, and on the other, the pessimistic claim that this engineering can only happen after our present (obviously irresponsible) culture has self-destructed. The ‘now what,’ in other words, amounts to the same-old same-old, only après le deluge. Just another goddamn narrative.

Scranton would, of course, take issue with my ‘just another goddamn’ modifier. As far as he’s concerned, the narrative he outlines is not just any narrative, it’s THE narrative. And, as the owner of a sophisticated philosophical position, he could endlessly argue its moral and ecological superiority… the same as any other theoretician. And therein lies the fundamental problem. Traditional philosophy is littered with bids to theorize and repair meaning. The very plasticity allowing for its rehabilitation also attests to its instability, which is to say, our prodigious ability to cook narratives up and our congenital inability to make them stick.

Thus, my sorrow, and my fear for children. Scranton, like nearly every soul writing on these topics, presumes our problem lies in the content of our narratives rather than their nature.

Why, for instance, presume meaning will survive the apocalypse? Even though he rhetorically stresses the continuity of nature and meaning, Scranton nevertheless assumes the independence of the latter. But why? If meaning is fundamentally natural, then what in its nature renders it immune to ecological degradation and collapse?

Think about the instability referenced above, the difficulty we have making our narratives collectively compelling. This wasn’t always the case. For the vast bulk of human history, our narratives were simply given. Our preliterate ancestors evolved the plasticity required to adapt their coordinating stories (over the course of generations) to the demands of countless different environments—nothing more or less. The possibility of alternative narratives, let alone ‘conscious self-creation,’ simply did not exist given the metacognitive resources at their disposal. They could change their narrative, to be sure, but incrementally, unconsciously, not so much convinced it was the only game in town as unable to report otherwise.

Despite their plasticity, our narratives provided the occluded (and therefore immovable) frame of reference for all our sociocognitive determinations. We quite simply did not evolve to systematically question the meaning of our lives. The capacity to do so seems to have required literacy, which is to say, a radical transformation of our sociocognitive environment. Writing allowed our ancestors to transcend the limits of memory, to aggregate insights, to record alternatives, to regiment and to interrogate claims. Combined with narrative plasticity, literacy begat a semantic explosion, a proliferation of communicative alternatives that continues to accelerate to this present day.

This is biologically unprecedented. Literacy, it seems safe to say, irrevocably domesticated our ancestral cognitive habitat, allowing us to farm what we once gathered. The plasticity of meaning, our basic ability to adapt our narratives, is the evolutionary product of a particular cognitive ecology, one absent writing. Literacy, you could say, constitutes a form of pollution, something that disrupts preexisting adaptive equilibria. Aside from the cognitive bounty it provides, it has the long-term effect of destabilizing narratives—all narratives.

The reason we find such a characterization jarring is that we subscribe to a narrative (Scranton’s eminently Western narrative) that values literacy as a means of generating new meaning. What fool would argue for illiteracy (and in writing no less!)? No one I know. But the fact remains that with literacy, certain ancestral functions of narrative were doomed to crash. Where once there was blind trust in our meanings, we find ourselves afflicted with questions, forced to troubleshoot what our ancestors took for granted. (This is the contradiction dwelling in the heart of all post-modernisms: the valuation of the very process devaluing meaning, crying ‘More is better!’ as those unable or unwilling to tread water drown).

The biological origins of narrative lie in shallow information cognitive ecologies, circumstances characterized by profound ignorance. What we cannot grasp we poke with sticks. Hitherto we’ve been able to exapt these capacities to great effect, raising a civilization that would make our story-telling ancestors weep, and for wonder far more than horror. But as with all heuristic systems, something must be taken for granted. Only so much can be changed before an ecology collapses altogether. And now we stand on the cusp of a communicative revolution even more profound than literacy, a proliferation, not simply of alternate narratives, but of alternate narrators.

If you sweep the workbench clean, cease looking at meaning as something somehow ‘anomalous’ or ‘transcendent,’ narrative becomes a matter of super-complicated systems, things that can be cut short by a heart attack or stroke. If you refuse to relinquish the meat (which is to say nature), then narratives, like any other biological system, require that particular background conditions obtain. Scranton’s error, in effect, is a more egregious version of the error Harari makes in Homo Deus, the default presumption that meaning somehow lies outside the circuit of ecology. Harari, recall, realizes that humanism, the ‘man-the-meaning-maker’ narrative of Western civilization, is doomed, but his low-dimensional characterization of the ‘intersubjective web of meaning’ as an ‘intermediate level of reality’ convinces him that some other collective narrative must evolve to take its place. He fails to see how the technologies he describes are actively replacing the ancestral social coordinating functions of narrative.

Scranton, perhaps hobbled by the faux-naturalism of Speculative Realism, cannot even concede the wholesale collapse of humanism, only those elements antithetical to environmental sustainability. His philosophical commitments effectively blind him to the intimate connection between the environmental crises he considers throughout the collection, and the semantic collapses he so eloquently describes in the final essay, “What is Thinking Good For?” Log onto the web, he writes, “and you’ll soon find yourself either nauseated by the vertigo that comes from drifting awash in endless waves of repetitive, clickbaity, amnesiac drek, or so benumbed and bedazzled by the sheer volume of ersatz cognition on display that you wind up giving in to the flow and welcoming your own stupefaction as a kind of relief.” Throughout this essay he hovers about, without quite touching, the idea of noise, how the technologically mediated ease of meaning production and consumption has somehow compromised our ability to reliably signal. Our capacity to arbitrate and select signals is an ecological artifact, historically dependent on the ancestral bottleneck of physical presence. Once a precious resource, like-minded commiseration has become cheap as dirt.

But since he frames the problem in the traditional register of ‘thought,’ an entity he acknowledges he cannot definitively define, he has no way of explaining what precisely is going wrong, and so finds himself succumbing to analogue nostalgia, Kantian shades. What is thinking good for? The interruption of cognitive reflex, which is to say, ‘freedom from tutelary natures.’ Thinking, genuine thinking, is a koan.

The problem, of course, is that we now know that it’s tutelary natures all the way down: deliberative interruption is itself a reflex, sometimes instinctive, sometimes learned, but dependent on heuristic cues all the same. ‘Freedom’ is a shallow information ecological artifact, a tool requiring certain kinds of environmental ignorance (an ancestral neglect structure) to reliably discharge its communicative functions. The ‘free will debate’ simply illustrates the myriad ways in which the introduction of mechanical information, the very information human sociocognition has evolved to do without, inevitably crashes the problem-solving power of sociocognition.

The point being that nothing fundamental—and certainly nothing ontological—separates the crash of thought and freedom from the crash of any other environmental ecosystem. Quite without realizing, Scranton is describing the same process in both essays, the global dissolution of ancestral ecologies, cognitive and otherwise. What he and, frankly, the rest of the planet need to realize is that between the two, the prospect of semantic apocalypse is actually both more imminent and more dire. The heuristic scripts we use to cognize biological intelligences are about to face an onslaught of evolutionarily unprecedented intelligences, ever-improving systems designed to cue human sociocognitive reflexes out of school. How long before we’re overrun by billions of ‘junk intelligences’? One decade? Two?

What happens when genuine social interaction becomes optional?

The age of AI is upon us. And even though it is undoubtedly the case that social cognition is heuristic—ecological—our blindness to our nature convinces us that we possess no such nature and so remain, in some respect (because strokes still happen), immune. Our ‘symbolic spaces’ will be deluged with invasive species, each optimized to condition us, to cue social reflexes—to “nudge” or to “improve user experience.” We’ll scoff at them, declare them stupid, even as we dutifully run through scripts they have cued.

So long as the residue of traditional humanistic philosophy persists, so long as we presume meaning exceptional, this prospect cannot even be conceived, let alone explored. The “evacuation of interiority,” as Scranton calls it, is always the other guy’s—metacognitive neglect assures experience cannot but appear fathomless, immovable. Therein lies the heartbreaking genius of our cognitive predicament: given the intractability of our biomechanical nature, our sociocognitive and metacognitive systems behave as though no such nature exists. We just… are—the deliverance of something inexplicable.

An apparent interruption in thought, in nature, something necessarily observing the ruin, rather than (as Nietzsche understood) embodying it. And so enthusiastically tearing down the last ecological staple sustaining meaning: that humans cue one another ignorant of those cues as such.

All deep environmental knowledge constitutes an unprecedented attenuation of our ancestral cognitive ecologies. Up to this point, the utilities extracted have far exceeded the utilities lost. Pinker is right in this one regard: modernity has been a fantastic deal. We could plunder the ecologies about us, while largely ignoring the ecologies between us. But now that science and technology are becoming cognitive, we ourselves are becoming the resources ripe for plunder, the ecology doomed to fragment and implode.

We’re fucked. So now what? We fight, clutch for flotsam, like any other doomed beetle caught upon the flood, not for any ‘reason,’ but because this is what beetles do, drowning.

Fight.

Framing “On Alien Philosophy”*

by rsbakker

 

Peter Hankins of Conscious Entities fame has a piece considering “On Alien Philosophy.” The debate is just getting started, but I thought it worthwhile explaining why I think this particular paper of mine amounts to more than just another interpretation to heap onto the intractable problem of ourselves.

Consider the four following claims:

1) We have biologically constrained (in terms of information access and processing resources) metacognitive capacities ancestrally tuned to the solution of various practical problem ecologies, and capable of exaptation to various other problems.

2) ‘Philosophical reflection’ constitutes such an exaptation.

3) All heuristic exaptations inherit, to some extent, the problem-solving limitations of the heuristic exapted.

4) ‘Philosophical reflection’ inherits the problem-solving limitations of deliberative metacognition.

Now I don’t think there’s much of anything controversial about any of these claims (though, to be certain, there are a great many devils lurking in the details adduced). So note what happens when we add the following:

5) We should expect human philosophical practice will express, in a variety of ways, the problem-solving limitations of deliberative metacognition.

Which seems equally safe. But note how the terrain of the philosophical debate regarding the nature of the soul has changed. Any claim purporting the exceptional nature of this or that intentional phenomenon now needs to run the gauntlet of (5). Why assume we cognize something ontologically exceptional when we know we are bound to be duped somehow? All things being equal, mediocre explanations will always trump exceptional ones, after all.

The challenge of (5) has been around for quite some time, but if you read (precritical) eliminativists like Churchland, Stich, or Rosenberg, this is where the battle grinds to a standstill. Why? Because they have no general account of how the inevitable problem-solving limitations of deliberative metacognition would be expressed in human philosophical practice, let alone how they would generate the appearance of intentional phenomena. Since all they have are promissory notes and suggestive gestures, ontologically exceptional accounts remain the only game in town. So, despite the power of (5), the only way to speak of intentional phenomena remains the traditional, philosophical one. Science is blind without theory, so absent any eliminativist account of intentional phenomena, it has no clear way to proceed with their investigation. So it hews to exceptional posits, trusting in their local efficacy, and assuming they will be demystified by discoveries to come.

Thus the challenge posed by Alien Philosophy. By giving real, abductive teeth to (5), my account overturns the argumentative terrain between eliminativism and intentionalism by transforming the explanatory stakes. It shows us how stupidity, understood ecologically, provides everything we need to understand our otherwise baffling intuitions regarding intentional phenomena. “On Alien Philosophy” challenges the Intentionalist to explain more with less (the very thing, of course, he or she cannot do).

Now I think I’ve solved the problem, that I have a way to genuinely naturalize meaning and cognition. The science will sort my pretensions in due course, but in the meantime, the heuristic neglect account of intentionality, given its combination of mediocrity and explanatory power, has to be regarded as a serious contender.

*Originally posted 02/17/2017

The Crash of Truth: A Critical Review of Post-Truth by Lee C. McIntyre

by rsbakker

Lee McIntyre is a philosopher of science at Boston University, and author of Dark Ages: The Case for a Science of Human Behaviour. I read Post-Truth on the basis of Fareed Zakaria’s enthusiastic endorsement on CNN’s GPS, so I fully expected to like it more than I ultimately did. It does an admirable job scouting the cognitive ecology of post-truth, but because it fails to understand that ecology in ecological terms, the dynamic itself remains obscured. The best McIntyre can do is assemble and interrogate the usual suspects. As a result, his case ultimately devolves into what amounts to yet another ingroup appeal.

As, perhaps, we should expect, given the actual nature of the problem.

McIntyre begins with a transcript of an interview where CNN’s Alisyn Camerota presses Newt Gingrich at the 2016 Republican convention on Trump’s assertions regarding crime:

GINGRICH: No, but what I said is equally true. People feel more threatened.

CAMEROTA: Feel it, yes. They feel it, but the facts don’t support it.

GINGRICH: As a political candidate, I’ll go with how people feel and let you go with the theoreticians.

There’s a terror you feel in days like these. I felt that terror most recently, I think, watching Sarah Huckabee Sanders insisting that the outgoing National Security Advisor, General H. R. McMaster, had declared that no one had been tougher on Russia than Trump after a journalist had quoted him saying almost exactly otherwise. I had been walking through the living-room and the exchange stopped me in my tracks. Never in my life had I witnessed a White House official so fecklessly, so obviously, contradict what everyone in the room had just heard. It reminded me of the psychotic episodes I witnessed as a young man working tobacco with a friend who suffered schizophrenia—only this was a social psychosis. Nothing was wrong with Sarah Huckabee Sanders. Rather than lying in malfunctioning neural machinery, this discrepancy lay in malfunctioning social machinery. She could say what she said because she knew that statements appearing incoherent to those knowing what H. R. McMaster had actually said would not appear as such to those ignorant of or indifferent to what he had actually said. She knew, in other words, that even though the journalists in the room saw this:

given the information available to their perspective, the audience that really mattered would see this:

which is to say, something rendered coherent for neglecting that information.

The task McIntyre sets himself in this brief treatise is to explain how such a thing could have come to pass, to explain, not how a sitting President could lie, but how he could lie without consequences. When Sarah Huckabee Sanders asserts that H. R. McMaster’s claim that the Administration is not doing enough is actually the claim that no Administration has done more, she’s relying on innumerable background facts that simply did not obtain a mere generation ago. The social machinery of truth-telling has fundamentally changed. If we look at the sideways picture of Disney’s faux New York skyline as the ‘deep information view,’ and the head-on picture as the ‘shallow information view,’ the question becomes one of how she could trust that her audience, despite the availability of deep information, would nevertheless affirm the illusion of coherence provided by the shallow information view. As McIntyre writes, “what is striking about the idea of post-truth is not just that truth is being challenged, but that it is being challenged as a mechanism for asserting political dominance.” Sanders, you could say, is availing herself of new mechanisms, ones antagonistic to the traditional mechanisms of communicating the semantic authority of deep information. Somehow, someway, the communication of deep information has ceased to command the kinds of general assent it once did. It’s almost preposterous on the face of it: in attributing Trump’s claims to McMaster, Sanders is gambling that somehow, either by dint of corruption, delusion, or neglect, her false claim will discharge functions ideally belonging to truthful claims, such as informing subsequent behaviour. For whatever reason, the circumstances once preventing such mass dissociations of deep and shallow information ecologies have yielded to circumstances that no longer do.

McIntyre provides a chapter-by-chapter account of those new circumstances. For reasons that will become apparent, I’ll skip his initial chapter, which he devotes to defining ‘post-truth,’ and return to it in the end.

Science Denial

He provides clear, pithy outlines of the history of the tobacco industry’s seminal decision to argue the science, to wage what amounts to an organized disinformation campaign. He describes the ways resource companies adapted these tactics to scramble the message and undermine the authority of climate science. And by ‘disinformation,’ he means this literally, given “that even while ExxonMobil was spending money to obfuscate the facts about climate change, they were making plans to explore new drilling opportunities in the Arctic once the polar ice cap had melted.” This part of the story is pretty well-known, I think, but McIntyre tells the tale in a way that pricks the numbness of familiarity, reminding us of the boggling scale of what these campaigns achieved: generating a political/cultural alliance that is not simply bent on, but actively hastening, untold misery and global economic loss in the name of short-term parochial economic gain.

Cognitive Bias

He gives a curiously (given his background) two-dimensional sketch of the role cognitive bias plays in the problem, focusing primarily on cognitive dissonance, our need to minimize cognitive discrepancies, and the backfire effect, how counter-arguments actually strengthen, as opposed to mitigate, commitment to positions. (I would recommend Steven Sloman and Philip Fernbach’s The Knowledge Illusion for a more thorough consideration of the dynamics involved). He discusses research showing the profound ways that social identification, even cued by things so flimsy as coloured wristbands, profoundly transforms our moral determinations. But he underestimates, I think, the profound nature of what Dan Kahan and his colleagues call the “Tragedy of the Risk-Perception Commons,” the individual rationality of espousing irrational collective claims. There’s so much research directly pertinent to his thesis that he passes over in silence, especially that belonging to ecological rationality.

Traditional versus Social Media

If McIntyre’s consideration of the cognitive science left me dissatisfied, I thoroughly enjoyed his consideration of media’s contribution to the problem of post-truth. He reminds us that the existence of entities, like Fox News, disguising advocacy as disinterested reporting, is the historical norm, not the exception. Disinterested journalistic reporting was more the result of how the AP, which served papers grinding different political axes, required stories expressing as little overt bias as possible. Rather than seize upon this ecological insight (more on this below), he narrates the gradual rise of television news from small, money-losing network endeavours, to money-making enterprises culminating in CNN, Fox, MSNBC, and the return of ‘yellow journalism.’

He provides a sobering assessment of the eclipse of traditional media, and the historically unprecedented rise of social media. Here, more than anywhere else, we find McIntyre taking steps toward a genuine cognitive ecological understanding of the problem:

“In the past, perhaps our cognitive biases were ameliorated by our interactions with others. It is ironic to think that in today’s media deluge, we could perhaps be more isolated from contrary opinion than when our ancestors were forced to live and work among other members of their tribe, village, or community, who had to interact with one another to get information.”

Since his understanding of the problem is primarily normative, however, he fails to see how cognitive reflexes that misfire in experimental contexts, and so strike observers as normative breakdowns, actually facilitate problem-solving in ancestral contexts. What he notes as ‘ironic’ should strike him (and everyone else) as astounding, as one of the doors that any adequate explanation of post-truth must kick down. But it is heartening, I have to say, to see these ideas begin to penetrate more and more brainpans. Despite the insufficiency of his theoretical tools, McIntyre glimpses something of the way cognitive technology has impacted human cognitive ecology: “Indeed,” he writes, “what a perfect storm for the exploitation of our ignorance and cognitive biases by those with an agenda to put forward.” But even if the ‘perfect storm’ metaphor captures the complex relational nature of what’s happened, it implies that we find ourselves suffering a spot of bad luck, and nothing more.

Postmodernism

At last he turns to the role postmodernism has played in all this. This is the only chapter where I smelled a ‘legacy effect,’ the sense that the author is trying to shoehorn in some independently published material.

He acknowledges that ‘postmodernism’ is hopelessly overdetermined, but he thinks two theses consistently rise above the noise: the first is that “there is no such thing as objective truth,” and the second is “that any profession of truth is nothing more than a reflection of the political ideology of the person who is making it.”

To his credit, he’s quick to pile on the caveats, to acknowledge the need to critique both the possibility of absolute truth and the social power of scientific truth-claims. Because of this, it quickly becomes apparent that his target isn’t so much ‘postmodernism’ as it is social constructivism, the thesis that ‘truth-telling,’ far from connecting us to reality, bullies us into affirming interest-serving constructs. This, as it turns out, is the best way to think of post-truth “[i]n its purest form,” as obtaining “when one thinks that the crowd’s reaction actually does change the facts about a lie.”

In other words, for McIntyre, post-truth is the consequence of too many people believing in social constructivism—presuming, that is, the wrong theory of truth. His approach to the question of post-truth is that of a traditional philosopher: if the failure is one of correspondence, then the blame has to lie with anti-correspondence theories of truth. The reason Sarah Huckabee Sanders could lie about McMaster’s final speech turns on (among other things) the widespread theoretical belief that ‘there is no such thing as objective truth,’ that it’s power plays all the way down.

Thus the (rather thick) irony of citing Daniel Dennett—an interpretivist!—stating that “what the postmodernists did was truly evil” so far as they bear responsibility “for the intellectual fad that made it respectable to be cynical about truth and facts.”

The sin of the postmodern left has very, very little to do with generating semantically irresponsible theories. Dennett’s own positions are actually a good deal more radical in this regard! When it comes to the competing narratives involving ‘meaning of’ questions and answers, Dennett knows we have no choice but to advert to the ‘dramatic idiom’ of intentionality. If the problem were one of providing theoretical ammunition, then Dennett is as much a part of the problem as Baudrillard.

And yet McIntyre caps Dennett’s assertion by asking, “Is there more direct evidence than this?” Not a shining moment, dialectically speaking.

I agree with him that tools have been lifted from postmodernists, but they have been lifted from pragmatists (Dennett’s ilk) as well. Talk of ‘stances’ and ‘language games’ is also rife on the right! And I should know. What’s happening now is the consequence of a trend that I’ve been battling since the turn of the millennium. All my novels constitute self-conscious attempts to short-circuit the conditions responsible for ‘post-truth.’ And I’ve spent thousands of hours trolling the alt-Right (before they were called such) trying to figure out what was going on. The longest online debate I ever had was with a fundamentalist Christian who belonged to a group using Thomas Kuhn to justify their belief in the literal truth of Genesis.

Defining Post-truth

Which brings us, as promised, back to the book’s beginning, the chapter that I skipped, where, in the course of refining his definition of post-truth, McIntyre acknowledges that no one knows what the hell truth is:

“It is important at this point to give at least a minimal definition of truth. Perhaps the most famous is that of Aristotle, who said: ‘to say of what is that it is not, or of what is not that it is, is false, while to say of what is that it is, and of what is not that it is not, is true.’ Naturally, philosophers have fought for centuries over whether this sort of “correspondence” view is correct, whereby we judge the truth of a statement only by how well it fits reality. Other prominent conceptions of truth (coherentist, pragmatist, semantic) reflect a diversity of opinion among philosophers about the proper theory of truth, even while—as a value—there seems little dispute that truth is important.”

He provides a minimal definition with one hand—truth as correspondence—which he immediately admits is merely speculative! Truth, he’s admitting, is both indispensable and inscrutable. And yet this inscrutability, he thinks, need not hobble the attempt to understand post-truth: “For now, however, the question at hand is not whether we have the proper theory of truth, but how to make sense of the different ways that people subvert truth.”

In other words, we don’t need to know what is being subverted to agree that it is being subverted. But this goes without saying; the question is whether we need to know what is being subverted to explain what Mcintyre is purporting to explain, namely, how truth is being subverted. How do we determine what’s gone wrong with truth when we don’t even know what truth is?

McIntyre begins Post-Truth, in other words, by admitting that no canonical formulation of his explanandum exists, that it remains a matter of mere speculation. Truth remains one of humanity’s confounding questions.

But if truth is in question, then shouldn’t the blame fall upon those who question truth? Perhaps the problem isn’t this or that philosophy so much as philosophy itself. We see as much at so many turns in McIntyre’s account:

“Why not doubt the mainstream news or embrace a conspiracy theory? Indeed, if news is just political expression, why not make it up? Whose facts should be dominant? Whose perspective is the right one? Thus is postmodernism the godfather of post-truth.”

Certainly, the latter two questions belong to philosophy as a whole, and not postmodernism in particular. To that extent, the two former questions—so far as they follow from the latter—have to be seen as falling out of philosophy in general, and not just some ‘philosophical bad apples.’

But does it make sense to blame philosophy, to suggest we should have never questioned the nature of truth? Of course not.

The real question, the one that I think any serious attempt to understand post-truth needs to reckon with, is the one McIntyre breezes by in the first chapter: Why do we find truth so difficult to understand?

On the one hand, truth seems to be crashing. On the other, we have yet to take a step beyond Aristotle when it comes to answering the question of the nature of truth. The latter is the primary obstacle, since the only way to truly understand the nature of the crash is to understand the nature of truth. Could the crash and the inscrutability of truth be related? Could post-truth somehow turn on our inability to explain truth?

Adaptive Anamorphosis

Truth lies murdered in the Calais Coach, and McIntyre has assembled all the suspects: denialism, cognitive biases, traditional and social media, and (though he knows it not) philosophy. He knows all of them had some part to play, either directly, or as accessories, but the Calais Coach remains locked—his crime scene is a black box. He doesn’t even have a body!

For me, however, post-truth is a prediction come to pass—a manifestation of what I’ve long called the ‘semantic apocalypse.’ Far from a perfect storm of suspects coming together in unlikely ways to murder ‘all of factual reality,’ it is an inevitable consequence of our rapidly transforming cognitive ecologies.

Biologically speaking, human communication and cooperation represent astounding evolutionary achievements. Human cognition is the most complicated thing human cognition has ever encountered: only now are we beginning to reverse-engineer its nature, and to use that knowledge to engineer unprecedented cognitive artifacts. We know that cognition is structurally and dynamically composite, heavily reliant on heuristic specialization to solve its social and natural environments. The astronomical complexity of human cognition means that sociocognition and metacognition are especially reliant on composite, source-insensitive systems, devices turning on available cues that correlate, given that various hidden regularities obtain, with specific outcomes. Despite being legion, we manage to synchronize with our fellows and our environments without the least awareness of the cognitive machinery responsible.

We suffer medial neglect, a systematic insensitivity to our own nature—a nature that includes this insensitivity. Like every other organism on this planet, we cognize without cognizing the concurrent act of cognition. Well, almost like every other organism. Where other species utterly depend on the reliability of their cognitive capacities, and have no way of repairing failures in various enabling—medial—systems, we do have recourse. Despite our blindness to the machinery of human cognition, we’ve developed a number of different ways to nudge that machinery—whack the TV set, you could say.

Truth-talk is one of those ways. Truth-talk allows us to minimize communicative discrepancies absent, once again, sensitivity to the complexities involved. Truth-talk provides a way to circumvent medial neglect, to resolve problems belonging to the enabling dimension of cognition despite our systematic insensitivity to the facts of that dimension. When medial issues—problems pertaining to cognitive function—arise, truth-talk allows for the metabolically inexpensive recovery of social and environmental synchronization. Incompatible claims can be sorted, at least so far as our ancestors required in prehistoric cognitive ecologies. The tribe can be healed, despite its profound ignorance of natures.

To say human cognition is heuristic is to say it is ecologically dependent, that it requires the neglected regularities underwriting the utility of our cues remain intact. Overthrow those regularities, and you overthrow human cognition. So, where our ancestors could simply trust the systematic relationship between retinal signals and environments while hunting, we have to remove our VR goggles before raiding the fridge. Where our ancestors could simply trust the systematic relationship between the text on the page or the voice in our ear and the existence of a fellow human, we have to worry about chatbots and ‘conversational user interfaces.’ Where our ancestors could automatically depend on the systematic relationship between their ingroup peers and the environments they reported, we need to search Wikipedia—trust strangers. More generally, where our ancestors could trust the general reliability (and therefore general irrelevance) of their cognitive reflexes, we find ourselves confronted with an ever growing and complicating set of circumstances where our reflexes can no longer be trusted to solve social problems.

The tribe, it seems, cannot be healed.

And, unfortunately, this is the very problem we should expect given the technical (tactical and technological) radicalization of human cognitive ecology.* Philosophy, and now, cognitive science, provide the communicative tactics required to neutralize (or ‘threshold’) truth-talk. Cognitive technologies, meanwhile, continually complicate the once direct systematic relationships between our suites of cognitive reflexes and our social and natural environments. The internet doesn’t simply render the sum of human knowledge available, it also renders the sum of human rationalization available as well. The curious and the informed, meanwhile, no longer need suffer the company of the incurious and the uninformed, and vice versa. The presumptive moral superiority of the former stands revealed: and in ever greater numbers the latter counter-identify, with a violence aggravated by phenomena such as the ‘online disinhibition effect.’ (One thing McIntyre never pauses to consider is the degree to which he and his ilk are hated, despised, so much so as to see partners in traditional foreign adversaries, and to think lies and slander simply redress lies and slander). Populations begin spontaneously self-selecting. Big data identifies the vulnerable, who are showered with sociocognitive cues—atrocity tales to threaten, caricatures to amuse—engineered to provoke ingroup identification and outgroup alienation. In addition to ‘backfiring,’ counter-arguments are perceived as weapons, evidence of outgroup contempt for you and your own. And as the cognitive tactics become ever more adept at manipulating our biases, ever more scientifically informed, and as the cognitive technology becomes ever more sophisticated, ever more destructive of our ancestral cognitive habitat, the break between the two groups, we should expect, will only become more, not less, profound.

None of this is intuitive, of course. Medial neglect means reflection is source-blind, and so inclined to conceive things in super-ecological terms. Thus the value of the prop-building analogy I posed at the beginning.

Disney’s massive Manhattan anamorph depends on the viewer’s perspectival position within the installation to assure the occlusion of incompatible information. The degrees of cognitive freedom this position possesses—basically, how far one can wander this way and that—depends on the size and sophistication of the anamorph. The stability of illusion, in other words, entirely depends on the viewer: the deeper one investigates, the less stable the anamorph becomes. Their dependence on cognitive ‘sweet spots’ is their signature vulnerability.

The cognitive fragility of the anamorph, however, resides in the fact that we can move, while it cannot. Overcoming this fragility, then, either requires 1) de-animating observation, 2) complicating the anamorph, or 3) animating the anamorph. The problem we face can be understood as the problem of adaptive cognitive anamorphosis, the way cognitive science, in combination with cognitive technology, enables the de-animation of information consumers by gaming sociocognitive cues, while both complicating and animating the artifactual anamorphic information they consume.

Once a certain threshold is crossed, Sarah Huckabee Sanders can lie without shame or apology on national television. We don’t know what we don’t know. McIntyre references the notorious Dunning-Kruger effect, the way cognitive incompetence correlates with incompetent assessments of competence, but the underlying mechanism is more basic: cognitive systems lacking access to information function independent of that information. Medial neglect assures we take the sufficiency of our perspectives for granted absent information indicating insufficiency or ‘medial misalignment.’ Trusting our biology and community is automatic. Perhaps we refuse to move, to even consider the information belonging to:

But if we do move, the anamorph, thanks to cognitive technology, adapts: the prop-facades grow prop sides, and the deep (globally synchronized) information presented above has to compete with ‘faux deep’ information. The question becomes one of who has been systematically deceived—a question that ingroup biases have already answered in illusion’s favour. We can return to our less inquisitive peers and assure them they were right all along.

What is ‘post-truth’? Insofar as it names anything, it refers to the diminishing capacity of globally, versus locally, synchronized claims to drive public discourse. It’s almost as if, via technology, nature is retooling itself to conceal itself by creating adaptive ‘faux realities.’ It’s all artifactual, all biologically ‘constructed’: the question is whether our cognitive predicament facilitates global (or deep) synchronization geared to what happens to be the case, or facilitates local (or shallow) synchronization geared to ingroup expectations and hidden political and commercial interests.

There’s no contest between spooky correspondence and spooky construction. There’s no ‘assertion of ideological supremacy,’ just cognitive critters (us) stranded in a rapidly transforming cognitive ecology that has become too sophisticated to see, and too powerful to credit. Post-truth, in other words, is an inevitable consequence of scientific progress, particularly as it pertains to cognitive technologies.

Sarah Huckabee Sanders can lie without shame or apology on national television because Trump was able to lure millions of Americans across a radically transformed (and transforming) anamorphic threshold. And we should find this terrifying. Most doomed democracies elect their executioner. In his The Death of Democracy: Hitler’s Rise to Power, Benjamin Carter Hett blames the success of Nazism on the “reality deficit” suffered by the German people. “Hostility to reality,” he writes, “translated into contempt for politics, or, rather, desire for a politics that was somehow not political: a thing that can never be” (14). But where Germany in the 1930s had every reason to despise the real, “a lost war that had cost the nation almost two million of her sons, a widely unpopular revolution, a seemingly unjust peace settlement, and economic chaos accompanied by huge social and technological change” (13), America finds itself suffering only the last of these. The difference lies in the way technological change allows for the cultivation and exploitation of this hostility in an age of unparalleled peace and prosperity. In the German case, the reality itself drove the populace to embrace atavistic political fantasies. Thanks to technology, we can now achieve the same effect using only human cognitive shortcomings and corporate greed.

Buckle up. No matter what happens to Trump, the social dysfunction he expresses belongs to the very structure of our civilization. Competition for the market he’s identified is only going to intensify.

 

Killing Bartleby (Before It’s Too Late)

by rsbakker

Why did I not die at birth,

come forth from the womb and expire?

Why did the knees receive me?

Or why the breasts, that I should suck?

For then I should have lain down and been quiet;

I should have slept; then I should have been at rest,

with kings and counselors of the earth

who rebuilt ruins for themselves…

—Job 3:11-14 (RSV)

 

“Bartleby, the Scrivener: A Story of Wall-Street”: I made the mistake of rereading this little gem a few weeks back. Section I, below, retells the story with an eye to heuristic neglect. Section II leverages this retelling into a critique of readings, like those belonging to the philosophers Gilles Deleuze and Slavoj Zizek, that fall into the narrator’s trap of exceptionalizing Bartleby. If you happen to know anyone interested in Bartleby criticism, by all means encourage them to defend their ‘doctrine of assumptions.’

 

I

The story begins with the unnamed narrator identifying two ignorances, one social and the other personal. The first involves Bartleby’s profession, that “somewhat singular set of men, of whom as yet nothing that I know of has ever been written.” Human scriveners, like human computers, hail from a time when social complexities demanded the undertaking of mechanical cognitive labours, the discharge of tasks too procedural to rest easy in the human soul. Copies are all the ‘system’ requires of them, pure documentary repetition. It isn’t so much that their individuality does not matter, but that it matters too much, perturbing (‘blotting’) the function of the whole. So far as social machinery is legal machinery, you could say law-copyists belong to the neglected innards of mid-19th century society. Bartleby belongs to what might be called the caste of the most invisible men.

What makes him worthy of literary visibility turns on a second manifestation of ignorance, this one belonging to the narrator. “What my own astonished eyes saw of Bartleby,” he tells us, “that is all I know of him, except, indeed, one vague report which will appear in the sequel.” And even though the narrator thinks this interpersonal inscrutability constitutes “an irreparable loss to literature,” it turns out to be the very fact upon which the literary obsession with “Bartleby, the Scrivener” hangs. Bartleby is so visible because he is the most hidden of the hidden men.

Since comprehending the dimensions of a black box buried within a black box is impossible, the narrator has no choice but to illuminate the latter, to provide an accounting of Bartleby’s ecology: “Ere introducing the scrivener, as he first appeared to me, it is fit I make some mention of myself, my employees, my business, my chambers, and general surroundings; because some such description is indispensable to an adequate understanding of the chief character about to be presented.” In a sense, Bartleby is nothing apart from his ultimately profound impact on this ecology, such is his mystery.

Aside from inklings of pettiness, the narrator’s primary attribute, we learn, is also invisibility, the degree to which he disappears into his social syntactic role. “I am one of those unambitious lawyers who never addresses a jury, or in any way draws down public applause; but in the cool tranquility of a snug retreat, do a snug business among rich men’s bonds and mortgages and title-deeds,” he tells us. “All who know me, consider me an eminently safe man.” He is, in other words, the part that does not break down, and so, like Heidegger’s famed hammer, never becomes something present to hand, an object of investigation in his own right.

His description of his two existing scriveners demonstrates that his ‘safety’ is to some extent rhetorical, consisting in his ability to explain away inconsistencies, real or imagined. Between Turkey’s afternoon drunkenness and Nippers’ foul morning temperament, you could say his office is perpetually compromised, but the narrator chooses to characterize it otherwise, in terms of each man mechanically cancelling out the incompetence of the other. “Their fits relieved each other like guards,” the narrator informs us, resulting in “a good natural arrangement under the circumstances.”

He depicts what might be called an economy of procedural and interpersonal reflexes, a deterministic ecology consisting of strictly legal or syntactic demands, all turning on the irrelevance of the discharging individual, the absence of ‘blots,’ and a stochastic ecology of sometimes conflicting personalities. Not only does he instinctively understand the insoluble nature of the latter, he also understands the importance of apology, the power of language to square those circles that refuse to be squared. When he comes “within an ace” of firing Turkey, the drunken scrivener need only bow and say what amounts to nothing to mollify his employer. As with bonds and mortgages and title-deeds, the content does not so much matter as does the syntax, the discharge of social procedure. Everyone in his office “up stairs at No.—Wall-street” is a misfit, and the narrator is a compulsive ‘fitter,’ forever searching for ways to rationalize, mythologize, and so normalize, the idiosyncrasies of his interpersonal circumstances.

And of course, he and his fellows are entombed by the walls of Wall Street, enjoying ‘unobstructed views’ of obstructions. Theirs is a subterranean ecology, every bit as “deficient in what landscape painters call ‘life’” as the labour that consumes them.

Enter Bartleby. “After a few words touching his qualifications,” the narrator informs us, “I engaged him, glad to have among my corps of copyists a man of so singularly sedate an aspect, which I thought might operate beneficially upon the flighty temper of Turkey, and the fiery one of Nippers.” Absent any superficial sign of idiosyncrasy, he seems the perfect ecological fit. The narrator gives the man a desk behind a screen in his own office, a corner possessing a window upon obstruction.

After three days, he calls out to Bartleby to examine the accuracy of a document, reflexively assuming the man would discharge the task without delay, only to hear Bartleby, obscure behind his green screen, say the fateful words that would confound, not only our narrator, but countless readers and critics for generations to come: “I would prefer not to.” The narrator is gobsmacked:

“I sat awhile in perfect silence, rallying my stunned faculties. Immediately it occurred to me that my ears had deceived me, or Bartleby had entirely misunderstood my meaning. I repeated my request in the clearest tone I could assume. But in quite as clear a one came the previous reply, “I would prefer not to.””

Given the “natural expectancy of instant compliance,” the narrator assumes the breakdown is communicative. When he realizes this isn’t the case, he confronts Bartleby directly, to the same effect:

“Not a wrinkle of agitation rippled him. Had there been the least uneasiness, anger, impatience or impertinence in his manner; in other words, had there been any thing ordinarily human about him, doubtless I should have violently dismissed him from the premises. But as it was, I should have as soon thought of turning my pale plaster-of-paris bust of Cicero out of doors.”

Realizing that he has been comprehended, the narrator assumes willful defiance, that Bartleby seeks to provoke him, and that, accordingly, the man will present the cues belonging to interpersonal power struggles more generally. When Bartleby manifests none of these signs, the hapless narrator lacks the social script he requires to solve the problem. Turning out the scrivener becomes as unthinkable as surrendering his bust of Cicero, which is to say, the very emblem of his legal vocation.

The next time Bartleby refuses to read, the narrator demands an explanation, asking, “Why do you refuse?” To which Bartleby replies, once again, “I would prefer not to.” When the narrator presses, resolved “to reason with him,” he realizes that dysrationalia is not the problem: “It seemed to me that while I had been addressing him, he carefully revolved every statement that I made; fully comprehended the meaning; could not gainsay the irresistible conclusions; but, at the same time, some paramount consideration prevailed with him to reply as he did.”

If Bartleby were non compos mentis, then he could be ‘medicalized,’ reduced to something the narrator would find intelligible—something providing some script for action. Instead, the scrivener understands, or manifests as much, leaving the narrator groping for evidence of his own rationality:

“It is not seldom the case that when a man is browbeaten in some unprecedented and violently unreasonable way, he begins to stagger in his own plainest faith. He begins, as it were, vaguely to surmise that, wonderful as it may be, all the justice and all the reason is on the other side. Accordingly, if any disinterested persons are present, he turns to them for some reinforcement for his own faltering mind.”

For a claim to be rational it must be rational to everyone. Each of us is stranded with our own perspective, and each of us possesses only the dimmest perspective on that perspective: rationality is something we can only assume. This is why ‘truth’ (especially in ‘normative’ matters (politics)) so often amounts to a ‘numbers game,’ a matter of tallying up guesses. Our blindness to our cognitive orientation—medial neglect—combined with the generativity of the human brain and the capriciousness of our environments, requires the communicative policing of cognitive idiosyncrasies. Whatever rationality consists in, minimally it functions to minimize discrepancies between individuals, sometimes vis-à-vis their environments and sometimes not. Reason, like the narrator, makes things fit.

The ‘disinterested persons’ the narrator turns to are themselves misfits, with “Nippers’ ugly mood on duty and Turkey’s off.” The irony here, and what critics are prone to find most interesting, is that the three are anything but disinterested. The more thought-provoking fact, however, lies in the way they agree with their employer despite the wild variance of their answers. For all the idiosyncrasies of its constituents, the office ecology automatically manages to conserve its ‘paramount consideration’: functionality.

Baffled unto inaction, the narrator suffers bouts of explaining away Bartleby’s discrepancies in terms of his material and moral utilities. The fact of his indulgences alternately congratulates and exasperates him: Bartleby becomes (and remains) a bi-stable sociocognitive figure, alternately aggressor and victim. “Nothing so aggravates an earnest person as a passive resistance,” the narrator explains. “If the individual so resisted be of a not inhumane temper, and the resisting one perfectly harmless in his passivity; then, in the better moods of the former, he will endeavor charitably to construe to his imagination what proves impossible to be solved by his judgment.” To be earnest is to be prone to minimize social discrepancies, to optimize via the integrations of others. The passivity of “I would prefer not to” poises Bartleby upon a predictive-processing threshold, one where the vicissitudes of mood are enough to transform him from a ‘penniless wight’ into a ‘brooding Marius’ and back again. The signals driving the charitable assessment are constantly interfering with the signals driving the uncharitable assessment, forcing the different neural hypotheses to alternate.

Via this dissonance, the scrivener begins to train him, with each “I would prefer not to” tending “to lessen the probability of [his] repeating the inadvertence.”

The ensuing narrative establishes two facts. First, we discover that Bartleby belongs to the office ecology, and in a manner more profound than even the narrator, let alone any one of his employees. Discovering Bartleby indisposed in his office on a Sunday, the narrator finds himself fleeing his own premises, alternately lost in “sad fancyings—chimeras, doubtless, of a sick and silly brain” and “[p]resentiments of strange discoveries”—strung between delusion and revelation.

Second, we learn that Bartleby, despite belonging to the office ecology, nevertheless signals its ruination:

“Somehow, of late I had got into the way of involuntarily using this word “prefer” upon all sorts of not exactly suitable occasions. And I trembled to think that my contact with the scrivener had already and seriously affected me in a mental way. And what further and deeper aberration might it not yet produce?”

When the narrator catches Turkey also saying “prefer,” he says, “So you have got the word too,” as if a verbal tic could be caught like a cold. Turkey manifests cryptomnesia. Nippers does the same not moments afterward—every bit as unconsciously as Turkey. Knowing nothing of the way humans have evolved to unconsciously copy linguistic behaviour, the narrator construes Bartleby as a kind of contagion—or pollutant, a threat to his delicately balanced office ecology. He once again determines he must rid his office of the scrivener’s insidious influence, but, under that influence, once again allows prudence—or the appearance of such—to dissuade immediate action.

Bartleby at last refuses to copy, irrevocably undoing the foundation of the narrator’s ersatz rationalizations. “And what is the reason?” the narrator demands to know. Staring at the brick wall just beyond his window, Bartleby finally offers a different explanation: “Do you not see the reason for yourself.” Though syntactically structured as a question, this statement possesses no question mark in Melville’s original version (as it does, for instance, in the version anthologized by Norton). And indeed, the narrator misses the very reason implied by his own narrative—the wall that occupied so many of Bartleby’s reveries—and confabulates an apology instead: work-induced ‘impaired vision.’

But this rationalization, like all the others, is quickly exhausted. The internal logic of the office ecology is entirely dependent on the logic of Wall-street: the text continually references the functional exigencies commanding the ebb and flow of their lives, the way “necessities connected with my business tyrannized over all other considerations.” The narrator, when all is said and done, is an instrument of the Law and the countless institutions dependent upon it. At long last he fires Bartleby rather than merely resolving to do so.

He celebrates his long-deferred decisiveness while walking home, only to once again confront the blank wall the scrivener has become:

“My procedure seemed as sagacious as ever—but only in theory. How it would prove in practice—there was the rub. It was truly a beautiful thought to have assumed Bartleby’s departure; but, after all, that assumption was simply my own, and none of Bartleby’s. The great point was, not whether I had assumed that he would quit me, but whether he would prefer so to do. He was more a man of preferences than assumptions.”

And so, the great philosophical debate, both within the text and its critical reception, is set into motion. Lost in rumination, the narrator overhears someone say, “I’ll take odds he doesn’t,” on the street, and angrily retorts, assuming the man was referring to Bartleby, and not, as was actually the case, to an upcoming election. Bartleby’s ‘passive resistance’ has so transformed his cognitive ecology as to crash his ability to make sense of his fellow man. Meaning, at least so far as it exists in his small pocket of the world, has lost its traditional stability.

Of course, the stranger’s voice, though speaking of a different matter altogether, had spoken true. Bartleby prefers not to leave the office that has become his home.

“What was to be done? or, if nothing could be done, was there any thing further that I could assume in the matter? Yes, as before I had prospectively assumed that Bartleby would depart, so now I might retrospectively assume that departed he was. In the legitimate carrying out of this assumption, I might enter my office in a great hurry, and pretending not to see Bartleby at all, walk straight against him as if he were air. Such a proceeding would in a singular degree have the appearance of a home-thrust. It was hardly possible that Bartleby could withstand such an application of the doctrine of assumptions.”

The ‘home-thrust,’ in other words, is to simply pretend, to physically enact the assumption of Bartleby’s absence, to not only ignore him, but to neglect him altogether, to the point of walking through him if need be. “But upon second thoughts the success of the plan seemed rather dubious,” the narrator realizes. “I resolved to argue the matter over with him again,” even though argument, Sellars’ famed ‘game of giving and asking for reasons,’ is something Bartleby prefers not to recognize.

When the application of reason fails once again, the narrator at last entertains the thought of killing Bartleby, realizing “the circumstance of being alone in a solitary office, up stairs, of a building entirely unhallowed by humanizing domestic associations” is one tailor-made for the commission of murder. Even acts of evil have their ecological preconditions. But rather than seize Bartleby, he ‘grapples and throws’ the murderous temptation, recalling the Christian injunction to love his neighbour. As research suggests, imagination correlates with indecision, the ability to entertain (theorize) possible outcomes: the narrator is nothing if not an inspired social confabulator. For every action-demanding malignancy he ponders, his aversion to confrontation occasions another reason for exemption, which is all he needs to reduce the discrepancies posed.

He resigns himself to the man:

“Gradually I slid into the persuasion that these troubles of mine touching the scrivener, had been all predestinated from eternity, and Bartleby was billeted upon me for some mysterious purpose of an all-wise Providence, which it was not for a mere mortal like me to fathom. Yes, Bartleby, stay there behind your screen, thought I; I shall persecute you no more; you are harmless and noiseless as any of these old chairs; in short, I never feel so private as when I know you are here. At last I see it, I feel it; I penetrate to the predestinated purpose of my life. I am content. Others may have loftier parts to enact; but my mission in this world, Bartleby, is to furnish you with office-room for such period as you may see fit to remain.”

But this story, for all its grandiosity, likewise melts before the recalcitrant scrivener. The comical notion that furnishing Bartleby an office could have cosmic significance merely furnishes a means of ignoring what cannot be ignored: how the man compromises, in ways crude and subtle, the systems of assumptions, the network of rational reflexes, comprising the ecology of Wall-street. In other words, the narrator’s clients are noticing…

“Then something severe, something unusual must be done. What! surely you will not have him collared by a constable, and commit his innocent pallor to the common jail? And upon what ground could you procure such a thing to be done?—a vagrant, is he? What! he a vagrant, a wanderer, who refuses to budge? It is because he will not be a vagrant, then, that you seek to count him as a vagrant. That is too absurd. No visible means of support: there I have him. Wrong again: for indubitably he does support himself, and that is the only unanswerable proof that any man can show of his possessing the means so to do.”

At last invisibility must be sacrificed, and regularity undone. The narrator ratchets through the facts of the scrivener’s cognitive bi-stability. An innocent criminal. An immovable vagrant. Unsupported yet standing. Reason itself cracks about him. And what reason cannot touch only fight or flight can undo. If the ecology cannot survive Bartleby, and Bartleby is immovable, then the ecology must be torn down and reestablished elsewhere.

It’s tempting to read this story in ‘buddy terms,’ to think that the peculiarities of Bartleby only possess the power they do given the peculiarities of the narrator. (One of the interesting things about the yarn is the way it both congratulates and insults the neuroticism of the critic, who, having canonized Bartleby, cannot but flatter themselves both by thinking they would have endured Bartleby the way the narrator does, and by thinking that surely they wouldn’t be so disabled by the man). The narrator’s decision to relocate allows us to see the universality of his type, how others possessing far less history with the scrivener are themselves driven to apologize, to exhaust all ‘quiet’ means of minimizing discrepancies. “[S]ome fears are entertained of a mob,” his old landlord warns him, desperate to purge the scrivener from No.—Wall-street.

Threatened with exposure in the papers—visibility—the narrator once again confronts Bartleby the scrivener. This time he comes bearing possibilities of gainful employment, greener pastures, some earnest, some sarcastic, only to be told, “I would prefer not to,” with the addition of, “I am not particular.” And indeed, as Bartleby’s preference severs ever more ecological connections, he seems to become ever more super-ecological, something outside the human communicative habitat. Repulsed yet again, the narrator flees Wall-street altogether.

Bartleby, meanwhile, is imprisoned in the Tombs, the name given to the House of Detention in lower Manhattan. A walled street is replaced by a walled yard—which, the narrator will tell Bartleby, “is not so sad a place as one might think,” the irony being, of course, that with sky and grass the Tombs actually represent an improvement over Wall-street. Bartleby, for his part, only has eyes for the walls—his unobstructed view of obstruction. To ensure his former scrivener is well fed, the narrator engages the prison cook, who asks him whether Bartleby is a forger, likening the man to Monroe Edwards, a famed slave-trader and counterfeiter in Melville’s day. Despite the criminal connotations of Nippers, the narrator assures the man he was “never socially acquainted with any forgers.”

On his next visit, he discovers that Bartleby’s metaphoric ‘dead wall reveries’ have become literal. The narrator finds him “huddled at the base of the wall, his knees drawn up, and lying on his side, his head touching the cold stones,” dead of starvation. Cutting the last, most fundamental ecological reflex of all—the consumption of food—Bartleby has finally touched the face of obstruction… oblivion.

The story proper ends with one last misinterpretation: the cook assuming that Bartleby sleeps. And even here, at this final juncture, the narrator apologizes rather than corrects, quoting Job 3:14, using the Holy Bible, perhaps, to “mason up his remains in the wall.” Melville, however, seems to be gesturing to the fundamental problem underwriting the whole of his tale, the problem of meaning, quoting a fragment of Job in extremis, asking God why he should have been born at all, if his lot was only desolation. What meaning resides in such a life? Why not die an innocent?

Like Bartleby.

What the narrator terms the “sequel” consists of no more than two paragraphs (set apart by a ‘wall’ of eight asterisks), the first divulging “one little item of rumor” which may or may not be more or less true, the second famously consisting in, “Ah Bartleby! Ah humanity!” The rumour occasioning these apostrophic cries suggests “that Bartleby had been a subordinate clerk in the Dead Letter Office at Washington, from which he had been suddenly removed by a change of administration.”

What moves the narrator to passions too complicated to scrutinize is nothing other than the ecology of such a prospect: “Conceive a man by nature and misfortune prone to a pallid hopelessness, can any business seem more fitted to heighten it than that of continually handling these dead letters, and assorting them for the flames?” Here at last, he thinks, we find some glimpse of the scrivener’s original habitat: dead letters potentially fund the reason the man forever pondered dead walls. Rather than a forger, one who cheats systems, Bartleby is an undertaker, one who presides over their crashing. The narrator paints his final rationalization, Bartleby mediating an ecology of fatal communicative interruptions:

“Sometimes from out the folded paper the pale clerk takes a ring:—the finger it was meant for, perhaps, moulders in the grave; a bank-note sent in swiftest charity:—he whom it would relieve, nor eats nor hungers any more; pardon for those who died despairing; hope for those who died unhoping; good tidings for those who died stifled by unrelieved calamities. On errands of life, these letters speed to death.”

An ecology, in other words, consisting of quotidian ecological failures, life lost for the interruption of some crucial material connection, be it ink or gold. Thus are Bartleby and humanity entangled in the failures falling out of neglect: the idiosyncratic (the addresses improperly copied) and the ill-timed (the words addressed to those already dead). A meta-ecology where discrepancies can never be healed, only consigned to oblivion.

But, of course, were Bartleby still living, this ‘sad fancying’ would likewise turn out to be a ‘chimera of a sick and silly brain.’ Just another way to brick over the questions. If the narrator finds consolation, the wreckage of his story remains.

 

II

I admit that I feel more like Ahab than Ishmael… most of the time. But I’m not so much obsessed by the White Whale as by what is obliterated when it’s revealed as yet another mere cetacean. Be it the wrecking of The Pequod, or the flight of the office at No.— Wall-street, the problem of meaning is my White Whale. “Bartleby, the Scrivener” is compelling, I think, to the degree it lends that problem the dimensionality of narrative.

Where in Moby-Dick, the relation between the inscrutable and the human is presented via Ishmael, which is to say the third person, in Bartleby, the relation is presented in the first: the narrator is Ahab, every bit as obsessed with his own pale emblem of unaccountable discrepancy—every bit as maddened. The violence is merely sublimated in quotidian discursivity.

The labour of Ishmael falls to the critic. “Life is so short, and so ridiculous and irrational (from a certain point of view),” Melville writes to John C. Hoadley in 1877, “that one knows not what to make of it, unless—well, finish the sentence for yourself.” A great many critics have, spawning what Dan McCall termed (some time ago now) the ‘Bartleby Industry.’ There are so many interpretations, in fact, that the only determinate thing one can say regarding the text is that it systematically underdetermines every attempt to determine its ‘meaning.’

In the ecology of literary and philosophical critique, Bartleby remains a crucial watering hole in an ever-shrinking reservation of the humanities. A great number of these interpretations share the narrator’s founding assumption, that Bartleby—the character—represents something exceptional. Consider, for instance, Deleuze in “Bartleby; or, the Formula.”

“If Bartleby had refused, he could still be seen as a rebel or insurrectionary, and as such would still have a social role. But the formula stymies all speech acts, and at the same time, it makes Bartleby a pure outsider [exclu] to whom no social position can be attributed. This is what the attorney glimpses with dread: all his hopes of bringing Bartleby back to reason are dashed because they rest on a logic of presuppositions according to which an employer ‘expects’ to be obeyed, or a kind friend listened to, whereas Bartleby has invented a new logic, a logic of preference, which is enough to undermine the presuppositions of language as a whole.” 73

Or consider Zizek, who uses Bartleby to conclude The Parallax View no less:

“In his refusal of the Master’s order, Bartleby does not negate the predicate; rather, he affirms a nonpredicate: he does not say that he doesn’t want to do it; he says that he prefers (wants) not to do it. This is how we pass from the politics of “resistance” or “protestation,” which parasitizes upon what it negates, to a politics which opens up a new space outside the hegemonic position and its negation.” 380-1

Bartleby begets ‘Bartleby politics,’ the possibility of a relation to what stands outside relationality, a “move from something to nothing, from the gap between two ‘somethings’ to the gap that separates a something from nothing, from the void of its own place” (381). Bartleby isn’t simply an outsider on this account, he’s a pure outsider, more limit than liminal. And this, of course, is the very assumption that the narrator himself carries away intact: that Bartleby constitutes something ontologically or logically exceptional.

I no longer share this assumption. Like Borges in his “Prologue to Herman Melville’s ‘Bartleby,’” I see that “the symbol of the whale is less apt for suggesting the universe is vicious than for suggesting its vastness, its inhumanity, its bestial or enigmatic stupidity.” Melville, for all the wide-eyed grandiloquence of his prose, was a squinty-eyed skeptic. “These men are all cracked right across the brow,” he would write of philosophers such as Emerson. “And never will the pullers-down be able to cope with the builders-up.” For him, the interest always lies in the distances between lofty discourse and the bloody mundanities it purports to solve. As he writes to Hawthorne in 1851:

“And perhaps after all, there is no secret. We incline to think that the Problem of the Universe is like the Freemason’s mighty secret, so terrible to all children. It turns out, at last, to consist in a triangle, a mallet, and an apron—nothing more! We incline to think that God cannot explain His own secrets, and that He would like a little more information upon certain points Himself. We mortals astonish Him as much as He us.”

It’s an all too human reflex. Ignorance becomes justification for the stories we want to tell, and we are filled with “oracular gibberish” as a result.

So what if Bartleby holds no secrets outside the ‘contagion of nihilism’ that Borges ascribes to him?

As a novelist, I cannot but read the tale, with its manifest despair and gallows humour, as the expression of another novelist teetering on the edge of professional ruin. Melville conceived and wrote “Bartleby, the Scrivener” during a dark period of his life. Both Moby-Dick and Pierre had proved to be critical and commercial failures. As Melville would write to Hawthorne:

“What I feel most moved to write, that is banned—it will not pay. Yet, altogether write the other way I cannot. So the product is a final hash, and all my books are botches.”

Forgeries, neither artistic nor official. Two species of neuroticism plague full-time writers, particularly if they possess, as Melville most certainly did, a reflective bent. There’s the neuroticism that drives a writer to write, the compulsion to create, and there’s the neuroticism secondary to a writer’s consciousness of this prior incapacity, the neurotic compulsion to rationalize one’s neuroticism.

Why, for instance, am I writing this now? Am I a literary critic? No. Am I being paid to write this? No. Are there things I should be writing instead? Buddy, you have no idea. So why don’t I write as I should?

Well, quite simply, I would prefer not to.

And why is this? Is it because I have some glorious spark in me? Some essential secret? Am I, like Bartleby, a pure outsider?

Or am I just a fucking idiot? A failed copyist.

For critics, the latter is pretty much the only answer possible when it comes to living writers who genuinely fail to copy. No matter how hard we wave discrepancy’s flag, we remain discrepancy minimization machines—particularly where social cognition is concerned. Living literary dissenters cue reflexes devoted to living threats: the only good discrepancy is a dead discrepancy. As the narrator discovers, attributing something exceptional becomes far easier once the dissenter is dead. Once the source falls silent, the consequences possess the freedom to dispute things as they please.

Writers themselves, however, discover they are divided, that Ahab is not Ahab, but Ishmael as well, the spinner of tales about tales. A failed copyist. A hapless lawyer. Gazing at obstruction, chasing the whale, spinning rationalization after rationalization, confabulating as a human must, taking meagre heart in spasms of critical fantasy.

Endless interpretative self-deception. As much as I recognize Bartleby, I know the narrator only too well. This is why for me, “Bartleby, the Scrivener” is best seen as a prank on the literary establishment, a virus uploaded with each and every Introduction to American Literature class, one assuring that the critic forever bumbles as the narrator bumbles, waddling the easy way, the expected way, embodying more than applying the ‘doctrine of assumptions.’ Bartleby is the paradigmatic idiot, both in the ancient Greek sense of idios, private unto inscrutable, and idiosyncratic unto useless. But for the sake of vanity and cowardice, we make of him something vast, more than a metaphor for x. The character of Bartleby, on this reading, is not so much key to understanding something ‘absolute’ as he is key to understanding human conceit—which is to say, the confabulatory stupidity of the critic.

But explaining the prank, of course, amounts to falling for the prank (this is the key to its power). No matter how mundane one’s interpretation of Bartleby, as an authorial double, as a literary prank, it remains simply one more interpretation, further evidence of the narrative’s profound indeterminacy. ‘Negative exceptionalists’ like Deleuze or Zizek (or Agamben) need only point out this fact to rescue their case—don’t they? Even if Melville conceived Bartleby as his neurotic alter-ego, the word-crazed husband whose unaccountable preferences had reduced his family to penury (and so, charity), he nonetheless happened upon “a zone of indetermination or indiscernibility in which neither words nor characters can be distinguished” (“Bartleby, or the Formula,” 76).

No matter how high one stacks their mundane interpretations of Bartleby—as an authorial alter-ego, a psycho-sociological casualty, an exemplar of passive resistance, or so on—his rationality-crashing function remains every bit as profound, as exceptional. Doesn’t it? After all, nothing essential binds the distal intent of the author (itself nothing but another narrative) to the proximate effect of the text, which is to “send language itself into flight” (76). Once we set aside the biographical, psychological, historical, economic, political, and so on, does not this formal function remain? And is it not irreducible, exceptional?

That depends on whether you think a Necker Cube is exceptional. What should we say about Necker Cubes? Do they mark the point where the visibility of the visible collapses, generating ‘a zone of indetermination or indiscernibility in which neither indents nor protrusions can be distinguished’? Are they ‘pure figures,’ efficacies that stand outside the possibility of intelligible geometry? Or do they merely present the visual cortex with the demand to distinguish between indents and protrusions absent the information required to settle that demand, thus stranding visual experience upon the predictive threshold of both? Are they simply bi-stable images?

The first explanation pretty clearly mistakes a heuristic breakdown in the cognition of visual information for an exceptional visual object, something intrinsically indeterminate—something super-geometrical, in fact. When we encounter something visually indeterminate, we immediately blame our vision, which is to say, the invisible, enabling dimension of visual cognition. Visual discrepancies had real reproductive consequences, evolutionarily speaking. Thanks to medial neglect, we had no way of cognizing the ecological nature of vision, so we could only blink, peer, squint, rub our eyes, or change our position. If the discrepancy persisted, we wondered at it, and if we could, transformed it into something useful—be it cuing environmental forms on cave or cathedral walls (‘visual representations’) or cuing wonder with kaleidoscopes at Victorian exhibitions.

Likewise, Deleuze and Zizek (and many, many others) are mistaking a heuristic breakdown in the cognition of social information for an exceptional social entity, something intrinsically indeterminate—something super-social. Imagine encountering a Bartleby in your own place of employ. Imagine your employer not simply tolerating him, but enabling him, allowing him to drift ever deeper into anorexic catatonia. Initially, when we encounter something socially indeterminate in vivo, we typically blame communication—as does the narrator with Bartleby. Social discrepancies, one might imagine, had profound reproductive consequences (given that reproduction is itself social). The narrator’s sensitivity to such discrepancies is the sensitivity that all of us share. Given medial neglect, however, we have no way of cognizing the ecological nature of social cognition. So we check with our colleagues just to be sure (‘Am I losing my mind here?’), then we blame the breakdown in rational reflexes on the man himself. We gossip, test out this or that pet theory, pester spouses who, insensitive to potential micropolitical discrepancies, urge us to file a complaint with someone somewhere. Eventually, we either quit the place, get the poor sod some help, or transform him into something useful, like “Bartleby politics” or what have you. This is the prank that Melville lays out with the narrator—the prank that all post-modern appropriations of this tale trip into headlong…

The ecological nature of cognition entails the blindness of cognition to its ecological nature. We are distributed systems: we evolved to take as much of our environments for granted as we possibly could, accessing as little as possible to solve as many problems as possible. Experience and cognition turn on shallow information ecologies, blind systems turning on reliable (because reliably generated) environmental frequencies to solve problems—especially communicative problems. Absent the requisite systems and environments, these ecologies crash, resulting in the application of cognitive systems to situations they cannot hope to solve. Those who have dealt with addicted or mentally-ill loved ones know the profundity of these crashes first-hand, the way the unseen reflexes (‘preferences’) governing everyday interactions cast you into dismay and confusion time and again, all for want of applicability. There’s the face, the eyes, all the cues signaling them as them, and then… everything collapses into mealy alarm and confusion. Bartleby, with his dissenting preference, does precisely the same: Melville provides exquisite experiential descriptions of the dumbfounding characteristic of sociocognitive crashes.

Bartleby need not be a ‘pure outsider’ to do this. He just needs to provide enough information to demand disambiguation, but not enough information to provide it. “I would prefer not to”—Bartleby’s ‘formula,’ according to Deleuze—is anything but ‘minimal’: its performance functions the way it does because of the intricate communicative ecology it belongs to. But given medial neglect, our blindness to ecology, the formula is prone to strike us as something quite different, as something possessing no ecology.

It certainly strikes Deleuze as such:

“The formula is devastating because it eliminates the preferable just as mercilessly as any nonpreferred. It not only abolishes the term it refers to, and that it rejects, but also abolishes the other term it seemed to preserve, and that becomes impossible. In fact, it renders them indistinct: it hollows out an ever expanding zone of indiscernibility or indetermination between some nonpreferred activities and a preferable activity. All particularity, all reference is abolished.” 71

Since preferences affirm, ‘preferring not to’ (expressed in the subjunctive no less) can be read as an affirmative negation: it affirms the negation of the narrator’s request. Since nothing else is affirmed, there’s a peculiar sense in which ‘preferring not to’ possesses no reference whatsoever. Medial neglect assures that reflection on the formula occludes the enabling ecology, that asking what the formula does will result in fetishization, the attribution of efficacy in an explanatory vacuum. Suddenly ‘preferring not to’ appears to be a ‘semantic disintegration grenade,’ something essentially disruptive.

In point of natural fact, however, human sociocognition is fundamentally interactive, consisting in the synchronization of radically heuristic systems given only the most superficial information. Understanding one another is a radically interdependent affair. Bartleby presents all the information cuing social reliability, thereby consistently cuing predictions of reliability that turn out to be faulty. The narrator subsequently rummages through the various tools we possess to solve harmless acts of unreliability given medial neglect—tools which have no applicability in Bartleby’s case. Not only does Bartleby crash the network of predictive reflexes constituting the office ecology, he crashes the sociocognitive hacks that humans in general use to troubleshoot such breakdowns. He does so, not because of some arcane semantic power belonging to the ‘formula,’ but because he manifests as a sociocognitive Necker Cube, cuing noncoercive troubleshooting routines that have no application given whatever his malfunction happens to be.

This is the profound human fact that Melville’s skeptical imagination fastened upon, as well as the reason Bartleby is ‘nothing in particular’: all human social cognition is fundamentally ecological. Consider, once again, the passage where the narrator entertains the possibility of neglecting Bartleby altogether, simply pretending he was absent:

“What was to be done? or, if nothing could be done, was there any thing further that I could assume in the matter? Yes, as before I had prospectively assumed that Bartleby would depart, so now I might retrospectively assume that departed he was. In the legitimate carrying out of this assumption, I might enter my office in a great hurry, and pretending not to see Bartleby at all, walk straight against him as if he were air. Such a proceeding would in a singular degree have the appearance of a home-thrust. It was hardly possible that Bartleby could withstand such an application of the doctrine of assumptions. But upon second thoughts the success of the plan seemed rather dubious. I resolved to argue the matter over with him again.”

Having reached the limits of sociocognitive application, he proposes simply ignoring any subsequent failure in prediction, in effect, wishing the Bartlebian crash space away. The problem, of course, is that it ‘takes two to tango’: he has no choice but to ‘argue the matter again’ because the ‘doctrine of assumptions’ is interactional, ecological. What Melville has fastened upon here is the way the astronomical complexity of the sociocognitive (and metacognitive) systems involved holds us hostage, in effect, to their interactional reliability. Meaning depends on maddening sociocognitive intricacies.

The entirety of the story illustrates the fragility of this cognitive ecosystem despite its all-consuming power. Time and again Bartleby is characterized as an ecological casualty of the industrialization of social relations, be it the mass disposal of undelivered letters or the mass reproduction of legally binding documentation. Like ‘computer,’ ‘copier’ names something that was once human but has since become technology. But even as Bartleby’s breakdown expresses the system’s power to break the maladapted, it also reveals its boggling vulnerability, the ease with which it evaporates into like-minded conspiracies and ‘mere pretend.’ So long as everyone plays along—functions reliably—this interdependence remains occluded, and the irrationality (the discrepancy-generating stupidity) of the whole never need be confronted.

In other words, the lesson of Bartleby can be profound, as profound as human communication and cognition itself, without implying anything exceptional. Stupidity, blind, obdurate obliviousness, is all that is required. A minister’s black veil, a bit of crepe poised upon the right interactional interface, can throw whole interpretative communities from their pins. The obstruction, the blank wall, need not conceal anything magical to crash the gossamer ecologies of human life. It need only appear to be a window, or more cunning still, a window upon a wall. We need only be blind to the interactional machinery of looking to hallucinate absolute horizons. Blind to the meat of life.

And in this sense, we can accuse the negative exceptionalists such as Deleuze and Zizek not simply of ignoring life, the very topos of literature, but of concealing the threat that the technologization of life poses to life. Only in an ecology can we understand the way victims can at once be assailants absent aporia, how Bartleby, overthrown by the technosocial ecologies of his age, can in turn overthrow that technosocial ecology. Only understanding life for what we know it to be—biological—allows us to see the profound threat the endless technological rationalization of human sociocognitive ecologies poses to the viability of those ecologies. For Bartleby, by revealing the ecological fragility of human social cognition, how break begets break, reveals the antithesis between ‘progress’ and ‘meaning,’ how the former can only carry the latter so far before crashing.

As Deleuze and Zizek have it, Bartleby holds open a space of essential resistance. As the reading here has it, Bartleby provides a grim warning regarding the ecological fragility of human social cognition. One can even look at him as a blueprint for the potential weaponization of anthropomorphic artificial intelligence, systems designed to strand individual decision-making upon thresholds, to command inaction via the strategic presentation of cues. Far from representing some messianic discrepancy, apophatic proof of transcendence, he represents the way we ourselves become cognitive pollutants when abandoned to polluted cognitive ecologies.