Outing the It that Thinks: The Collapse of an Intellectual Ecosystem

[Note: The peer-reviewed version of this paper can be found in Digital Dionysus: Nietzsche and the Network-Centric Condition, 144-160]


For psychology is now once again the road to the fundamental problems.

–Nietzsche, Beyond Good and Evil

I – “The Soul-Hypothesis”

Who kills Hector in The Iliad?

The easy answer is Achilles. After all, he is the one who drives his spear through Hector’s neck, who glories over his dying form, then proceeds to desecrate his corpse.

But if you read Homer carefully, you will see that the death of Hector is in fact a corporate enterprise. We are told shortly after the duel that death and fate seized and dragged him down. And even more curiously, we learn that Hector was “struck down at Achilles’ hands by blazing-eyed Athena.”

The ancient Greeks, it seems, saw themselves–or their heroes, at least–as conduits, as prone to enact the will of more shadowy, supernatural agencies as to act on their own. Perhaps it was the exigencies of their lives, the sense of powerlessness that comes with living at the whim of organized violence and random scarcity. Perhaps it was simply a misplaced humility, or a cunning moral prophylactic, a reluctance to take credit for what could turn into an obligation. Whatever the reason, they were disinclined to see themselves as the sole authors of their thoughts and actions…

The way we are taught to see ourselves. The way I saw myself up to the age of 14, the age my mother made the mistake of buying me an old manual typewriter at a local yard sale. I made the mistake of using it, you see, not just to type out adventures for my weekly Dungeons and Dragons session, but to think things through.

I really can’t remember much of those musings: I like to think that they were packed with the kind of accidental profundity you often find in your students’ naive musings, but I really have no way of knowing. All I know for certain is that my thoughts eventually fastened on the concept of cause. Its ubiquity. Its explanatory power.

And at one point, I typed the following:

Everything has a cause.

A → B → C

A= outer event

B= inner event

C= this very thought now!!!!!!

I had stumbled across determinism. The insight had the character of a religious revelation for me, quite literally. I even wept, realizing not only that everything I had been taught was a lie, but that I was myself a kind of lie. I was an illusion weeping at my own illusoriness. How fucked up was that? Whenever I got high alone, I would listen to Pink Floyd or some-such and just sit staring at my experience, trying to will my way through it, or daring it to show its paltry hand. I became a kind of naive nihilist, blowing away my buddies and alienating all the babes at parties with my arguments against the freedom of will. I would always finish the same way, swinging my arms wide and saying, “It’s all bullshit. All of it. It can’t be and yet it is. Bullshit, through and through!”

Of course, I never stopped believing in the “Bullshit,” as I called it. I was, if anything, quite strident in my moral declarations, and extremely possessive of my ‘choices.’ But nevertheless, a ribbon of despair continually floated in and out of the obscurities that hedged my daily life. I would sigh and look away from all the looked-at things, out a window, or through the fingers of a tree, and just exist in momentary impossibility.

A vacancy absorbing space, as Helen Keller would say.

Later, while at University, I read Heidegger’s Being and Time in an effort to understand deconstruction and Derrida, whom I thought just had to be wrong, whatever it was the crazy bastard was saying. This would be my second religious revelation, one that would ultimately lead to my disastrous tenure as a Branch Derridean. The facticity of my thrownness made a deep impression on me. As did the ontological difference. I realized my earlier revelation was simply that of a naive 14 year-old, one who had been brainwashed by the Encyclopedia of Technology and Innovation that I’d received for Christmas when I was 8. I had made a fetish out of science, failing to see that science had its own historical and conceptual conditions, that it was a skewed artifact, part of the dread “metaphysics of presence.”

Aristotle, man. Had to go fuck things up for everybody.

It was a joyous, heady time for me. Suddenly the world, which had been little more than a skin of mammalian lies whenever I looked with my theoretical eyes, became positively soupy with meaning. Sure, thanks to differance, I could never nail that meaning down with representation, but it was the oh-so-Western urge to nail that was the problem. I had been the proverbial man with a hammer–of course I had seen all questions as ontic nails! At long last I could set aside the conceptual toolbox I had inherited from my well-intentioned, but ultimately deluded Euro-fathers.

Of course, I still waved my arms at parties, but this time the babes seemed to listen. I stared in the mirror saying, “Je ne sais quoi…” I cursed myself for hating French when I was in public school. I began practicing my Gallic shrug. I openly envied the children of diplomats and rued my own backwater upbringing.

Since I had read Derrida and Heidegger, I had no choice but to read Descartes. How could I carry on the critique of metaphysics unless I immersed myself in the Western Tradition? Know thy enemy, no? This led me to ponder the famous Frenchman’s infamous cogito, “I think, therefore I am,” Descartes’ attempt, given the collapse in confidence wrought by the new science of the 17th century, to place knowledge on a new, secure, subjective foundation.

Just who did the guy think he was fooling? Really?

To show just how hopeless Descartes was, I began returning to Nietzsche again and again and again in all of my undergraduate papers. I was all, like, Beyond Good and Evil, like. I continually paraphrased Nietzsche’s famous reformulation of the Cartesian cogito. I would always write, using a double hanging indent for dramatic purposes,

           It thinks, therefore I am.

Of course, the “it” simply had to be italicized, if only to underscore the abject impersonality at the root of subjectivity. Even though we like to think our thoughts come from our prior thoughts, which is to say, from ourselves, the merest reflection shows this cannot be the case, that each thought is dropped into consciousness from the outside, and that hence the ‘I’ is born after the fact.

Then, the following academic year, I came across Sartre’s reformulation of the cogito in Being and Nothingness. Combining the two, the Sartrean and Nietzschean, I arrived at a reformulation that I thought was distinctly my own,

           It thinks, therefore I was.

Here, I would tell people, we see how well and truly fucked up things are. Not only do our origins congenitally outrun us, we continually outrun ourselves as well! We’re an echo that knows itself only as an echo of this echo. Or, as I used to joke with my in-the-know friends, we’re “Umberto squared.” In my papers, I started using this final formulation to describe Derrida’s self-erasing notion of differance as applied to subjectivity, the way all reflection is in fact a species of deflection.

My professors lapped it up. On my papers I would find comments like, “Excellent!” or, “Great stuff!” or more exciting still, “What would Freud make of this?” scrawled in red pen.

The formula became my mantra, my originary repetition, even though it took quite some time to realize just how originary it was. For some reason, it never dawned on me that I had come full circle, that at 28, I had yet to take a single step beyond 14, intellectually speaking. I had literally kept typing the same self-immolating thought through fourteen years and two life transforming revelations.

It. Me. Nobody. Over and over again.

It would be a poker game, of all absurdities, that would bring this absurdity to light for me. At this particular game, which took place before the hysterical popularity of Texas Hold’em, I met a philosophy PhD student from Mississippi who was also an avowed nihilist. Given my own heathen, positivistic past, I took it upon myself to convert the poor fool. He was just an adolescent, after all–time to set aside childish thoughts! So I launched into an account of my own sorry history and how I had been saved by Heidegger and the ontological difference.

The nihilist listened to me carefully, interrupting only to clarify this or that point with astute questions. Then, after I had more or less burned through my batteries, the nihilist asked, “You agree that science clearly implies nihilism, right?”

“Of course.”

“Well… it’s kind of inconsistent, isn’t it?”

“What’s inconsistent?”

A thoughtful bulge of the bottom lip. “Well, that despite the fact that philosophy hasn’t resolved any matter with any reliability ever, and, despite the fact that science is the most powerful, reliable, theoretical claim-making institution in human history, you’re still willing to suspend your commitment to scientific implications on the basis of prior commitments to philosophical claims about science and this… ontological difference.”

Tortured syntax aside, I understood exactly what the nihilist meant: Why believe Heidegger when you could argue almost anything in philosophy? I had read enough by now to know this was the only sure thing in the humanities. It was an uncomfortable fact: outside the natural sciences there was no way short of exhaustion or conspiracy to end the regress of interpretation.

Nevertheless, I found myself resenting that bottom lip.

“I don’t follow.”

“Well,” the nihilist said, making one of those pained correct-me-if-I’m-wrong faces, “isn’t that kind of like using Ted Bundy’s testimony to convict Mother Theresa?”

“Um,” I replied, my voice pinched in please-no resignation… “I guess?”

So, back to the “Bullshit” it was.

I should have known.

After all, I had only spent 14 years repeating myself.

II – “The New Mistrust”

When I was 14, I had understood the It that comes before in naive causal terms–probably because of the pervasiveness of the scientific worldview. I lost my faith in intentionality. Heidegger changed my life because he convinced me that this was a loaded way of looking at things, that it begged apparently indefensible assumptions, and most importantly, a certain destructive attitude toward being and life as it is lived. I regained my faith in intentionality. Even though I qualified that faith with Nietzsche, Sartre, and Derrida–particularly when it came to agency–my renewed faith in intentionality remained unquestioned.

What I had done, I now realize, is reconsider the same problematic in intentional terms. My adolescent horror, that I wasn’t originary, had become my adult preoccupation. The problem of IT had become safer, somehow, less conceptually corrosive. I became quite fond of my fragmented self hanging out in the grad pub with all my fragmented friends. My old causal way of looking at things, it seemed to me, was juvenile, the presumption of someone bound in the ontic blinders of the scientific worldview. I even told the story I told above, the way I imagine born-again Christians are prone to tell stories of their youthful cognitive folly to fellow believers.

“Science… Can you imagine?”

Then I had to go play poker with a fucking nihilist.

Let’s make up a word: determinativity.

Determinativity is simply the degree of determination, the hot potato of efficacy.

So let’s say that I have the determinativity at this moment, that I’m dictating the movements of your soul in the course of verbalizing these marks on the page. Or let’s say the marks themselves have the determinativity, they write you, and I simply vanish into them, a kind of Foucauldian sham meant to impose order on an unruly world o’ texts. Or let’s say that you have the determinativity, that you take the words, make of them what you will. Or let’s say your unconscious has the determinativity, that you’re simply the aporetic interstice between the text and some psychodynamic subtext. Or let’s say history has the determinativity, or that culture or society or God or language has the determinativity.

Can we say that all of these things possess determinativity? None of them?

Sure. We can mix and match, recast this and tweak that, and come up with entirely new theoretical outlooks if we want. Spin the academic bottle.

The bottom line is that we really don’t know what the fuck we’re talking about. For better or worse, the only kind of determinativity we can follow with enough methodological and institutional rigor to actually resolve (as opposed to exhaust) interpretative disputes is causality–whatever the hell that is. As Richard Dawkins is so fond of pointing out in interviews, scientists–unlike us–can actually agree on what will change their minds.

And this, as the past five centuries have amply demonstrated, is a powerful thing.

Grasping a problem or a theory or a concept is never enough. Like the blind gurus who mistake the elephant for a snake, a tree, and a rope because they each only feel its trunk, leg, or tail, you have to know just where you’re grasping from. By some astounding coincidence, I had relegated science using precisely the same self-aggrandizing theoretical tactic used by all my friends. (Amazing, isn’t it? the way groups of thinkers magically find themselves convinced by the same things–how, as Nietzsche puts it, “[u]nder an invisible spell they always trace once more the identical orbit” [Beyond Good and Evil, 20, 50]). The second flabbergasting coincidence was the way these commitments had the happy consequence of rendering my discursive domain immune to scientific critique, even as it exposed science to my theoretical scrutiny. I mean, who did those scientists think they were? waving around their big fat language game like that? They were so obviously blind to the conditions of their discourse…

In other words, I had used my prior commitment to What Science Was–a social construct, a language game, an expression of the metaphysics of presence–to condition my commitment to What Science Does, which is explicate natural phenomena in causal terms. My domain was nothing less than the human soul, and the last I checked, science was the product of human souls: if anything, science was a subset of my domain, not vice versa.

So I once believed, more or less.

Two kinds of ignorance, it now seems to me, are required to make this family of assumptions convincing (beyond the social psychological dimensions of belief acquisition, such as the simple need for belonging and prestige). First, you need to be unaware of what we now know about human cognition and its apparent limitations. Second, you need to know next to nothing about the physiology of the human soul.

III – “The New Psychologist”

The first ignorance, I have come to think, is nothing short of astounding, and demonstrates the way the humanities, which are so quick to posture themselves as critical authorities, are simply of a piece with our sham culture of pseudo-empowerment and fatuous self-affirmation. For decades now, cognitive psychologists have been dismantling our flattering cognitive assumptions, compiling an encyclopedic inventory of the biases, fallacies, and outright illusions that afflict human cognition. Please bear with me as I read through the following (partial!) list: actor-observer bias (fundamental attribution error), ambiguity effect, anchoring effect, asymmetric insight illusion, attentional bias, availability heuristic, availability cascade, the bandwagon effect, Barnum effect, base-rate neglect, belief bias, black swan effect, clustering illusion, choice bias, confirmation bias, congruence bias, consensus fallacy, contrast effect, control bias, cryptomnesia, deprivation bias, distinction bias, Dunning-Kruger effect, egocentric bias, expectation bias, exception bias, exposure effect, false memory, focusing effect, framing effect, future discounting, gambler’s fallacy, hindsight bias, halo effect, impact bias, ingroup bias, just-world illusion, moral credential effect, moral luck bias, negativity bias, omission bias, outcome bias, outgroup homogeneity bias, planning fallacy, post-hoc rationalization, post hoc ergo propter hoc, projection bias, observer-expectancy effect, optimism bias, ostrich effect, positive outcome bias, positivity effect, pareidolia, pessimism bias, primacy effect, recency effect, reaction bias, regression neglect, restraint bias, rosy retrospection effect, selective perception, self-serving bias, Semmelweis reflex, social comparison bias, stereotyping, suggestibility, sunk-cost bias, superiority illusion, status-quo bias, trait ascription bias, transparency illusion, unit bias, ultimate attribution error, wishful thinking, zero-risk bias.

Tedious, I know, but some cognitive shortcomings, such as the Semmelweis reflex (where paradigm-incompatible evidence is rejected out of hand), or exception biases (where individuals think themselves immune to the failings of others) need to be bludgeoned into submission.

This inventory of cognitive foibles has led many psychologists, perhaps not surprisingly, to rethink the function of reason and argumentation. The traditional view of reason as a cognitive instrument, a tool we use to produce new knowledge out of old, has been all but overturned. The story has to be far more complicated than mere cognition: even though evolution has devised many, many imperfect tools, rarely do the imperfections line up so neatly. All too often, reason seems to fail precisely where and when we need it to.

Earlier this year, the journal Behavioral and Brain Sciences devoted an entire issue to Dan Sperber’s Argumentative Theory of Reason (sparking enough interest to warrant an article in The New York Times). According to Sperber, the primary function of reason is to facilitate argumentation instead of cognition, to win the game of giving and asking for reasons rather than get things right. Far from a “flawed general mechanism,” he argues that human reasoning “is a remarkably efficient specialized device” when viewed through the lens of social and cognitive interaction [72]. And though this might prove to be an epistemic disaster at the individual level, he contends the picture is not “wholly disheartening” [72] when viewed in a larger social context. As bad as we are when it comes to producing arguments, the research suggests that we are not quite so bad when it comes to evaluating them, “provided [we] have no particular axe to grind” [72]–an important proviso to say the least.

My own reservations with ATR stem from its failure to discriminate between various contexts of reasoning, or to consider the role played by ambiguity. In either case, my guess is that the balance between the epistemic and the egocentric dimensions of reasoning varies according to social and semantic circumstances. Human reason evolved in social conditions far different than our own, at a time when almost all our relationships were at once relationships of material interdependency–when our lives literally depended on face-to-face consensus and cooperation. Given this, it stands to reason that the epistemic/egocentric emphasis of reasoning will vary depending on the urgency of the subject matter, whether we are arguing about the constitution of the moon, or the direction of a roaming pack of predators. I also think [based on the work of David Dunning – Self-Insight] that the epistemic/egocentric emphasis is indexed to the relative clarity and ambiguity of the subject matter, that reasoning is more knowledge-prone when the matter at issue is proximal and practical, and more display-prone when it is distal and abstract. Anyone who has rescued any kind of relationship knows something of the way circumstances can induce us to ‘check our ego at the door.’

Either way, we now know enough about reasoning to assert that we are, as the ancient Skeptics argued so long ago, theoretical incompetents. (And if you think about it, this really isn’t all that surprising, insofar as science counts as an accomplishment, something humanity had to discover, nurture, and defend. Human theoretical incompetence actually explains why we required the methodological and institutional apparatuses of science to so miraculously transform the world).

If the first ignorance pertains to the how of theory in the humanities, the thinking, the second concerns the what–the It that thinks. As much as cognitive psychology has problematized reasoning, cognitive neuroscience has all but demolished the so-called ‘manifest image,’ consciousness as it appears to introspection and intuition.

Consider the ‘feeling of certainty’ that motivates not just some, but all of your beliefs. Short of this ‘sense of rightness,’ it’s hard to imagine how we could commit to any claim or paradigm. And yet, as the neurologist Robert Burton has recently argued, “[c]ertainty and similar states of ‘knowing what we know’ arise out of involuntary brain mechanisms that, like love or anger, function independently of reason”–which is to say, independently of any rational warrant [On Being Certain].

And the list of ‘debunked experiences’ goes on. You have Daniel Wegner arguing that the ‘feeling of willing,’ far from initiating action, is something that merely accompanies behaviour [The Illusion of Conscious Will]; Daniel Dennett contending that qualia–the much vaunted ‘what-it-is-like’ of experience–do not exist; Thomas Metzinger arguing much the same about agency and selfhood [Being No One]; and Paul and Patricia Churchland arguing for the wholesale replacement of ‘folk psychology’–talk of desires, beliefs, affects, and so on–with a more neuroscientifically adequate vocabulary.

None of these theories and arguments command anything approaching consensus in the cognitive neuroscience research community, but each of them represents an attempt to make sense of a steadily growing body of radically counterintuitive data. Though we cannot yet say what a given experience ‘is,’ we can say that the final answer, like so many answers provided by science, will lie far outside the pale of our intuitive preconceptions–perhaps incomprehensibly so.

In my own view, this is precisely what we should expect. We now know that only a fraction of the estimated 38,000 trillion operations per second processed by the brain finds its way to consciousness. This means that experience, all experience, is profoundly privative, a simplistic caricature of otherwise breathtakingly complex processes. We want to think that this loss of information is synoptic, that despite its relative paucity, experience nevertheless captures some functional kernel of its correlated neurological functions. But there are telling structural and developmental reasons to think otherwise. The metabolic costs associated with neural processing and the sheer evolutionary youth of human consciousness suggest that experience should be severely blinkered: synoptic or ‘low resolution’ in some respects, perhaps, but thoroughly fictitious otherwise. (The fact that these fictions appear to play efficacious roles should come as no surprise, since they need only be systematically related to actually efficacious brain functions. Since they constitute the sum of what we experience, they will also constitute the sum of ‘understanding,’ albeit one which is itself incomprehensible.)

This second ignorance, you might object, is anything but problematic, since the human soul has always been an open question. But this is precisely what I’m saying: the human soul is no longer an open question.

It has become an empirical one.

One of the great paradoxes of human cognition is the way ignorance, far more than knowledge, serves as the foundation of certainty. For years I had tackled the question of the human soul using the analytic and speculative tools belonging to the humanities–and I had done so with absolute confidence. Sure, I knew that reason was ‘flawed’ and that the soul was ‘problematic.’ Sure, I felt the irony of clinging to my interpretations given the ‘tropical luxuriance’ of the alternatives. Sure, I realized that there was no definitive way to arbitrate between alternatives. But I had embraced a theoretical outlook that seemed to make a virtue out of these apparent liabilities. Even more importantly, I had secured a privileged social identity. I… was the radical one. I… was the one asking the difficult questions. I… was the truly critical one. I understood the way my subject position had been culturally and historically conditioned. I realized that all theory was laden, warped by the weight of innumerable implicit assumptions. And because of this, I was ‘enlightened’ in a way that scientific researchers (outside of France, perhaps) were not. Where scientists (and, well, every other human on the planet) were constrained by their ignorance of their assumptions, unwitting agents of a benighted and pernicious conceptual status quo, I understood the oh-so-onerous burden of culture and history–and so thought I could work around them, with a little luck.

In other words, I believed what pretty much every human being (not suffering clinical depression) believes: that I had won the Magical Belief Lottery, that my totalizing post-structuralist/contextualist/constructivist theoretical outlook really was ‘just the way things were.’ Why bother interrogating it, when all the critical heavy lifting had already been done? Besides, I wanted to think I was a nonconformist, contrarian, iconoclastic radical. I needed an outlook to match my culture-jamming T-shirts.

But the ugly, perhaps monstrous, fact remains. If the cognitive psychologists are right and reasoning–outside of a narrow family of contexts–is profoundly flawed, far more egocentric than epistemic, then the humanities are stranded with a box of broken tools. If the eliminativists and revisionists are right and consciousness is itself a kind of cognitive illusion then the very subject matter of the humanities awaits scientifically legitimized redefinition.

If these two ignorances were all that kept us safe, then we are about to become extinct.

IV – “Inventing the New”

Which brings me back to the remarkable exception that is Nietzsche.

Neither of the ignorances described above, I think, would surprise him in the least. The notion that reasoning is motivated is a pervasive theme throughout his work. Like Sperber, he believes reason’s epistemic presumption is largely a ploy, a way to gain advantage. Where the philosophical tradition assumed that intuition, observation, and logical necessity primarily motivated reason–he proposes breakfast, weather, cleanliness, or abode. [Ecce Homo, “Why I am So Clever.”]

Nietzsche commonly employs what might be called a transcategorical interpretative strategy in his work. He likes to peer past the obvious and the conceptually contiguous to things unlikely–even inhuman–and so regularly spins outrage and revelation out of what other philosophers would call ‘category mistakes.’ To this extent, contemporary cognitive neuroscience would not shock him in the least. I’m sure he would be delighted to see so many of his counterintuitive hunches reborn in empirical guise. As he famously writes in The Antichrist: “our knowledge of man today is real knowledge precisely to the extent that it is knowledge of him as a machine” [14].

For the longest time I read Nietzsche as a proto-post-structuralist, as the thinker of the so-called ‘performative turn’ that would come to dominate so much 20th Century philosophy. Now, I appreciate that he is so very much more, that he was actually thinking past post-structuralism a century before it. And I realize that his continual references to physiology–and other empirical wheels that never seemed to turn–are in fact every bit as central as their frequency suggests.

Consider the following quote, one which I think can only be truly appreciated now:

But the road to new forms and refinements of the soul-hypothesis stands open: and such conceptions as ‘mortal soul’ and ‘soul as multiplicity of the subject’ and ‘soul structure of the drives and emotions’ want henceforth to possess civic rights in science. To be sure, when the new psychologist puts an end to the superstition which has hitherto flourished around the soul-idea with almost tropical luxuriance, he has as it were thrust himself out into a new wilderness and a new mistrust–it may be that the older psychologists had a merrier and more comfortable time of it–: ultimately, however, he sees that, by precisely that act, he has also condemned himself to inventing the new–and, who knows? perhaps to finding it.– [Beyond Good and Evil, 12, 43-4]

The anachronistic timeliness of this statement is nothing short of remarkable. Nietzsche was as much futurist as intellectual historian, an annalist of endangered and collapsing conceptual ecosystems. He understood that the Enlightenment would not stop exploding our ingrown vanities, that sooner or later the anthropos would fall with the anthropomorphic.

When I first read this passage as a young man I thought that I was one of the new psychologists, that I was the one ‘condemned’ to be so cool. Sure, the terms ‘science’ and ‘psychologist’ made me uncomfortable, but certainly Nietzsche was using these terms in their broadest, Latin and Ancient Greek, senses–scientia and psukhe logos. He couldn’t mean science science, or psychologist psychologist, could he?

Noooo…

Yes?

Of course he did. I thought, the way so many others thought, that Nietzsche had glimpsed post-modernity, that his ‘deconstruction of the subject’ was far more post-structural than Humean. Now I’m convinced that this is the moment he had glimpsed, however obscurely: the moment when our methods crumble, and our discursive domain slips away–when science asserts its problematic cognitive rights.

My strategy here has been twofold: First, to offer this thesis within a biographical context, to demonstrate the powerful way path-dependency shapes our theoretical commitments. So much of what we believe is simply a matter of who or what gets to us first. And second, to introduce you to the ‘new psychologists,’ the ones that are ‘outing’ the It that thinks–and unfortunately (though not surprisingly) showing that monsters hide in the closet after all.

The goal of this strategy has been to show you the cognitive fragility of your ecosystem, and thus your inevitable demise as an intellectual species.

So here’s the cartoon I want to offer you: The sciences have spent centuries rooting through the world, replacing anecdote and apocrypha with quantitative observation, and intentional, anthropomorphic accounts of natural phenomena with functional explanations. During this time, however, one stubborn corner of the natural world remained relatively immune to this process simply because of the sheer complexity of its functions: the human brain. As a result, the discursive traditions that took the soul as their domain were spared the revolution that swept away the old, anthropomorphic discourses of the physical world. Though certainly conditioned by the sciences, the humanities have flourished within what might be called an ‘intentional game preserve.’

Motivated reasoning means that we can make endless conceptual hash of ambiguity. So long as the causal precursors of thought remain shrouded, anything goes theoretically speaking. Instead of saying, “Here be Dragons,” we say, “Here be the Will to Power, the Id, differance, virtualities, normative contexts, the social a priori, and so on.”

We make the very mistake that Spinoza accuses naive Christians of making in his letters: we conceive of the condition in terms belonging to the conditioned. So, where naive Christians anthropomorphize God, theorists in the humanities anthropomorphize (albeit in a conceptually decomposed form) the ‘darkness that comes before’ thought: we foist intentional interpretations on the conditions of the soul. Where the ancient Greeks said “Athena struck down Hector by Achilles’ hand,” we say, “The social a priori struck down Hector by Achilles’ hand,” or “The unconscious struck down Hector by Achilles’ hand.”

Thanks to the obduracy of the brain, scholars in the humanities could safely expound on the ‘real’ It without any knowledge of natural science–let alone respect for its practices. As the one family of interrelated domains where intentional speculation retained something of its ancient cognitive gravitas, the humanities provided a discursive space where specialists could still intentionally theorize without fear of embarrassing themselves. So long as we discharged our discursive obligations with domain-specific erudition and intelligence we could hold our heads up high with in-group pride.

The humanities have remained, in a peculiar way, ‘prescientific.’ You might even say, ancient.

How times have changed. The walls of the brain have been overrun. The intentional bastions of the soul are falling. Taken together, the sciences of the mind and brain are developing a picture that in many cases out-and-out contradicts the folk-psychological intuitions that underwrite so much speculation within the humanities. Unless one believes the humanities magically constitute a ‘special case,’ there is no reason to think that their voluminous armchair speculations will have a place in the ‘post-scientific’ humanities to come.