Three Pound Brain

No bells, just whistling in the dark…

Tag: Continental philosophy

The Truth Behind the Myth of Correlationism

by rsbakker

A wrong turn lies hidden in the human cultural code, an error that has scuttled our every attempt to understand consciousness and cognition. So much philosophical activity reeks of dead ends: we try and we try, and yet we find ourselves mired in the same ancient patterns of disputation. The majority of thinkers believe the problem is local, that they need only tinker with the tools they’ve inherited. They soldier on, arguing that this or that innovative modification will overcome our confusion. Some, however, believe the problem lies deeper. I’m one of those thinkers, as is Meillassoux. I think the solution lies in speculation joined at the hip to modern science, in something I call ‘heuristic neglect.’ For me, the wrong turn lies in the application of intentional cognition to solve the theoretical problem of intentional cognition. Meillassoux thinks it lies in what he calls ‘correlationism.’

Since I’ve been accused of ‘correlationism’ on a couple of occasions now, I thought it worthwhile tackling the issue in more detail. This will not be an institutional critique à la Golumbia, who manages to identify endless problems with Meillassoux’s presentation, while somehow entirely missing his skeptical point: once cognition becomes artifactual, it becomes very… very difficult to understand. Cognitive science is itself fractured over Meillassoux’s issue.

What follows will be a constructive critique, an attempt to explain the actual problem underwriting what Meillassoux calls ‘correlationism,’ and why his attempt to escape that problem simply collapses into more interminable philosophy. The problem that artifactuality poses to the understanding of cognition is very real, and it also happens to fall into the wheelhouse of Heuristic Neglect Theory (HNT). For those souls growing disenchanted with Speculative Realism, but unwilling to fall back into the traditional bosom, I hope to show that HNT not only offers the radical break with tradition that Meillassoux promises, it remains inextricably bound to the details of this, the most remarkable age.

What is correlationism? The experts explain:

Correlation affirms the indissoluble primacy of the relation between thought and its correlate over the metaphysical hypostatization or representational reification of either term of the relation. Correlationism is subtle: it never denies that our thoughts or utterances aim at or intend mind-independent or language-independent realities; it merely stipulates that this apparently independent dimension remains internally related to thought and language. Thus contemporary correlationism dismisses the problematic of scepticism, and of epistemology more generally, as an antiquated Cartesian hang-up: there is supposedly no problem about how we are able to adequately represent reality, since we are ‘always already’ outside ourselves and immersed in or engaging with the world (and indeed, this particular platitude is constantly touted as the great Heideggerean-Wittgensteinian insight). Note that correlationism need not privilege “thinking” or “consciousness” as the key relation—it can just as easily replace it with “being-in-the-world,” “perception,” “sensibility,” “intuition,” “affect,” or even “flesh.” Ray Brassier, Nihil Unbound, 51

By ‘correlation’ we mean the idea according to which we only ever have access to the correlation between thinking and being, and never to either term considered apart from the other. We will henceforth call correlationism any current of thought which maintains the unsurpassable character of the correlation so defined. Consequently, it becomes possible to say that every philosophy which disavows naive realism has become a variant of correlationism. Quentin Meillassoux, After Finitude, 5

Correlationism rests on an argument as simple as it is powerful, and which can be formulated in the following way: No X without givenness of X, and no theory about X without a positing of X. If you speak about something, you speak about something that is given to you, and posited by you. Consequently, the sentence: ‘X is’, means: ‘X is the correlate of thinking’ in a Cartesian sense. That is: X is the correlate of an affection, or a perception, or a conception, or of any subjective act. To be is to be a correlate, a term of a correlation . . . That is why it is impossible to conceive an absolute X, i.e., an X which would be essentially separate from a subject. We can’t know what the reality of the object in itself is because we can’t distinguish between properties which are supposed to belong to the object and properties belonging to the subjective access to the object. Quentin Meillassoux, “Time Without Becoming”

The claim of correlationism is the corollary of the slogan that ‘nothing is given’ to understanding: everything is mediated. Once knowing becomes an activity, then the objects insofar as they are known become artifacts in some manner: reception cannot be definitively sorted from projection and as a result no knowledge can be said to be absolute. We find ourselves trapped in the ‘correlationist circle,’ trapped in artifactual galleries, never able to explain the human-independent reality we damn well know exists. Since all cognition is mediated, all cognition is conditional somehow, even our attempts (or perhaps, especially our attempts) to account for those conditions. Any theory unable to decisively explain objectivity is a theory that cannot explain cognition. Ergo, correlationism names a failed (cognitivist) philosophical endeavour.

The traction of the label is a testament to the power of labels in philosophy, I think, because, as Meillassoux himself acknowledges, there’s nothing really novel about the above sketch. Explaining the ‘cognitive difference’ was my dissertation project back in the ’90s, after all, and as smitten as I was with my bullshit solution back then, I didn’t think the problem itself was anything but ancient. Given this whole website is dedicated to exploring and explaining consciousness and cognition, you could say it remains my project to this very day! One of the things I find so frustrating about the ‘critique of correlationism’ is that the real problem—the ongoing crisis—is the problem of meaning. If correlationism fails because correlationism cannot explain cognition, then the problem of correlationism is an expression of a larger problem, the problem of cognition—or in other words, the problem of intentionality.

Why is the problem of meaning an ongoing crisis? In the past six fiscal years, from 2012 to 2017, the National Institutes of Health will have spent more than 113 billion dollars funding research bent on solving some corner of the human soul. [1] And this is just one public institution in one nation funding health-related research. If you include the cognitive sciences more generally—research into everything from consumer behaviour to AI—you could say that solving the human soul commands more resources than any other domain in history. The reason all this money is being poured into the sciences rather than philosophy departments is that the former possesses real world consequences: diseases cured, soap sold, politicians elected. As someone who tries to keep up with developments in Continental philosophy, I already find the disconnect stupendous, how whole populations of thinkers continue discoursing as if nothing significant has changed, bitching about traditional cutlery in the shadow of the cognitive scientific tsunami.

Part of the popularity of the critique of correlationism derives from anxieties regarding the growing overlap of the sciences of the human and the humanities. All thinkers self-consciously engaged in the critique of correlationism reference scientific knowledge as a means of discrediting correlationist thought, but as far as I can tell, the project has done very little to bring the science, what we’re actually learning about consciousness and cognition, to the fore of philosophical debates. Even worse, the notion of mental and/or neural mediation is actually central to cognitive science. What some neuroscientists term ‘internal models,’ which monopolize our access to ourselves and the world, is nothing if not a theoretical correlation of environments and cognition, trapping us in models of models. The very science that Meillassoux thinks argues against correlationism in one context, explicitly turns on it in another. The mediation of knowledge is the domain of cognitive science—full stop. A naturalistic understanding of cognition is a biological understanding is an artifactual understanding: this is why the upshot of cognitive science is so often skeptical, prone to further diminish our traditional (if not instinctive) hankering for unconditioned knowledge—to reveal it as an ancestral conceit.

A kind of arche-fossil.

If an artifactual approach to cognition is doomed to misconstrue cognition, then cognitive science is a doomed enterprise. Despite the vast stores of knowledge accrued, the wondrous and fearsome social instrumentalities gained, knowledge itself will remain inexplicable. What we find lurking in the bones of Meillassoux’s critique, in other words, is precisely the same commitment to intentional exceptionality we find in all traditional philosophy, the belief that the subject matter of traditional philosophical disputation lies beyond the pale of scientific explanation… that despite the cognitive scientific tsunami, traditional intentional speculation lies secure in its ontological bunkers.

Only more philosophy, Meillassoux thinks, can overcome the ‘scandal of philosophy.’ But how is mere opinion supposed to provide bona fide knowledge of knowledge? Speculation on mathematics does nothing to ameliorate this absurdity: even though paradigmatic of objectivity, mathematics remains as inscrutable as knowledge itself. Perhaps there is some sense to be found in the notion of interrogating/theorizing objects in a bid to understand objectivity (cognition), but given what we now know regarding our cognitive shortcomings in low-information domains, we can be assured that ‘object-oriented’ approaches will bog down in disputation.

I just don’t know how to make the ‘critique of correlationism’ workable, short of ignoring the very science it takes as its motivation, or, just as bad, subordinating empirical discoveries to some school of ‘fundamental ontological’ speculation. If you’re willing to take such a leap of theoretical faith, you can be assured that no one in the vicinity of cognitive science will take it with you—and that you will make no difference in the mad revolution presently crashing upon us.

We know that knowledge is somehow an artifact of neural function—full stop. Meillassoux is quite right to say this renders the objectivity of knowledge very difficult to understand. But why think the problem lies in presuming the artifactual nature of cognition?—especially now that science has begun reverse-engineering that nature in earnest! What if our presumption of artifactuality weren’t so much the problem, as the characterization? What if the problem isn’t that cognitive science is artifactual so much as how it is?

After all, we’ve learned a tremendous amount about this how in the past decades: the idea of dismissing all this detail on the basis of a priori guesswork seems more than a little suspect. The track record would suggest extreme caution. As the boggling scale of the cognitive scientific project should make clear, everything turns on the biological details of cognition. We now know, for instance, that the brain employs legions of special purpose devices to navigate its environments. We know that cognition is thoroughly heuristic, that it turns on cues, bits of available information statistically correlated to systems requiring solution.

Almost all systems in our environment shed information enabling the prediction of subsequent behaviours absent the mechanical particulars of that information. The human brain is exquisitely tuned to identify and exploit the correlation of available information and subsequent behaviours. The artifactuality of biology is an evolutionary one, and as such geared to the thrifty solution of high impact problems. To say that cognition (animal or human) is heuristic is to say it’s organized according to the kinds of problems our ancestors needed to solve, and not according to those belonging to academics. Human cognition consists of artifactualities, subsystems dedicated to certain kinds of problem ecologies. Moreover, it consists of artifactualities selected to answer questions quite different from those posed by philosophers.
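The claim that cognition ‘turns on cues’, bits of available information statistically correlated with the behaviour of the systems requiring solution, can be made concrete with a toy simulation. The sketch below is entirely my own illustration, not anything from the post: the names (`behaviour`, `cue`, `heuristic_predict`) and all the numbers are invented for the example. A predictor exploits a cheap cue correlated with a system’s behaviour while remaining completely blind to the mechanism generating that behaviour:

```python
import random

random.seed(0)  # deterministic toy run

# Hidden mechanism (the high-dimensional facts the heuristic never sees):
# an organism flees when a predator closes within ~10 m, plus sensory noise.
def behaviour(distance):
    return distance + random.gauss(0, 1) < 10

# Cheap cue: a rustle audible within ~12 m. It is merely statistically
# correlated with fleeing; it carries none of the mechanical particulars.
def cue(distance):
    return distance < 12

# Heuristic cognition: predict behaviour from the cue alone,
# neglecting the underlying mechanism entirely.
def heuristic_predict(rustle):
    return rustle

trials = 1000
hits = 0
for _ in range(trials):
    d = random.uniform(0, 30)  # predator distance, unknown to the predictor
    if heuristic_predict(cue(d)) == behaviour(d):
        hits += 1

accuracy = hits / trials
print(f"cue-based accuracy: {accuracy:.2f}")
```

Despite neglecting the mechanics entirely, the cue-based predictor is right roughly nine times out of ten under these made-up parameters, which is the point: heuristics buy thrifty accuracy within their adapted problem ecology, and fail silently outside it.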

These two facts drastically alter the landscape of the apparent problem posed by ‘correlationism.’ We have ample theoretical and empirical reasons to believe that mechanistic cognition and intentional cognition comprise two quite different cognitive regimes, the one dedicated to explanation via high-dimensional (physical) sourcing, the other dedicated to explanation absent that sourcing. As an intentional phenomenon, objectivity clearly belongs to the latter. Mechanistic cognition, meanwhile, is artifactual. What if it’s the case that ‘objectivity’ is the turn of a screw in a cognitive system selected to solve in the absence of artifactual information? Since intentional cognition turns on specific cues to leverage solutions, and since those cues appear sufficient (to be the only game in town where that behaviour is concerned), the high-dimensional sourcing of that same behaviour generates a philosophical crash space—and a storied one at that! What seems sourceless and self-evident becomes patently impossible.

Short of magic, cognitive systems possess the environmental relationships they do thanks to super-complicated histories of natural and neural selection—evolution and learning. Let’s call this their orientation, understood as the nonintentional (‘zombie’) correlate of ‘perspective.’ The human brain is possibly the most complex thing we know of in the universe (a fact that should render suspect any theory of the human that neglects this complexity). Our cognitive systems, in other words, possess physically intractable orientations. How intractable? Enough that billions of dollars in research have merely scratched the surface.

Any capacity to cognize this relationship will perforce be radically heuristic, which is to say, provide a means to solve some critical range of problems—a problem ecology—absent natural historical information. The orientation heuristically cognized, of course, is the full-dimensional relationship we actually possess, only hacked in ways that generate solutions (repetitions of behaviour) while neglecting the physical details of that relationship.

Most significantly, orientation neglects the dimension of mediation: thought and perception (whatever they amount to) are thoroughly blind to their immediate sources. This cognitive blindness to the activity of cognition, or medial neglect, amounts to a gross insensitivity to our physical continuity with our environments, the fact that we break no thermodynamic laws. Our orientation, in other words, is characterized by a profound, structural insensitivity to its own constitution—its biological artifactuality, among other things. This auto-insensitivity, not surprisingly, includes insensitivity to the fact of this insensitivity, and thus the default presumption of sufficiency. Specialized sensitivities are required to flag insufficiencies, after all, and like all biological devices, they do not come for free. Not only are we blind to our position within the superordinate systems comprising nature, we’re blind to our blindness, and so, unable to distinguish table-scraps from a banquet, we are duped into affirming inexplicable spontaneities.

‘Truth’ belongs to our machinery for communicating (among other things) the sufficiency of iterable orientations within superordinate systems given medial neglect. You could say it’s a way to advertise clockwork positioning (functional sufficiency) absent any inkling of the clock. ‘Objectivity,’ the term denoting the supposed general property of being true apart from individual perspectives, is a deliberative contrivance derived from practical applications of ‘truth’—the product of ‘philosophical reflection.’ The problem with objectivity as a phenomenon (as opposed to ‘objectivity’ as a component of some larger cognitive articulation) is that the sufficiency of iterable orientations within superordinate systems is always a contingent affair. Whether ‘truth’ occasions sufficiency is always an open question, since the system provides, at best, a rough and ready way to communicate and/or troubleshoot orientation. Unpredictable events regularly make liars of us all. The notion of facts ‘being true’ absent the mediation of human cognition, ‘objectivity,’ also provides a rough and ready way to communicate and/or troubleshoot orientation in certain circumstances. We regularly predict felicitous orientations without the least sensitivity to their artifactual nature, absent any inkling how their pins lie in intractable high-dimensional coincidences between buzzing brains. This insensitivity generates the illusion of absolute orientation, a position outside natural regularities—a ‘view from nowhere.’ We are a worm in the gut of nature convinced we possess disembodied eyes. And so long as the consequences of our orientations remain felicitous, our conceit need not be tested. Our orientations might as well ‘stand nowhere’ absent cognition of their limits.

Thus can ‘truth’ and ‘objectivity’ be naturalized and their peculiarities explained.

The primary cognitive moral here is that lacking information has positive cognitive consequences, especially when it comes to deliberative metacognition, our attempts to understand our nature via philosophical reflection alone. Correlationism evidences this in a number of ways.

As soon as the problem of cognition is characterized as the problem of thought and being, it becomes insoluble. Intentional cognition is heuristic: it neglects the nature of the systems involved, exploiting cues correlated to the systems requiring solution instead. The application of intentional cognition to theoretical explanation, therefore, amounts to the attempt to solve natures using a system adapted to neglect natures. A great deal of traditional philosophy is dedicated to the theoretical understanding of cognition via intentional idioms—via applications of intentional cognition. Thus the morass of disputation. We presume that specialized problem-solving systems possess general application. Lacking the capacity to cognize our inability to cognize the theoretical nature of cognition, we presume sufficiency. Orientation, the relation between neural systems and their proximal and distal environments—between two systems of objects—becomes perspective, the relation between subjects (or systems of subjects) and systems of objects (environments). If one conflates the manifest artifactual nature of orientation for the artifactual nature of perspective (subjectivity), then objectivity itself becomes a subjective artifact, and therefore nothing objective at all. Since orientation characterizes our every attempt to solve for cognition, conflating it with perspective renders perspective inescapable, and objectivity all but inexplicable. Thus the crash space of traditional epistemology.

Now I know from hard experience that the typical response to the picture sketched above is to simply insist on the conflation of orientation and perspective, to assert that my position, despite its explanatory power, simply amounts to more of the same, another perspectival Klein Bottle distinctive only for its egregious ‘scientism.’ Only my intrinsically intentional perspective, I am told, allows me to claim that such perspectives are metacognitive artifacts, a consequence of medial neglect. But asserting perspective before orientation on the basis of metacognitive intuitions alone not only begs the question, it also beggars explanation, delivering the project of cognizing cognition to never-ending disputation—an inability to even formulate explananda, let alone explain anything. This is why I like asking intentionalists how many centuries of theoretical standstill we should expect before that oft-advertised and never-delivered breakthrough finally arrives. The sin Meillassoux attributes to correlationism, the inability to explain cognition, is really just the sin belonging to intentional philosophy as a whole. Thanks to medial neglect, metacognition, blind to both its sources and its source blindness, insists we stand outside nature. Tackling this intuition with intentional idioms leaves our every attempt to rationalize our connection underdetermined, a matter of interminable controversy. The Scandal dwells on, eternal.

I think orientation precedes perspective—and obviously so, having watched loved ones dismantled by brain disease. I think understanding the role of neglect in orientation explains the peculiarities of perspective, provides a parsimonious way to understand the apparent first-person in terms of the neglect structure belonging to the third. There’s no problem with escaping the dream tank and touching the world simply because there’s no ontological distinction between ourselves and the cosmos. We constitute a small region of a far greater territory, the proximal attuned to the distal. Understanding the heuristic nature of ‘truth’ and ‘objectivity,’ I restrict their application to adaptive problem-ecologies, and simply ask those who would turn them into something ontologically exceptional why they would trust low-dimensional intuitions over empirical data, especially when those intuitions pretty much guarantee perpetual theoretical underdetermination. Far better to trust our childhood presumptions of truth and reality, the practical applications of these idioms, than any one of the numberless theoretical misapplications ‘discovering’ this trust fundamentally (as opposed to situationally) ‘naïve.’

The cognitive difference, what separates the consequences of our claims, has never been about ‘subjectivity’ versus ‘objectivity,’ but rather intersystematicity, the integration of ever-more sensitive orientations possessing ever more effectiveness into the superordinate systems encompassing us all. Physically speaking, we’ve long known that this has to be the case. Short of actual difference-making differences, be they photons striking our retinas or compression waves striking our eardrums and so on, no difference is made. Even Meillassoux acknowledges the necessity of physical contact. What we’ve lacked is a way of seeing how our apparently immediate intentional intuitions, be they phenomenological, ontological, or normative, fit into this high-dimensional—physical—picture.

Heuristic Neglect Theory not only provides this way, it also explains why it has proven so elusive over the centuries. HNT explains the wrong turn mentioned above. The question of orientation immediately cues the systems our ancestors developed to circumvent medial neglect. Solving for our behaviourally salient environmental relationships, in other words, automatically formats the problem in intentional terms. The automaticity of the application of intentional cognition renders it apparently ‘self-evident.’

The reason the critique of correlationism and speculative realism suffer all the problems of underdetermination their proponents attribute to correlationism is that they take this very same wrong turn. How is Meillassoux’s ‘hyper-chaos,’ yet another adventure in a priori speculation, anything more than another pebble tossed upon the heap of traditional disputation? Novelty alone recommends such speculations. Otherwise they leave us every bit as mystified, every bit as unable to accommodate the torrent of relevant scientific findings, and therefore every bit as irrelevant to the breathtaking revolutions even now sweeping us and our traditions out to sea. Like the traditions they claim to supersede, they peddle cognitive abjection, discursive immobility, in the guise of fundamental insight.

Theoretical speculation is cheap, which is why it’s so frightfully easy to make any philosophical account look bad. All you need do is start worrying definitions, then let the conceptual games begin. This is why the warrant of any account is always a global affair, why the power of Evolutionary Theory, for example, doesn’t so much lie in the immunity of its formulations to philosophical critique, but in how much it explains on nature’s dime alone. The warrant of Heuristic Neglect Theory likewise turns on the combination of parsimony and explanatory power.

Anyone arguing that HNT necessarily presupposes some X, be it ontological or normative, is simply begging the question. Doesn’t HNT presuppose the reality of intentional objectivity? Not at all. HNT certainly presupposes applications of intentional cognition, which, given medial neglect, philosophers posit as functional or ontological realities. On HNT, a theory can be true even though, high-dimensionally speaking, there is no such thing as truth. Truth talk possesses efficacy in certain practical problem-ecologies, but because it participates in solving something otherwise neglected, namely the superordinate systematicity of orientations, it remains beyond the pale of intentional resolution.

Even though sophisticated critics of eliminativism acknowledge the incoherence of the tu quoque, I realize this remains a hard twist for many (if not most) to absorb, let alone accept. But this is exactly as it should be, both insofar as something has to explain why isolating the wrong turn has proven so stupendously difficult, and because this is precisely the kind of trap we should expect, given the heuristic and fractionate nature of human cognition. ‘Knowledge’ provides a handle on the intersection of vast, high-dimensional histories, a way to manage orientations without understanding the least thing about them. To know knowledge, we will come to realize, is to know there is no such thing, simply because ‘knowing’ is a resolutely practical affair, almost certainly inscrutable to intentional cognition. When you’re in the intentional mode, this statement simply sounds preposterous—I know it once struck me as such! It’s only when you appreciate how far your intuitions have strayed from those of your childhood, back when your only applications of intentional cognition were practical, that you can see the possibility of a more continuous, intersystematic way to orient ourselves to the cosmos. There was a time before you wandered into the ancient funhouse of heuristic misapplication, when you could not distinguish between your perspective and your orientation. HNT provides a theoretical way to recover that time and take a radically different path.

As a bona fide theory of cognition, HNT provides a way to understand our spectacular inability to understand ourselves. HNT can explain ‘aporia.’ The metacognitive resources recruited for the purposes of philosophical reflection possess alarm bells—sensitivities to their own limits—relevant only to their ancestral applications. The kinds of cognitive aporias (crash spaces) characterizing traditional philosophy are precisely those we might expect, given the sudden ability to exercise specialized metacognitive resources out of school, to apply, among other things, the problem-solving power of intentional cognition to the question of intentional cognition.

As a bona fide theory of cognition, HNT bears as much on artificial cognition as on biological cognition, and as such, can be used to understand and navigate the already radical and accelerating transformation of our cognitive ecologies. HNT scales, from the subpersonal to the social, and this means that HNT is relevant to the technological madness of the now.

As a bona fide empirical theory, HNT, unlike any traditional theory of intentionality, will be sorted. Either science will find that metacognition actually neglects information in the ways I propose, or it won’t. Either science will find this neglect possesses the consequences I theorize, or it won’t. Nothing exceptional and contentious is required. With our growing understanding of the brain and consciousness comes a growing understanding of information access and processing capacity—and the neglect structures that fall out of them. The human brain abounds in bottlenecks, none of which are more dramatic than consciousness itself.

Cognition is biomechanical. The ‘correlation of thought and being,’ on my account, is the correlation of being and being. The ontology of HNT is resolutely flat. Once we understand that we only glimpse as much of our orientations as our ancestors required for reproduction, and nothing more, we can see that ‘thought,’ whatever it amounts to, is material through and through.

The evidence of this lies strewn throughout the cognitive wreckage of speculation, the alien crash site of philosophy.



[1] This includes, in addition to the neurosciences proper, research into Basic Behavioral and Social Science (8.597 billion), Behavioral and Social Science (22.515 billion), Brain Disorders (23.702 billion), Mental Health (13.699 billion), and Neurodegenerative (10.183 billion). 21/01/2017


The Discursive Meanie

by rsbakker

So I went to see Catherine Malabou speak on the relation between deep history, consciousness and neuroscience last night. As she did in her Critical Inquiry piece, she argued that some new conceptuality was required to bridge the natural historical and the human, a conceptuality that neuroscience could provide. When I introduced myself to her afterward, she recognized my name, said that she had read my post, “Malabou, Continentalism, and New Age Philosophy.” When I asked her what she thought, she blushed and told me that she thought it was mean.

I tried to smooth things over, but for most people, I think, expressing aggression in interpersonal exchanges is like throwing boulders tied to their waists. Hard words rewrite communicative contexts, and it takes the rest of the brain several moments to catch up. Once she tossed her boulder it was only a matter of time before the rope yanked her away. Discussion over.

I appreciate that I’m something of an essayistic asshole, and that academics, adapted to genteel communicative contexts as they are, generally have little experience with, let alone stomach for, the more bruising environs of the web. But then the near universal academic tendency to take the path of least communicative resistance, to foster discursive ingroups, is precisely the tendency Three Pound Brain is dedicated to exposing. The problem, of course, is that cuing people to identify you as a threat pretty much guarantees they will be unable to engage you rationally, as was the case here. Malabou had dismissed me, and so the dismissal of my arguments simply followed.

How does one rattle ingroup assumptions as an outgroup competitor, short of disguising oneself as an ingroup sympathizer, that is? Interesting conundrum, that. I suppose if I had more notoriety, they would feel compelled to engage me…

Is it time to rethink my tactics?

Malabou, Continentalism, and New Age Philosophy

by rsbakker

Perhaps it’s an ex-smoker thing, the fact that I was a continentalist myself for so many years. Either way, I generally find continental philosophical forays into scientific environs little more than exercises in conceptual vanity (see “Reactionary Atheism: Hagglund, Derrida, and Nooconservatism,” “Zizek, Hollywood, and the Disenchantment of Continental Philosophy,” or “Life as Perpetual Motion Machine: Adrian Johnston and the Continental Credibility Crisis”). This is particularly true of Catherine Malabou, who, as far as I can tell, is primarily concerned with cherry-picking those findings that metaphorically resonate with certain canonical continental philosophical themes. For me, her accounts merely demonstrate the deepening conceptual poverty of the continental tradition, a poverty dressed up in increasingly hollow declarations of priority. This is true of “One Life Only: Biological Resistance, Political Resistance,” but with a crucial twist.

In this piece, she takes continentalism (or ‘philosophy,’ as she humbly terms it) as her target, charging it with a pervasive conceptual prejudice. She wants to show how recent developments in epigenetics and cloning reveal what she terms the “antibiological bias of philosophy.” This bias is old news, of course (especially in these quarters), but Malabou’s acknowledgement is heartening nonetheless, at least to those, such as myself, who think the continental penchant for conceptual experimentation is precisely what contemporary cognitive science requires.

“Contemporary philosophy,” she claims, “bears the marks of a primacy of symbolic life over biological life that has not been criticized, nor deconstructed.” Her predicate is certainly true—continentalism is wholly invested in the theoretical primacy of intentionality—but her subsequent modifier simply exemplifies the way we humans are generally incapable of hearing criticisms from outside our own ingroup. After all, it’s the quasi-religious insistence on the priority of the intentional, the idea that armchair speculation on the nature of the intentional trumps empirical findings in this or that way, that has rendered continentalism a laughing-stock in the sciences.

But outgroup criticisms are rarely heard. Whatever ‘othering the other’ consists in, it clearly involves not only their deracination, but their derationalization, the denial of any real critical insight. This is arguably what makes the standard continental shibboleths of ‘scientism,’ ‘positivism,’ and the like so rhetorically effective. By identifying an interlocutor as an outgroup competitor, you assure your confederates will be incapable of engaging him or her rationally. Continentalists generally hear ideology instead of cogent criticism. The only reason Malabou can claim that the ‘primacy of the symbolic over the biological’ has been ‘neither criticized nor deconstructed’ is simply that so very few within her ingroup have been able to hear the outgroup chorus, as thunderous as it has been.

But Malabou is a party member, and to her credit, she has done anything but avert her eyes from the scientifically mediated revolution sweeping the ground from beneath all our feet. One cannot dwell in foreign climes without suffering some kind of transformation of perspective. And at long last she has found her way to the crucial question, the one which threatens to overthrow her own discursive institution, the problem of what she terms the “unquestioned splitting of the concept of life.”

She takes care, however, to serve up the problem with various appeals to continental vanity—to hide the poison in some candy, you might say.

It must be said, the biologists are of little help with this problem. Not one has deemed it necessary to respond to the philosophers or to efface the assimilation of biology to biologism. It seems inconceivable that they do not know Foucault, that they have never encountered the word biopolitical. Fixated on the two poles of ethics and evolutionism, they do not think through the way in which the science of the living being could—and from this point on should—unsettle the equation between biological determination and political normalization. The ethical shield with which biological discourse is surrounded today does not suffice to define the space of a theoretical disobedience to accusations of complicity among the science of the living being, capitalism, and the technological manipulation of life.

I can remember finding ignorances like these ‘inconceivable,’ thinking that if only scientists would ‘open their eyes’ (read so and so) they would ‘see’ (their conceptually derivative nature). But why should any biologist read Foucault, or any other continentalist for that matter? What distinguishes continental claims to the priority of their nebulous domain over the claims of say, astrology, particularly when the dialectical strategies deployed are identical? Consider what Manly P. Hall has to say in The Story of Astrology:

Materialism in the present century has perverted the application of knowledge from its legitimate ends, thus permitting so noble a science as astronomy to become a purely abstract and comparatively useless instrument which can contribute little more than tables of meaningless figures to a world bankrupt in spiritual, philosophical, and ethical values. The problem as to whether space is a straight or a curved extension may intrigue a small number of highly specialized minds, but the moral relationship between man and space and the place of the human soul in the harmony of the spheres is vastly more important to a world afflicted with every evil that the flesh is heir to. (Manly P. Hall, The Story of Astrology: The Belief in the Stars as a Factor in Human Progress, Cosimo, Inc., 2005, p. 8.)

Sound familiar? If you’ve read any amount of continental philosophy it should. One can dress up the relation between the domains differently, but the shape remains the same. Where astronomy is merely ontic or ideological or technical or what have you, astrology ministers to the intentional realities of lived life. The continentalist would cry foul, of course, but the question isn’t so much one of what they actually believe as one of how they appear. Insofar as they place various, chronically underdetermined speculative assertions before the institutional apparatuses of science, they sound like astrologers. Their claims of conceptual priority, not surprisingly, are met with incredulity and ridicule.

The fact that biologists neglect Foucault is no more inconceivable than the fact that astronomers neglect Hall. In science, credibility is earned. Everybody but everybody thinks they’ve won the Magical Belief Lottery. The world abounds with fatuous, theoretical claims. Some claims enable endless dispute (and, for a lucky few, tenure), while others enable things like smartphones, designer babies, and the detonation of thermonuclear weapons. Since there’s no counting the former, the scientific obsession with the latter is all but inevitable. Speculation is cheap. Asserting the primacy of the symbolic over the natural on speculative grounds is precisely the reason why scientists find continentalism so bizarre.

Akin to astrology.

Now historically, at least, continentalists have consistently externalized the problem, blaming their lack of outgroup credibility on speculative goats like the ‘metaphysics of presence,’ ‘identity thinking,’ or some other combination of ideology and ontology. Malabou, to her credit, wants ‘philosophy’ to partially own the problem, to see the parsing of the living into symbolic and biological as something that must itself be argued. She offers her quasi-deconstructive observations on recent developments in epigenetics and cloning as a demonstration of that need, as examples of the ways the new science is blurring the boundaries between the intentional and the natural, the symbolic and the biological, and therefore outrunning philosophical critiques that rely upon their clear distinction.

This blurring is important because Malabou, like almost all continentalists, fears for the future of the political. Reverse engineering biology amounts to placing biology within the purview of engineering, of rendering all nature plastic to human whim, human scruple, human desire. ‘Philosophy’ may come first, but (for reasons continentalists are careful never to clarify) only science seems capable of doing any heavy lifting with their theories. One need only trudge the outskirts of the vast swamp of neuroethics, for instance, to get a sense of the myriad conundrums that await us on the horizon.

And this leads Malabou to her penultimate statement, the one which I sincerely hope ignites soul-searching and debate within continental philosophy, lest the grand old institution become indistinguishable from astrology altogether.

And how might the return of these possibilities offer a power of resistance? The resistance of biology to biopolitics? It would take the development of a new materialism to answer these questions, a new materialism asserting the coincidence of the symbolic and the biological. There is but one life, one life only.

I entirely agree, but I find myself wondering what Malabou actually means by ‘new materialism.’ If she means, for instance, that the symbolic must be reduced to the natural, then she is referring to nothing less than the long-standing holy grail of contemporary cognitive science. Until we can understand the symbolic in terms continuous with our understanding of the natural world, it’s doomed to remain a perpetually underdetermined speculative domain—which is to say, one void of theoretical knowledge.

But as her various references to the paradoxical ‘gap’ between the symbolic and the biological suggest, she takes the irreducibility of the symbolic as axiomatic. The new materialism she’s advocating is one that unifies the symbolic and the biological, while somehow respecting the irreducibility of the symbolic. She wants a kind of ‘type-B materialism,’ one that asserts the ontological continuity of the symbolic and the biological, while acknowledging their epistemic disparity or conceptual distinction. David Chalmers, who coined the term, characterizes the problem faced by such materialisms as follows:

I was attracted to type-B materialism for many years myself, until I came to the conclusion that it simply cannot work. The basic reason for this is simple. Physical theories are ultimately specified in terms of structure and dynamics: they are cast in terms of basic physical structures, and principles specifying how these structures change over time. Structure and dynamics at a low level can combine in all sorts of interesting ways to explain the structure and function of high-level systems; but still, structure and function only ever adds up to more structure and function. In most domains, this is quite enough, as we have seen, as structure and function are all that need to be explained. But when it comes to consciousness, something other than structure and function needs to be accounted for. To get there, an explanation needs a further ingredient. (“Moving Forward on the Problem of Consciousness”)

Substitute ‘symbolic’ for ‘consciousness’ in this passage, and Malabou’s challenge becomes clear: science, even in the cases of epigenetics and cloning, deals with structure and dynamics—mechanisms. As it stands we lack any consensus-commanding way of explaining the symbolic in mechanistic terms. So long as the symbolic remains ‘irreducible,’ or mechanistically inexplicable, assertions of ontological continuity amount to no more than that, bald assertions. Short some plausible account of that epistemic difference in ontologically continuous terms, type-B materialisms amount to little more than wishing upon traditional stars.

It’s here where we can see Malabou’s institutional vanity most clearly. Her readings of epigenetics and cloning focus on the apparently symbolic features of the new biology—on the ways in which organisms resemble texts. “The living being does not simply perform a program,” she writes. “If the structure of the living being is an intersection between a given and a construction, it becomes difficult to establish a strict border between natural necessity and self-invention.”

Now the first, most obvious criticism of her reading is that she is the proverbial woman with the hammer, poring through the science, seeing symbolic nails at every turn. Are epigenetics and cloning intrinsically symbolic? Do they constitute a bona fide example of a science beyond structure and dynamics?

Certainly not. Science can reverse engineer our genetic nature precisely because our genetic nature is a feat of evolutionary engineering. This kind of theoretical cognition is so politically explosive precisely because it is mechanical, as opposed to ‘symbolic.’ Researchers now know how some of these little machines work, and as a result they can manipulate conditions in ways that illuminate the function of other little machines. And the more they learn, the more mechanical interventions they can make, the more plastic (to crib one of Malabou’s favourite terms) human nature becomes. The reason these researchers hold so much of our political future in their hands is precisely because their domain (unlike Malabou’s) is mechanical.

For them, Malabou’s reading of their fields would be obviously metaphoric. Malabou’s assumption that she is seeing the truth of epigenetics and cloning, that they have to be textual in some way rather than merely lending themselves to certain textual (deconstructive) metaphors, would strike them as comically presumptuous. The blurring that she declares ontological, they would see as epistemic. To them, she’s just another humanities scholar scrounging for symbolic ammunition, for confirmation of her institution’s importance in a time of crisis. Malabou, like Manly P. Hall, can rationalize this dismissal in any number of ways; this goes without saying. Her problem, like Hall’s, is that only her confederates will agree with her. She has no real way of prosecuting her theoretical case across ingroup boundaries, and so no way of recouping any kind of transgroup cognitive legitimacy, no way of reversing the slow drift of ‘philosophy’ to the New Age section of the bookstore.

The fact is Malabou begins by presuming the answer to the very question she claims to be tackling: What is the nature of the symbolic? To acknowledge that continental philosophy is a speculative enterprise is to acknowledge that continental philosophy has solved nothing. The nature of the symbolic, accordingly, remains an eminently open question (not to mention an increasingly empirical one). The ‘irreducibility’ of the symbolic order is no more axiomatic than the existence of God.

If the symbolic were, say, ecological, the product of evolved capacities, then we can safely presume that the symbolic is heuristic, part of some regime for solving problems on the cheap. If this were the case, then Malabou is doing nothing more than identifying the way different patterns in epigenetics and cloning readily cue a specialized form of symbolic cognition. The fact that symbolic cognition is cued does not mean that epigenetics and cloning are ‘intrinsically symbolic,’ only that they readily cue symbolic cognition. Given the vast amounts of information neglected by symbolic cognition, we can presume its parochialism, its dependence on countless ecological invariants, namely, the causal structure of the systems involved. Given that causal information is the very thing symbolic cognition has adapted to neglect, we can presume that its application to nature would prove problematic. This raises the likelihood that Malabou is simply anthropomorphizing epigenetics and cloning in an institutionally gratifying way.

So is the symbolic heuristic? It certainly appears to be. At every turn, cognition makes do with ‘black boxes,’ relying on differentially reliable cues to leverage solutions. We need ways to think outcomes without antecedents, to cognize consequences absent any causal factors, simply because the complexities of our environments (be they natural, social, or recursive) radically outrun our capacity to intuit. The bald fact is that the machinery of things is simply too complicated to cognize on the evolutionary cheap. Luckily, nature requires nothing as extravagant as mechanical knowledge of environmental systems to solve those systems in various, reproductively decisive ways. You don’t need to know the mechanical details of your environments to engineer them. So long as those details remain relatively fixed, you can predict/explain/manipulate them via those correlated systematicities you can access.

We genuinely need things like symbolic cognition, regimes of ecologically specific tools, for the same reason we need scientific enterprises like biology: because the machinery of most everything is either too obscure or too complex. The information we access provides us cues, and since we neglect all information pertaining to what those cues relate us to, we’re convinced that cues are all that is the case. And since causal cognition cannot duplicate the cognitive shorthand of the heuristics involved, they appear to comprise an autonomous order, to be something supernatural, or to use the prophylactic jargon of intentionalism, ‘irreducible.’ And since the complexities of biology render these heuristic systems indispensable to the understanding of biology, they appear to be necessary, to be ‘conditions of possibility’ of any cognition whatsoever. We are natural in such a way that we cannot cognize ourselves as natural, and so cognize ourselves otherwise. Since this cognitive incapacity extends to our second-order attempts to cognize our cognizing, we double down, metacognize this ‘otherwise’ in otherwise terms. Far from any fractionate assembly of specialized heuristic tools, symbolic cognition seems to stand not simply outside, but prior to the natural order.

Thus the insoluble conundrums and interminable disputations of Malabou’s ‘philosophy.’

Heuristics and metacognitive neglect provide a way to conceive symbolic cognition in wholly natural terms. Blind Brain Theory, in other words, is precisely the ‘new materialism’ that Malabou seeks. The problem is that it seems to answer Malabou’s question regarding the political in the negative, to suggest that even the concept of ‘resistance’ belongs to a bygone and benighted age. To understand the coincidence of the symbolic and biological, the intentional and the natural, one must understand the biology of philosophical reflection, and the way we were evolutionarily doomed to think ourselves something quite distinct from what we in fact are (see “Alien Philosophy,” part one and two). One must turn away from the old ways, the old ideas, and dare to look hard at the prospect of a post-intentional future. The horrific prospect.

Odds are we were wrong, folks. The assumption that science, the great killer of cognitive traditions, will make an exception for us, will somehow redeem our traditional understanding of ourselves, is becoming increasingly tendentious. We simply do not have the luxury of taking our cherished, traditional conceits for granted—at least not anymore. The longer continental philosophy pretends to be somehow immune, or even worse, to somehow come first, the more it will come to resemble those traditional discourses that, like astrology, refuse to relinquish their ancient faith in abject speculation.

Life as Perpetual Motion Machine: Adrian Johnston and the Continental Credibility Crisis

by rsbakker

In Thinking, Fast and Slow, Daniel Kahneman cites the difficulty we have distinguishing experience from memory as the reason why we retrospectively underrate our suffering in a variety of contexts. Given the same painful medical procedure, one would expect an individual suffering for twenty minutes to report a far greater amount of pain than an individual suffering for half that time or less. Such is not the case. As it turns out, duration has “no effect whatsoever on the ratings of total pain” (380). Retrospective assessments, rather, seem determined by the average of the pain’s peak and its coda.

Absent intellectual effort, the default is to remove the band-aid slowly.

Far from being academic, this ‘duration neglect,’ as Kahneman calls it, places the therapist in something of a bind. What should the physician’s goal be? The reduction of the pain actually experienced, or the reduction of the pain remembered? Kahneman provocatively frames the problem as a question of choosing between selves, the ‘experiencing self’ that actually suffers the pain and the ‘remembering self’ that walks out of the clinic. Which ‘self’ should the therapist serve? Kahneman sides with the latter. “Memories,” he writes, “are all we get to keep from our experience of living, and the only perspective that we can adopt as we think about our lives is therefore that of the remembering self” (381). As he continues:

“Confusing experience with the memory of it is a compelling cognitive illusion—and it is the substitution that makes us believe a past experience can be ruined. The experiencing self does not have a voice. The remembering self is sometimes wrong, but it is the one that keeps score and governs what we learn from living, and it is the one that makes decisions. What we learn from the past is to maximize the qualities of our future memories, not necessarily of our future experience. This is the tyranny of the remembering self.” 381

There are many, many ways to parse this fascinating passage, but what I’m most interested in is the brand of tyranny Kahneman invokes here. The use is metaphoric, of course, referring to some kind of ‘power’ that remembering possesses over experience. But this ‘power over’ isn’t positive: the ‘remembering self’ is no ‘tyrant’ in the interpersonal or political sense. We aren’t talking about a power that one agent holds over another, but rather the way facts belonging to one capacity, experiencing, regularly find themselves at the mercy of another, remembering.

Insofar as the metaphor obtains at all, you could say the power involved is the power of selection. Consider the sum of your own sensorium this very moment—the nearly sub-audible thrum of walled-away urban environs, the crisp white of the screen, the clamour of meandering worry on your margins, the smell of winter drafts creeping through lived-in spaces—and think of how wan and empty it will have become when you lie in bed this evening. With every passing heartbeat, the vast bulk of experience is consigned to oblivion, stranding us with memories as insubstantial as coffee-rings on a glossy magazine.

It has to be this way, of course, for both brute biomechanical and evolutionary developmental reasons. The high-dimensionality of experience speaks to the evolutionary importance of managing ongoing environmental events. The biomechanical complexity required to generate this dimensionality, however, creates what might be called the Problem of Indisposition. Since any given moment of experience exhausts our capacity to experience, each subsequent moment of experience all but utterly occludes the moment prior. The astronomical amounts of information constitutive of momentary experience are all but lost, ‘implicit’ in the systematic skeleton of ensuing effects to be sure, but inaccessible to cognition all the same.

Remembering, in other words, is radically privative. As a form of subsequent experiencing, the machinery involved in generating the experience remembered has been retasked. Accordingly, the question of just what gets selected becomes all important. The phenomenon of duration neglect noted above merely highlights one of very many kinds of information neglected. In this instance, it seems, evolution skimped on the metacognitive machinery required to reliably track and rationally assess certain durations of pain. Remembering the peak and coda apparently packed a bigger reproductive punch.

Kahneman likens remembering to a tyrant because selectivity, understood at the level of agency, connotes power. The automaticity of this selectivity, however, suggests that abjection is actually the better metaphor, that far from being a tyrant, remembering is more a captive to the information available, more a prisoner in Plato’s Cave, than any kind of executive authority.

If any culprit deserves the moniker of ‘tyrant’ here, it has to be neglect. Why do so many individuals choose to remove the band-aid slowly? Because information regarding duration plays far less a role than information regarding intensity. Since the mechanisms responsible for remembering systematically neglect such information, that information possesses no downstream consequences for the machinery of decision-making. What we have traditionally called memory consists of a fractionate system of automata scattered throughout the brain. What little they cull from experiencing is both automatic and radically heuristic. Insofar as the metaphor of ‘tyrant’ applies at all, it applies to the various forms of neglect suffered by conscious cognition, the myriad scotomas constraining the possibilities of ‘remembering experience’—or metacognition more generally.

Kahneman’s distinction wonderfully illustrates the way the lack of information can have positive cognitive effects. Band-aids get pulled slowly because only a spare, evolutionarily strategic fraction of experiencing can be remembered. We only recall enough of experience, it seems safe to assume, to solve the kinds of problems impacting our paleolithic ancestors’ capacity to reproduce. This raises the general question of just what kinds of problems we should expect metacognition—given the limitations of its access and resources—to be able to solve.

Or put more provocatively, the question that philosophy has spent millennia attempting to evade in the form of skepticism: If we don’t possess the metacognitive capacity to track the duration of suffering, why should we expect theoretical reflection to possess the access and capacity to theoretically cognize the truth of experience otherwise? Given the sheer complexity of the brain, the information consciously accessed is almost certainly adapted to various, narrow heuristic functions. It’s easy to imagine specialized metacognitive access and processing adapting to solve specialized problems possessing reproductive benefits. But it seems hard to imagine why evolution would select for the ability to theoretically intuit experience for what it is. Even worse, theoretical reflection is an exaptation, a cultural achievement. As such, we should expect it to be a naive metacognitive consumer, taking all information absent any secondary information regarding that information’s sufficiency.

In other words, not only should we expect theoretical reflection to be blind, we should also expect it to be blind to its own blindness.

It is this question of neurobiological capacity and evolutionary problem-solving that I want to bring to Adrian Johnston’s project to materially square the circle of subjectivity—or as he puts it, to secure “the possibility of a gap between, on the one hand, a detotalized, disunified plethora of material substances riddled with contingencies and conflicts and, on the other hand, the bottom-up surfacing out of these substances of the recursive, self-relating structural dynamics of cognitive, affective, and motivational subjectivity—a subjectivity fully within but nonetheless free at certain levels from material nature” (209).

I’ve considered several attempts by different Continental philosophers to deal with the challenges posed by the sciences of the mind over the past three years: Quentin Meillassoux in CAUSA SUIcide, Levi Bryant in The Ptolemaic Restoration, Martin Hagglund in Reactionary Atheism, and Slavoj Zizek in Zizek Hollywood, each of which has received thousands of views. With Meillassoux I focussed on his isolation of ‘correlation’ as a problematic ontological assumption, and the way he seemed to think he need only name it as such, and all the problems of subjectivity raised by Hume and normativity raised by Wittgenstein could just be swept under the philosophical rug. With Bryant I focussed on the problem of dogmatic ontologism, the notion that naming correlation as a problem somehow warranted a return to the good old preKantian days, where we could make ontological assertions without worrying about our capacity to make such claims. With Hagglund I raised issues with his interpretation of Derrida as an early thinker of ‘ultratranscendental materialism,’ showing how the concepts at issue were intentional through and through, and thus thoroughly incompatible with the natural scientific project. With Zizek I focussed on the way his deflationary ontology of negative subjectivity arising from some ‘gap’ in the real, aside from simply begging all the questions it purported to answer, amounted to an ontologization of what is far more parsimoniously explained as a cognitive illusion.

And, of course, I took the opportunity to demonstrate the explanatory power of the Blind Brain Theory in each case, the way each of these approaches actually exploit various metacognitive illusions to make their case.

Now, having recently completed Johnston’s Prolegomena to Any Future Materialism: The Outcome of Contemporary French Philosophy, I’ve come to realize that these thinkers* are afflicted with the same set of recurring problems, problems which must be overcome if anything approaching a compelling account of the kind Johnston sets as his goal is to be had. These might be enumerated as follows:

Naivete Problem: With the qualified exception of Zizek, these authors seem largely (and in some cases entirely) ignorant of the enormous philosophical literature dealing with the problems intentionality poses for materialism/physicalism. They also seem to have scant knowledge of the very sciences they claim to be ‘grounding.’

No Cognitive Guarantee Problem: These authors take it as given that radical self-deception is simply not a possible outcome of a mature neuroscience: that something resembling subjectivity as remembered is ‘axiomatic.’ In all fairness, this is a common presumption of those critical of the eliminativist implications of the sciences of the brain. Rose and Abi-Rached, for instance, make it the centrepiece of their attempt to defang the neuroscientific threat to social science in their Neuro: The New Brain Sciences and the Management of the Mind. (Their strategy is twofold: on the one hand, they (like some of the authors considered here) give a conveniently narrow characterization of the threat in terms of subjectivity, arguing that the findings of neuroscience in this regard are simply confirming the subject-decentering theoretical insights already motivating much of the social sciences. Then they essentially cherry-pick researchers and commentators in the field who confirm their thesis without giving dissenters a hearing.) The unsettling truth is that wholesale, radical deception regarding who and what we are is entirely possible (evolution only cares about accuracy insofar as it pays reproductive dividends), and actually already a matter of empirical fact regarding a handful of cognitive capacities.

Talk Is Cheap Problem: There is a decided tendency among these authors to presume the effectiveness of metaphysical argumentation, to not only think that ontological claims merit serious attention in the sciences, but that the threat posed is merely ideological and not material. Rehearsing old arguments against determinism (especially when it’s the Second Law of Thermodynamics that needs to be refuted) will make no difference whatsoever once the brain ceases to be a ‘grey box’ and becomes continuous with our technology.

Implausible Continuity Problem: All of these authors ignore what I call the Big Fat Pessimistic Induction: the fact that, all things being equal, we should expect science to revolutionize the human as radically as it has revolutionized every other natural domain now that the brain has become empirically tractable. They assume, rather, that the immunity the opacity of the brain had granted their tradition historically will somehow continue.

Metacognitive Reliability Problem: All of these authors overlook the potentially crippling issue of metacognitive deception, despite the mounting evidence of metacognitive unreliability. I should note that this tendency is common in Analytic Philosophy of Mind as well (but less and less so as the years pass).

Intentional Dissociation Problem: All of these authors characterize the cognitive scientific threat in the narrow terms of subjectivity rather than intentionality broadly construed, the far more encompassing rubric common to Analytic philosophy. Given the long Continental tradition of critiquing commonly held conceptions of subjectivity, the attractiveness of this approach is understandable, but no less myopic.

I think Prolegomena to Any Future Materialism: The Outcome of Contemporary French Philosophy suffers from all these problems—clearly so. What follows is not so much a review—I’ll await the final book of his trilogy for that (for a far more balanced consideration see Stephan Craig Hickman’s serial review here, here, here, and here)—as a commentary on the general approach one finds in many Continental materialisms as exemplified by Johnston. What all these authors want is some way of securing—or salvaging—some portion of the bounty of spirit absent spirit. They want intentionality absent theological fantasy, and materialism absent nihilistic horror. What I propose is a discussion of the difficulties any such project must overcome—a kind of prolegomena to Johnston’s Prolegomena—and a demonstration why he cannot hope to succeed short of embracing the very magical thinking he is so quick to deride.

Insofar as this is a blog post, part of a living, real-time debate, I heartily encourage partisans of his approach to sound off. I am by no means a scholar of any of these authors, so I welcome corrections of misinterpretations. Strawmen teach few lessons, and learn none whatsoever. But I also admit to a certain curiosity given the optimistic stridency of so much of Johnston’s rhetoric. “From my perspective,” he writes in a recent interview, “these naturalists are overconfident aggressors not nearly as well-armed as they believe themselves to be. And, the anti-naturalists react to them with unwarranted fear, buying into the delusions of their foes that these enemies really do wield scientifically-solid, subject-slaying weapons.” I’m sure everyone reading this would love to see what kind of walk accompanies this talk! From my quite contrary perspective, the only way a book like this could be written is for the lack of any sustained interaction with those holding contrary views. Write for your friends long enough, and your writing becomes friendly.

In my own terms, Johnston is an explicit proponent of what might be called noocentrism, the last bastion, now that geocentrism and biocentrism have been debunked, of the intuition that we are something special. Freud, of course, famously claimed to have accomplished this overthrow, to have inflicted the third great ‘narcissistic wound,’ when he had only camouflaged the breastworks by carving intentionality along different mortices. Noocentrism represents an umbrella commitment to our metacognitive intuitions regarding the various efficacies of experience, and these are the intuitions that Johnston explicitly seeks to vindicate. He is ‘preoccupied,’ as he puts it, “with constructing an ontology of freedom” (204). Since any such ontology contradicts the prevailing understanding of the natural arising out of the sciences–how can freedom arise in a nature where everything is in-between, a cog for indifferent forces?–the challenge confronting any materialism is one of explaining subjectivity in a materially consistent manner. As he puts it in his recent Society and Space interview:

“For me, the true ultimate test of any and every materialism is whether it can account in a strictly materialist (yet non-reductive) fashion for those phenomena seemingly most resistant to such an account. Merely dismissing these phenomena (first and foremost, those associated with subjectivity) as epiphenomenal relative to a sole ontological foundation (whether as Substance, Being, Otherness, Flesh, Structure, System, Virtuality, Difference, or whatever else) fails this test and creates many more problems than it supposedly solves.”

Naturalizing consciousness and intentionality—or in Johnston’s somewhat antiquated jargon, explaining the material basis of subjectivity—is without a doubt the holy grail, not only of contemporary philosophy of mind, but of several sciences as well. And he is quite right to insist, I think, that any such naturalization that simply eliminates intentional phenomena (along the lines of Alex Rosenberg’s position, say) hasn’t actually naturalized anything at all. If consciousness and intentionality don’t exist as we intuit them, then we need some account of why we intuit them as such. Elimination, in other words, has to explain why elimination is required in the first place.

But global eliminativist materialist approaches (such as Rosenberg’s and my own) are actually very rare. In contemporary debates, philosophers and researchers tend to be eliminativists or antirealists about specific intentional phenomena (qualia, content, norms, and so on) rather than about all intentional phenomena. This underscores two problems that loom large over Johnston’s account, at least as it stands in this first volume. The first has to do with what I called the Intentional Dissociation Problem above, the fact that the problem of subjectivity is simply a subset of the larger problem of intentionality. It falls far short of capturing the ‘problem space’ that Johnston purports to tackle. Some philosophers (Pete Mandik comes to mind) are eliminativists about subjectivity, yet realists about other semantic phenomena.

The second has to do with the fact that throughout the course of the book he repeatedly references reductive and eliminative materialisms as his primary rhetorical foil without actually engaging any of the positions in any meaningful way. Instead he references Catherine Malabou’s perplexing work on neuroplasticity, stating that “one need not fear that bringing biology into the picture of a materialist theory of the subject leads inexorably to a reductive materialism of a mechanistic and/or eliminative sort; such worries are utterly unwarranted, based exclusively on an unpardonable ignorance of several decades of paradigm-shifting discoveries in the life sciences” (Prolegomena, 29). Why? Apparently because epigenetics and neural plasticity “ensure the openness of vectors and logics not anticipated or dictated by the bump-and-grind efficient causality of physical particles alone” (29).

Comments like these—and one finds them scattered throughout the text—demonstrate a problematic naivete regarding his subject matter. One could point out that quantum indeterminacy actually governs the ‘determinism’ he attributes to physical particles. But the bigger problem—the truly ‘unpardonable ignorance’—is that it shows how little he seems to understand the very problem he has set out to solve. His mindset seems to be as antiquated as the sources he cites. He seems to think, for instance, that ‘mechanism’ in the brain sciences refers to something nonstochastic, ‘clockwork,’ that the spectre of Laplace is what drives the unwarranted claims of reductive/eliminative materialists. ‘Decades of research revealing indeterminacy, and still they speak of mechanisms?’

As hard as it is to believe, Johnston pretty clearly thinks the primary problem materialism poses for subjectivity is the problem of determinism. But the problem, simply put, is nothing other than the Second Law of Thermodynamics, the exceptionless irreflexivity of the natural. Ontological freedom is every bit as incompatible with the probabilistic as it is the determined. The freedom of noise is no freedom at all.

This, without a doubt, is his single biggest argumentative oversight, the one that probably explains his wholesale dismissal of any would-be detractor such as myself. His foe here is entropy, not some anachronistic conception of clockwork determinism. Only an appreciation of this allows an appreciation of the difficulty of the task Johnston has set himself. Forget the thousands of years of tradition, the lifetime of familiarity, the system of concepts it anchors, forget that Johnston is arguing for the most beloved thing—your exceptionality—set aside all this, and what remains, make no mistake, is a perpetual motion machine, something belonging to reality but obeying laws of its own.

So how does one theoretically rationalize a perpetual motion machine?

The metaphor is preposterous, of course, even though it remains analogous in the most important respect. Johnston literally believes it’s possible to “be a partisan of a really and indissolubly free subject while simultaneously and without incoherence or self-contradiction remaining entirely faithful to the uncompromising atheism and immanentism of the combative materialist tradition” (176). He thinks that certain real, physical systems (you and me, as luck would have it) do not obey physical law, at least not the way every single system effectively explained through the history of natural science obeys physical law.

What makes the metaphor preposterous, however, is the apparent immediacy of subjectivity, the way it strikes us as a source of some kind upon reflection, hemmed not by astronomical neural complexities, but by rules, goals, rationality. In a basic sense, what could be more obvious? This is what we experience!

Or… is it just what we remember?

And here’s the rub. The problem that Johnston has set himself to solve is a dastardly one indeed, far, far more difficult than he seems to imagine. Even with the dazzling assurance of experience, a perpetual motion machine is a pretty damn hard thing to explain. The fact that most everyone is dazzled by subjectivity in its myriad guises doesn’t change the fact that they are, quite explicitly, betting on a perpetual motion machine. There’s a reason, after all, why everyone but everyone who’s attempted what Johnston has set out to achieve has failed. “Empty-handed adversaries,” as Johnston claims in the same interview, “do not deserve to be feared.” But if they’re empty-handed, then they must know kung-fu, or something lethal, because so far they’ve managed to kill every single theory such as his!

But when you start interrogating that ‘dazzling assurance,’ when you consider just how much we remember, things become even more difficult for Johnston. Because the fact is, we really don’t remember all that much. Certain things escape memory simply because they escape experience altogether. Our brains, for instance, have no more access to the causal complexities of their own function than they do to those of others, so they rely on powerful, yet imperfect systems, ‘fast and frugal heuristics,’ to solve (explain, predict, and manipulate) themselves and others. When abnormalities occur in these systems, such as those belonging, say, to autism spectrum disorder, our capacity to solve is impaired.

As the history of philosophy attests, we seem to experience next to nothing regarding the actual function of these systems, or at least nothing we can remember in the course of pondering our various forms of intentional problem solving. All we seem to intuit are a series of problem-solving modes that we simply cannot square with the problem-solving modes we use to engineer and understand mechanical systems. And, most importantly, we seem to experience (or remember) nothing of just how little we experience (or remember). And so the armchair perpetually remains a live option.

I say ‘most importantly’ because this means remembering doesn’t simply overlook its incapacities, it neglects them. When it comes to experience, we remember everything there is to be remembered, always. We rarely have any inkling of what’s bent, bleached, or lost. What is lost to the system, does not exist for the system, even as something lost.

Add neglect and suddenly a good number of intentional peculiarities begin to make frightening sense. Why, for instance, should we be surprised that problem solving modes adapted to solve complex causal systems absent causal information cannot themselves make sense of causal information? We are mechanically embedded in our environments in such a way that we cannot cognize ourselves as so embedded, and so are forced to cognize ourselves otherwise, acausally, relying on heuristics that theoretical reflection transforms into rules, goals, and reasons, hazy obscurities at the limits of discrimination.

We are astronomically complicated causal systems that cannot remember themselves as such, amnesiac machines that take themselves for perpetual motion machines for the profundity of their forgetting. At any given moment, what we remember is all there is; there is nothing else to blame, no neuromechanistic background we might use to place our thoughts and experiences in their actual functional context, namely, the machinery that bullets and spirochetes and beta-amyloid plaques can destroy. We do not simply lack the access and the resources to intuit ourselves for what we are (something), we lack the resources to intuit this lack of resources. Thus the myth of perpetual motion, our conviction in what Johnston calls the “self-determining spontaneity of transcendental subjects.”

The limits of remembering, in other words, provide an elegant, entirely naturalistic explanation for our metacognitive intuitions of spontaneity, the almost inescapable sense that thought has to represent some kind of fundamental discontinuity in being. Since we cannot cognize the actual activity of cognition, that activity—the function of flesh and blood neural circuits that would seize were you to suffer a middle cerebral artery stroke this instant—does not exist for metacognition. All the informational dimensions of this medial functionality, the dimensions of the material, vanish into oblivion, stranding us with a now that always seems to be the same now, despite its manifest difference, a life that is always in the mysterious process of just beginning.

But Johnston doesn’t buy this story. For him, we actually do remember everything we need to remember to theoretically fathom experience. For him, the fact of subjectivity is nothing less than an “axiomatic intuition” (204), as dazzling as dazzling can be. He never explains how this magic might be possible, how any brain could possibly possess the access and resources to fathom its structure and dynamics in anything but radically privative ways, but then he’s not even aware this is a problem (or more likely, he assumes Freud and Lacan have already solved this problem for him). For him, self-determining spontaneity—perpetual motion—is simply a positive fact of what we are. Everything is remembered that needs to be remembered.

The problem, he’s convinced, doesn’t lie with us. So in order to pass his own test, to craft a materialism absent cryptotheological elements that nevertheless explains (as opposed to explains away) all the perplexing phenomena of intentionality, he needs some different account of nature.

He’s not alone in this regard. The vast majority of theorists who tackle the many angles of this problem are intentional realists of some description. But for many, if not most of them, the tactic is to posit empirical ignorance: though we presently cannot puzzle through the conundrums of intentional phenomena, proponents of so-called ‘spooky emergence’ contend, advances in cognitive neuroscience (and/or physics) will somehow vindicate our remembering. Consciousness and intentionality, they believe, are emergent phenomena, novel physical properties pertaining to as yet unknown natural mechanisms.

Johnston also appropriates the term ‘emergentism’ to describe his project, but it’s hard to see it as much more than a ‘cool by association’ ploy. Emergentism provides a way for physicalists (materialists) to redeem something ‘perpetual enough’ short of committing to ontological pluralism. Emergentists, in other words, are naturalists, convinced that “philosophy can and should limit itself to a deontologized epistemology with nothing more than, at best, a complex conception of the cognizing mental apparatus” (204).

This ‘article of faith,’ however, is one that Johnston explicitly rejects, claiming that “thought cannot indefinitely defer fulfilling its duty to build a realist and materialist ontology” (204). So be warned, no matter how much he helps himself to the term, Johnston is no ‘emergentist’ in the standard sense. He’s an avowed ontologist, as he has to be, given the Zizekian frame he uses to mount his theoretical chassis. “[A] theory of the autonomous negativity of self-relating subjectivity always is accompanied, at a minimum implicitly, by the shadow of a picture of being (as the ground of such subjectivity) that must be made explicit sooner or later” (204). Elsewhere, he writes, “I am tempted to characterize my transcendental materialism as an emergent dual-aspect monism, albeit with the significant qualification that these ‘aspects’ and their eradicable divisions (such as mind and matter, the asubjective and subjectivity, and the natural and the more-than-natural) enjoy the heft of actual existence” (180), that is, he’s a kind of dual-aspect monist so long as the dualities are not aspectual!

Insofar as perpetual motion machines (like autonomous subjects) pretty clearly violate nature as science presently conceives it, one might say that Johnston’s ontological emergentism is honest in a manner that naturalistic emergentism is not. As an eliminative naturalist who finds the notion of systems that violate the laws of physics arising as a consequence of those laws ‘spooky,’ I’m inclined to think so. But in avoiding one credibility conundrum he has simply inherited another, namely, our manifest inability to arbitrate ontological claim-making.

Johnston himself recognizes this problem of ontological credibility, insofar as he makes it the basis of his critiques of Badiou and Meillassoux, who suffer, he argues, “from a Heideggerean hangover, specifically, an acceptance unacceptable for (dialectical) materialism of the veracity of ontological difference, or a clear-cut distinction between the ontological and the ontic” (170). “Genuine materialism,” as he continues, “does not grant anyone the low-effort luxury of fleeing into the uncluttered, fact-free ether of ‘fundamental ontology’ serenely separated from the historically shifting stakes of ontic disciplines” (171). And how could it, now that the machinery of human cognition itself lies on the examination table? He continues, “Although a materialist philosophy cannot be literally falsifiable as are Popperian sciences, it should be contestable as receptive, responsive, and responsible vis-a-vis the sciences” (171).

This, for me, is the penultimate line of the book, the thread from which the credibility of Johnston’s whole project hangs. As Johnston poses the dilemma:

“… the quarrels among the prior rationalist philosophers about being an sich are no more worth taking philosophically seriously than silly squabbles between sci-fi writers about whose concocted fantasy-world is truer or somehow more ‘superior’ than the others; such quarrels are nothing more than fruitless comparisons between equally hallucinatory apples and oranges, again resembling the sad spectacle of a bunch of pulp fiction novelists bickering over the correctness-without-criteria of each others’ fabricated imaginings and illusions.” 170

And yet nowhere could I find any explanation of how his own ontology manages to avoid this ‘fantasy world trap,’ to be ‘receptive’ or ‘responsive’ or ‘responsible’ to any of the sciences—to be anything other than another fundamental ontology, albeit one that rhetorically approves of the natural scientific project. The painful, perhaps even hilarious fact of the matter is that Johnston’s picture of intentionality rising from the cracks and gaps of an intrinsically contradictory reality happens to be the very ontological trope I use to structure the fantasy world of The Second Apocalypse!

There can be little doubt that he believes his picture somehow is receptive, responsive, and responsible, thinking, as he does, that his account

“… will not amount merely to compelling philosophy and psychoanalysis, in a lopsided, one-way movement, to adapt and conform to the current state of the empirical, experimental sciences, with the latter and their images of nature left unchanged in the bargain. Merging philosophy and psychoanalysis with the sciences promises to force profound changes, in a two-way movement, within the latter at least as much as within the former.” 179

Given the way science has ideologically and materially overrun every single domain it has managed to colonize historically, this amounts to a promise to force a conditional surrender with words—unless, that is, he has some gobsmacking way to empirically motivate (as opposed to verify) his peculiar brand of ontological emergentism.

But the closest he comes to genuinely explaining the difference between his ‘good’ ontologism and the ‘bad’ ontologism of those he critiques comes near the end of the text, where he espouses what might be called a qualified Darwinianism, one where “the chasm dividing unnatural humanity from natural animality is … not a top-down imposition inexplicably descending from the enigmatic heights of an always-already there ‘Holy Spirit’ … but, instead a ‘gap’ signalling a transcendence-in-immanence” (178). To advert to Dennettian terms, one might suggest that Johnston sees the bad ontologism of Badiou and Meillassoux as offering ‘skyhooks,’ unexplained explainers set entirely outside the blind irreflexivity of nature. His own good ontologism, on the other hand, he conceives phylogenetically, which is to say more in terms of what Dennett would call ‘cranes,’ a complicating continuity of natural processes and mechanisms culminating in ‘virtual machines’ that we then mistake for skyhooks.

Or perhaps we should label them ‘crane-hooks,’ insofar as Johnston envisions a ‘gap’ or ‘contradiction’ written into the very fundamental structure of existence, a wedge that bootstraps subjectivity as remembered…

A perpetual motion machine.

The charitable assumption to make at this point is that he’s saving this bombshell for the ensuing text. But given the egregious way he mischaracterizes the difficulties of his project at the beginning of the text, it’s hard to believe he has much in the way of combustible material. As we saw, he flat out conflates the concrete mechanistic threat—the way the complexities of technology are transforming the complexities of life into more technology—with the abstract philosophical problem of determinism. Creeping depersonalization–be it the medicalization of individuals in numerous institutional (especially educational) contexts, or the ‘nudge’ tactics ubiquitously employed throughout commercial society, or institutional reorganization based on data mining techniques–is nothing if not an obvious social phenomenon. When does it stop? Is there really some essential ‘gap’ between you and all the buzzing, rumbling systems about you, the negentropic machinery of life, the endless lotteries that comprise evolution, the countless matter conversion engines that are stars? Does mechanism, engineered or described, eventually bump into the edge of mere nature, bounce from some redemptive contradiction in the fabric of being? One that just happens to be us?

Are we the perpetual motion machine we’ve sought in vain for millennia?

The fact is, one doesn’t have to look far to conclude that Johnston’s ontologism is just more bad ontology, the same old empty cans strung in a different configuration. After all, he takes the dialectical nature of his materialism quite seriously. As he writes:

“… naturalizing human being (i.e., not allowing humans to stand above-and-beyond the natural world in some immaterial, metaphysical zone) correlatively entails envisioning nature as, at least in certain instances, being divided against itself. An unreserved naturalization of humanity must result in a defamiliarization and reworking of those most foundational and rudimentary proto-philosophical images contributing to any picture of material nature. The new, fully secularized materialism (inspired in part by Freudian-Lacanian psychoanalysis) to be developed and defended in Prolegomena to Any Future Materialism is directly linked to this notion of nature as the self-shattering, internally conflicted existence of a detotalized material immanence.” 19-20

What all this means is that nature, for Johnston, is intrinsically contradictory. Now contradictions are at least three things: first, they logically entail everything; second, they’re analytically difficult to think; and third, they’re conceptually semantic, which is to say, intentional through and through. Setting aside the way the first two considerations raise the spectres of obscurantism and sophistry (where better to hide something stolen?), the third should set the klaxons wailing for even those possessing paraconsistent sympathies. Why? Simply because saying that reality is fundamentally contradictory amounts to saying that reality is fundamentally intentional. And this means that what we have here, in effect, is pretty clearly a kind of anthropomorphism, the primary difference being, jargon aside, that it’s a different kind of anthropos that is being externalized, namely, the fragmented, decentred, and oh-so-dreary ‘postmodern subject.’

I don’t care how inured to a discourse’s foibles you become, this has to be a tremendous problem. Johnston writes, “a materialist theory of the subject, in order to adhere to one of the principal tenets of any truly materialist materialism (i.e., the ontological axiom according to which matter is the sole ground), must be able to explain how subjectivity emerges out of materiality—and, correlative to this, how materiality must be configured in and of itself so that such an emergence is a real possibility” (27). Now empirically speaking, we have no clue ‘how materiality must be configured’ because we do not, as yet, understand the mechanisms underwriting consciousness and intentionality. Johnston, of course, rhetorically dismisses this ongoing, ever-advancing empirical project as an obvious nonstarter. He has determined, rather, that the only way subjectivity can be naturally understood is if we come to see that nature itself is profoundly subjective…

I can almost hear Spinoza groaning from his grave on the Spui.

If the contradiction of the human can only be ‘explained’ by recourse to some contradiction intrinsic to the entire universe, then why not simply admit that the contradiction of the human cannot be explained? Just declare yourself a mysterian of some kind–I dunno. Johnston devotes considerable space to critiquing Meillassoux for using ‘hyperchaos’ as an empty metaphysical gimmick, a post hoc way to rationalize the nonmechanistic efficacy of intentional phenomena. And yet it’s hard to see how Johnston gives his reader even this much, insofar as he’s simply taken the enigma of intentionality and painted it across the cosmos—literally so!

Johnston references the ‘sad spectacle of a bunch of pulp fiction novelists’ arguing their worlds (170), but as someone who’s actually participated in that (actually quite hilarious) spectacle, I can assure everyone that we, unlike the sad spectacle of Continental materialists arguing their worlds, know we’re arguing fictions. What makes such spectacles sad is the presumption to a cognitive authority that simply does not exist. Arguing the intrinsically dialectical nature of materiality is on a par with arguing intelligent design, save that the intuitions motivating intelligent design are more immediate (they require nowhere near as much specialized training to appreciate), and that its proponents have done a tremendous amount of work to make their position appear receptive, responsive, and responsible to the sciences they would, in the spirit of share-and-share alike, ‘complement with a deeper understanding.’

A contradictory materiality is an anthropomorphic materiality. It provides redemption rather than understanding: the decentred-me-friendly world that science has been unable to find. In his attempt to materially square the circle of subjectivity, Johnston invents a stripped down, intellectualized fantasy world, and then embarks on a series of ‘fruitless comparisons between equally hallucinatory apples and oranges’ (170). And how could it be any other way when all of these pulp philosophy thinkers are trapped arguing memories?

Vivid ones to be sure, but memories all the same.

The vividness, in fact, is a large part of the whole bloody problem. It means that no matter how empty our metacognitive intuitions regarding experience are, they generally strike us as sufficient: What, for instance, could be more obvious than our normative understanding of rules? But there’s powerful evidence suggesting our feeling of willing is only contingently connected to our actions (a matter of interpretation). There’s irrefutable evidence that our episodic memory is not veridical. Likewise, there is powerful evidence suggesting our explanations of our behaviour are only contingently related to our actions (a matter of interpretation). Even if you dispute the findings (with laboratory results, one would hope), or think that psychoanalysis is somehow vindicated by these findings (rather than rendered empirically irrelevant), the fact remains that none of the old assumptions can be trusted.

Do you have any metacognitive sense of the symphony of subpersonal heuristic systems operating inside your skull this very instant, the kinds of problems they’ve adapted to solve versus the kinds of problems that can only generate impasse and confusion? Of course not. The titanic investment in time and resources required to isolate what little we have isolated wouldn’t have been required otherwise. We are almost entirely blind to what we are and what we do. But because we are blind to that blindness, we confuse what little we do see with everything to be seen. We therefore become the ‘object’ that cannot be an ‘object,’ the thing that cannot be intuitively cognized in time and space, that strikes us with the immediacy of this very moment, that appears to somehow stand outside a nature that is all-encompassing otherwise.

The system outside the picture, somehow belonging and not belonging…

Or as I once called it, the ‘occluded frame.’

And this just follows from our mechanical nature. For a myriad of reasons, any system originally adapted to systematically engage environmental systems will be structurally incapable of systematically engaging itself in the same manner. So when it develops the capacity to ask, as we have developed the capacity to ask, ‘What am I?’ it will have grounds to answer, ‘Of this world, and not of this world.’

To say, precisely because it is a mechanism, ‘I am contradiction.’

As with the crude thumbnail given above, the Blind Brain Theory attempts to naturalistically explain away the peculiarities of intentionality and phenomenality in terms of neglect. Since we cannot intuit our profound continuity with our environments, we intuit ourselves otherwise, as profoundly discontinuous with our environments. This discontinuity, of course, is the cornerstone of the problem of understanding what we are. Before, when the brain remained a black box, we could take it for granted, we could leverage our ignorance in ways that catered to our conceits, especially our perennial desire to be the great exception to the natural. So long as the box remained sealed, we could speak of beetles without fear of contradiction.

Now that the box has been cracked open with nary a beetle to be found, all those speculative discourses reliant upon our historical ignorance find themselves scrambling. They know the pattern, even if they are loath to speak of it or, like Johnston, prone to denial. Nevertheless, science is nothing if not imperial and industrial. It displaces aboriginal discourses, delegitimizes them in the course of revolutionizing any given domain. Humans, meanwhile, are hardwired to rationalize their interests. When their claims to status and authority are threatened, the moral and intellectual deficiencies of their adversary simply seem obvious. So it should come as no surprise that specialists in those discourses are finally rousing themselves from their ingroup slumber to defend what they must consider manifest authority and hard-earned privileges.

But they face a profound dilemma when it comes to prosecuting their case against science—a dilemma not one of these Continentalists has yet acknowledged. Before, in the good old black box days, they could rely on simple pejoratives like ‘positivism’ and ‘scientism’ to do all the heavy lifting, simply because science reliably fell silent when it came to issues within their domain. The bind they find themselves in now, however, could scarce be more devious. The most obvious problem lies in the revolutionary revision of their subject matter—the thinking human. But the subject matter of the human is also the subject of the matter, the activity that makes the understanding of any subject matter possible. Continentalists, of course, know this, because it provides the basis for their ontological priority claims. They are describing, so they think, what makes science possible. This is what grants them diplomatic transcendental immunity when they take up residence in scientific domains. But Johnston isolates the dilemma—his dilemma—himself when he points out the empty nature of the Ontological Difference.

Foucault actually provides the most striking image of this that I know of with his analysis of the ‘empirico-transcendental doublet called man’ in The Order of Things. What is transpiring today can be seen as a battle for the soul of the darkness that comes before thought. Is it ontological as so much of philosophy insists? Or is it ontic as science seems to be in the process of discovering? So long as our ontic conditions remained informatically impoverished, so long as the brain remained a black box, then the dazzling vividness of our remembering could easily overcome our abstract, mechanistic qualms. We could rely on the apparent semantic density of ‘lived life’ or ‘conditions of possibility’ or ‘language games’ or ‘epistemes’ or so on (and so on) to silence the rumble of an omnivorous science. We could dwell in the false peace of trench warfare, a stalemate between two general, apparently antithetical claims to one truth. As Foucault writes:

“… either this true discourse finds its foundation and model in the empirical truth whose genesis in nature and in history it retraces, so that one has an analysis of the positivist type (the truth of the object determines the truth of the discourse that describes its foundation); or the true discourse anticipates the truth whose nature and history it defines; it sketches it out in advance and foments it from a distance, so that one has a discourse of the eschatological type (the truth of the philosophical discourse constitutes the truth in formation).” 320

Foucault, of course, has stacked the deck in this characterization of epistemological modes—simply to pose the (historically contingent) problem of the human in terms of an ‘empirico-transcendental doublet’ is to concede authority to the transcendental—but he was nevertheless astute–or at least evocative–in his assessment of the form of the problem (as seen from within the subject/object heuristic). Again, as he writes:

“The true contestation of positivism and eschatology does not lie, therefore, in a return to actual experience (which rather, in fact, provides them with confirmation by giving them roots); but if such a contestation could be made, it would be from the starting-point of a question which may well seem aberrant, so opposed is it to what has rendered the whole of our thought historically possible. This question would be: Does man really exist?” 322

A question that was both prescient in his day and premature, given that the empirical remained, for most purposes, locked out of the black box of the human. For all his historicism, Foucault failed to look at this dilemma historically, to realize (as Adorno arguably did) that short of some form of reason capable of contesting scientific claims on the human, the domain of the human was doomed to be overrun by scientific reason, and that discourses such as his would eventually be reduced to the status of alchemy or astrology or religion.

And herein lies the rub for Johnston. He thinks the key to a viable Continental materialism turns on getting the ontological nature of the what right, when the problem resides in the how. He says as much himself: anybody can cook up and argue a fantasy world. In my own lectures on fantasy, the most fictional of fictions, I always stress how the anthropomorphic ‘secondary worlds’ depicted could only be counted as ‘fantastic’ given the cognitive dominion of science. This, I think, is the real anxiety lurking beneath his work (despite all his embarrassing claims about ‘empty handed foes’). The only thing preventing the obvious identification of his secondary worlds as fantastic was the scientific inscrutability of the human. Now that the human is becoming empirically scrutable across myriad dimensions, now that the informatic floodgates have been cranked open—now that his claims have a baseline of comparison—the inexorable processes that rendered the anthropomorphic fantastic across external nature are beginning to render internal meaning fantastic as well.

Why do pharmaceuticals impact us? Man is a machine. Why do cochlear implants function? Man is a machine. Why do head injuries so profoundly reorganize experience? Man is a machine. The Problem of Mechanism is material first and only secondarily philosophical. Given what I know about the human capacity for self-deception (having followed the science for years now), I have no doubt that the vast majority of people will find refuge in ‘mere words,’ philosophical or theological rationalization of this or that redeeming ‘axiomatic posit.’ This is what makes the Singularity so bloody crucial to these kinds of debates (and what puts thinkers like David Roden so tragically far ahead of their peers). When we become indistinguishable from our machinery, or when our machines make kindergarten scribbles of our greatest works of genius, will we persist in insisting on our ontological exceptionality then?

Or will the ‘human’ merely refer to some eyeless, larval stage? Will noocentrism be seen as the last of the three great Centripetal Conceits?

Short of discovering some Messianic form of reason—a form of cognition capable of overpowering a scientific cognition that can cure blindness and vaporize cities—attempts to argue Messianic realities a la Continental materialism are doomed to fail before they even begin. Both the how and the what of the traditional humanities are under siege. As it stands, the profundity of this attack can still be partially hidden, so long as one’s audience wants to be reassured and has no real grasp of the process. A good number of high profile researchers are themselves apologists for the humanistic status quo, so one can, as defenders of various religious beliefs are accustomed to doing, pluck many heartening quotes from the enemy’s own mouth. But since it is the rising tide of black-box information that has generated this legitimacy crisis, it seems more than a little plausible to presume that it will deepen and deepen, until finally it yawns abyssal, no matter how many well-heeled words are mustered to do battle against it.

No matter how many Johnstons pawn their cryptotheological perpetual motion machines.

Our only way to cognize our experiencing is via our remembering. The thinner this remembering turns out to be—and it seems to be very thin—the more we should expect to be dismayed and confounded by the sciences of the brain. At the same time we should expect a burgeoning market for apologia, for rationalizations that allow for the dismissal and domestication of the threats posed. Careers will be made, celebrated ones, for those able to concoct the most appealing and slippery brands of theoretical snake-oil. And meanwhile the science will trundle on, the incompatible findings will accumulate, and those of us too suspicious to believe in happy endings will be reduced to arguing against our hopes, and for the honest appraisal of the horror that confronts us all.

Because the bandage of our traditional self-conception will be torn away quicker than you think.


* POSTSCRIPT (17/01/2014): Levi Bryant, it should be noted, is an exception in several respects, and it was remiss of me to include him without qualification. A concise overview of his position can be found here.

A Material Churl in A Material World

by rsbakker

Aphorism of the Day: The cup of ego always but always leaks on the doily of theory. Thus the philosophical tendency to embroider in black.



I’d like to thank Roger for introducing a little high-altitude class into TPB while I was undergoing intense tequila retoxification treatment in Mexico. I’ll be providing my own naturalistic gloss on his metaphilosophical observations at some point over the ensuing weeks. In the meantime, however, I need to do a little spring cleaning…

Since I plan on shortly rowing back into more Analytic waters I thought I would fire a couple more broadsides across the Continental fleet as I bring my leaky rowboat about. The (at times heated) debate we had following “The Ptolemaic Restoration” has left me more rather than less puzzled by the ongoing ‘materialistic turn’ in Continental circles. Object Oriented Ontology has left me particularly mystified, especially in the wake of Levi Bryant’s claim that ‘object orientation’ need not concern itself with the question of meaning, even though, historically speaking, this question has always posed the greatest challenge to materialist accounts. As Ray Brassier acknowledges in his 2012 After Nature interview:

[Nihil Unbound] contends that nature is not the repository of purpose and that consciousness is not the fulcrum of thought. The cogency of these claims presupposes an account of thought and meaning that is neither Aristotelian–everything has meaning because everything exists for a reason–nor phenomenological–consciousness is the basis of thought and the ultimate source of meaning. The absence of any such account is the book’s principal weakness…

What is truth? What is meaning? What is subjectivity? In short, What is intentionality? These are absolutely pivotal philosophical questions for any philosophy that purports to be ‘materialist.’ Why? Because if we actually had some way of naturalizing these perplexities, then we could plausibly claim that everything is material. And yet Bryant, when pressed on this selfsame issue, responds:

“I’m not working on issues of intentionality. Asking me to have a detailed picture of intentionality is a bit like asking a neurologist to have a detailed picture of quantum mechanics or black holes. It’s just not what neurologists are doing. I’ll leave it to the neurologists to give that account of intentionality” (Comments to “The Ptolemaic Restoration,” March 14, 2013 6:45pm)

I fear the analogy escapes me. Asking him to have some picture of intentionality, given his claim that ontology is flat, is asking him how he has managed to smooth out the wrinkles that have hitherto nixed every attempt to flatten ontology in the manner he attempts. It is ‘like’ asking a materialist to respond to the traditional challenge to their position, nothing more or less. His inability to do this would suggest a gaping hole in his position, and thus the need to either retract his claim that ontology is flat, or to explore remedial strategies to shore up his position. But his unwillingness to do this seems to suggest he’s not interested in developing anything approximating a serious philosophical view. Failing some accounting of this issue, his brand of object orientation simply will not be taken seriously, not in the long run. The questions are just too basic, too immediate, to indefinitely ignore. If ontology is ‘flat,’ if ‘objects’ exhaust ontology, the most obvious perplexity becomes, What is this very moment now? A concatenation of objects? Our living perspectives, we are told, are some kind of material process. So then, What the hell are they? What kind of objects or units could they be? If soul or mind or being-in-the-world or what have you is ‘really’ a material process, then why, as Descartes so notoriously pointed out, does it so clearly seem to be anything but?

Leibniz, of course, gives us the most historically resonant image of the problem faced by object-oriented attempts to explain this-very-moment-now with his windmill:

One is obliged to admit that perception and what depends upon it is inexplicable on mechanical principles, that is, by figures and motions. In imagining that there is a machine whose construction would enable it to think, to sense, and to have perception, one could conceive it enlarged while retaining the same proportions, so that one could enter into it, just like into a windmill. Supposing this, one should, when visiting within it, find only parts pushing one another, and never anything to explain a perception. Thus it is in the simple substance, and not in the composite or in the machine, that one must look for perception. Monadology, §17

It’s not that it merely seems difficult to imagine how any organization of material things, any mechanism (no matter how complicated), could possibly result in something like this-very-moment-now, it seems downright unfathomable. And this pertains as much to its intentional structure as to its phenomenal content. As Brentano famously writes:

Every mental phenomenon includes something as object within itself, although they do not all do so in the same way. In presentation something is presented, in judgement something is affirmed or denied, in love loved, in hate hated, in desire desired and so on. This intentional in-existence is characteristic exclusively of mental phenomena. No physical phenomenon exhibits anything like it. We can, therefore, define mental phenomena by saying that they are those phenomena which contain an object intentionally within themselves. Psychology from an Empirical Standpoint, 68

In a more contemporary context, David Chalmers summarizes the problem with characteristic elegance and clarity:

First: Physical descriptions of the world characterize the world in terms of structure and dynamics. Second: From truths about structure and dynamics, one can deduce only further truths about structure and dynamics. And third: truths about consciousness are not truths about structure and dynamics. “Consciousness and Its Place in Nature”

For whatever reason, soul, mind, being-in-the-world, whatever they are, seem dramatically incompatible with objects (whatever they are). Now the attraction of the so-called ‘materialist turn’ in Continental circles is obvious enough: it aligns speculation with the sciences, and thus (apparently) affords it a relevance and theoretical credibility that prior Continental philosophy so obviously lacked. The problem, of course, is that Continental materialisms are by no means content with those limits. Though they repudiate the discourses that preceded them, they refuse to relinquish the domains those discourses took as their natural habitat. Ethics. Politics. Not to mention the human condition more generally. These are the things Continental philosophy takes itself to be primarily about. So even though science–historically at least–has been shut out of the domain of the intentional, these materialisms continue to theorize these domains. But where Brassier or Roden, for instance, advert to an Anglo-American tradition that, because it never abandoned its scientific affiliations, managed to develop sophisticated responses to the question of meaning, others reference vague compatibilities or occult formulations or worse yet, simply stomp their feet.

This is why so much of the speculative materialist turn in Continental philosophy strikes me as an exercise in ignorance, wilful or accidental. Historically speaking, soul or mind or being-in-the-world have constituted the great bête noire of all materialist philosophies, and yet these object oriented newcomers, these ‘realists,’ think they can scrupulously theorize things like the materiality of language while completely ignoring the mystery of how that materiality comes to mean.

And this, I’m afraid to say, makes it difficult to see these positions as anything other than sophistry, ingroup language games where the difficult questions, the very questions upon which the bulk of philosophy is raised, are dismissed or wilfully ignored to better facilitate a kind of claim-making possessing no real cognitive constraints whatsoever. A kind of make-believe philosophy.

Some hard words, I know–but these are ideas, not relatives, we’re talking about. Meanings. I encourage anyone who takes umbrage, or just anyone merely sympathetic to Bryant’s (or Hagglund’s or Zizek’s) account, to show me the short-circuit in my thinking. As I’ve said before, I’m just a tourist. When I find issues that seem this glaring, this damning, I can’t shake the feeling that I have to be missing something. Lord knows it’s happened before. In fact, it’s the only reason I occupy the miserable position I hold now… Being wrong.

Zizek, Hollywood, and the Disenchantment of Continental Philosophy

by rsbakker

Aphorism of the Day: At least a flamingo has a leg to stand on.


Back in the 1990s, whenever I mentioned Dennett and the significance of neuroscience to my Continental buddies I would usually get some version of ‘Why do you bother reading that shite?’ I would be told something about the ontological priority of the lifeworld or the practical priority of the normative: more than once I was referred to Hegel’s critique of phrenology in the Phenomenology.

The upshot was that the intentional has to be irreducible. Of course this ‘has to be’ ostensibly turned on some longwinded argument (picked out of the great mountain of longwinded arguments), but I couldn’t shake the suspicion that the intentional had to be irreducible because the intentional had to come first, and the intentional had to come first because ‘intentional cognition’ was the philosopher’s stock-in-trade–and oh-my, how we adore coming first.

Back then I chalked up this resistance to a strategic failure of imagination. A stupendous amount of work goes into building an academic philosophy career; given our predisposition to rationalize even our most petty acts, the chances of seeing our way past our life’s work are pretty damn slim! One of the things that makes science so powerful is the way it takes that particular task out of the institutional participant’s hands–enough to revolutionize the world at least. Not so in philosophy, as any gas station attendant can tell you.

I certainly understood the sheer intuitive force of what I was arguing against. I quite regularly find the things I argue here almost impossible to believe. I don’t so much believe as fear that the Blind Brain Theory is true. What I do believe is that some kind of radical overturning of noocentrism is not only possible, but probable, and that the 99% of philosophers who have closed ranks against this possibility will likely find themselves in the ignominious position of those philosophers who once defended geocentrism and biocentrism.

What I’ve recently come to appreciate, however, is that I am literally, as opposed to figuratively, arguing against a form of anosognosia, that I’m pushing brains places they cannot go–short of imagination. Visual illusions are one thing. Spike a signal this way or that, trip up the predictive processing, and you have a little visual aporia, an isolated area of optic nonsense in an otherwise visually ‘rational’ world. The kinds of neglect-driven illusions I’m referring to, however, outrun us, as they have to, insofar as we are them in some strange sense.

So here we are in 2013, and there’s more than enough neuroscientific writing on the wall to have captured even the most insensate Continental philosopher’s attention. People are picking through the great mountain of longwinded arguments once again, tinkering, retooling, now that the extent of the threat has become clear. Things are getting serious; the akratic social consequences I depicted in Neuropath are everywhere becoming more evident. The interval between knowledge and experience is beginning to gape. Ignoring the problem now smacks more of negligence than insouciant conviction. The soul, many are now convinced, must be philosophically defended. Thought, whatever it is, must be mobilized against its dissolution.

The question is how.

My own position might be summarized as a kind of ‘Good-Luck-Chuck’ argument. Either you posit an occult brand of reality special to you and go join the Christians in their churches, or you own up to the inevitable. The fate of the transcendental lies in empirical hands now. There is no way, short of begging the question against science, of securing the transcendental against the empirical. Imagine you come up with, say, Argument A, which concludes on non-empirical Ground X that intentionality cannot be a ‘cognitive illusion.’ The problem, obviously, is that Argument A can only take it on faith that no future neuroscience will revise or eliminate its interpretation of Ground X. And that faith, like most faith, only comes easy in the absence of alternatives–of imagination.

The notion of using transcendental speculation to foreclose on possible empirical findings is hopeless. Speculation is too unreliable and nature is too fraught with surprises. One of the things that makes the Blind Brain Theory so important, I think, is the way its mere existence reveals this new thetic landscape. By deriving the signature characteristics of the first-personal out of the mechanical, it provides a kind of ‘proof of concept,’ a demonstration that post-intentional theory is not only possible, but potentially powerful. As a viable alternative to intentional thought (of which transcendental philosophy is a subset), it has the effect of dispelling the ‘only game in town illusion,’ the sense of necessity that accompanies every failure of philosophical imagination. It forces ‘has to be’ down to the level of ‘might be’…

You could say the mere possibility that the Blind Brain Theory might be empirically verified drags the whole of Continental philosophy into the purview of science. The most the Continental philosopher can do is match their intentional hopes against my mechanistic fears. Put simply, the grand old philosophical question of what we are no longer belongs to them: It has fallen to science.


For better and for worse, Metzinger’s Being No One has become the textual locus of the ‘neuroscientific threat’ in Continental circles. His thesis alone would have brought him to attention, I’m sure. That aside, the care, scholarship, and insight he brings to the topic provide the Continental reader with a quite extraordinary (and perhaps too flattering) introduction to cognitive science and Anglo-American philosophy of mind as it stood a decade or so ago.

The problem with Being No One, however, is precisely what renders it so attractive to Continentalists, particularly those invested in the so-called ‘materialist turn’: rather than consider the problem of meaning tout court, it considers the far more topical problem of the self or subject. In this sense, it is thematically continuous with the concerns of much Continental philosophy, particularly in its post-structuralist and psychoanalytic incarnations. It allows the Continentalist, in other words, to handle the ‘neuroscientific threat’ in a diminished and domesticated form, which is to say, as the hoary old problem of the subject. Several people have told me now that the questions raised by the sciences of the brain are ‘nothing new,’ that they simply bear out what this or that philosophical/psychoanalytic figure has said long ago–that the radicality of neuroscience is not all that ‘radical’ at all. Typically, I take the opportunity to ask questions they cannot answer.

Zizek’s reading of Metzinger in The Parallax View, for instance, clearly demonstrates the way some Continentalists regard the sciences of the brain as an empirical mirror wherein they can admire their transcendental hair. For someone like Zizek, who has made a career out of avoiding combs and brushes, Being No One proves to be one of the few texts able to focus and hold his rampant attention, the one point where his concern seems to outrun his often brutish zest for ironic and paradoxical formulations. In his reading, Zizek immediately homes in on those aspects of Metzinger’s theory that most closely parallel my view (the very passages that inspired me to contact Thomas years ago, in fact) where Metzinger discusses the relationship between the transparency of the Phenomenal Self-Model (PSM) and the occlusion of the neurofunctionality that renders it. The self, on Metzinger’s account, is a model that cannot conceive itself as a model; it suffers from what he calls ‘autoepistemic closure,’ a constitutive lack of information access (BNO, 338). And its apparent transparency accordingly becomes “a special form of darkness” (BNO, 169).

This is where Metzinger’s account almost completely dovetails with Zizek’s own notion of the subject, and so holds the most glister for him. But he defers pressing this argument and turns to the conclusion of Being No One, where Metzinger, in an attempt to redeem the Enlightenment ethos, characterizes the loss of self as a gain in autonomy, insofar as scientific knowledge allows us to “grow up,” and escape the ‘tutelary nature’ of our own brain. Zizek only returns to the lessons he finds in Metzinger after a reading of Damasio’s rather hamfisted treatment of consciousness in Descartes’ Error, as well as a desultory and idiosyncratic (which, as my daughter would put it, is a fancy way of saying ‘mistaken’) reading of Dennett’s critique of the Cartesian Theater. Part of the problem he faces is that Metzinger’s PSM, as structurally amenable as it is to his thesis, remains too topical for his argument. The self simply does not exhaust consciousness (even though Metzinger himself often conflates the two in Being No One). Saying there is no such thing as selves is not the same as saying there is no such thing as consciousness. And as his preoccupation with the explanatory gap and cognitive closure makes clear, nothing less than the ontological redefinition of consciousness itself is Zizek’s primary target. Damasio and Dennett provide the material (as well as the textual distance) he requires to expand the structure he isolates in Metzinger. As he writes:

Are we free only insofar as we misrecognize the causes which determine us? The mistake of the identification of (self-)consciousness with misrecognition, with an epistemological obstacle, is that it stealthily (re)introduces the standard, premodern, “cosmological” notion of reality as a positive order of being: in such a fully constituted positive “chain of being” there is, of course, no place for the subject, so the dimension of subjectivity can be conceived of only as something which is strictly co-dependent with the epistemological misrecognition of the positive order of being. Consequently, the only way effectively to account for the status of (self-)consciousness is to assert the ontological incompleteness of “reality” itself: there is “reality” only insofar as there is an ontological gap, a crack, in its very heart, that is to say, a traumatic excess, a foreign body which cannot be integrated into it. This brings us back to the notion of the “Night of the World”: in this momentary suspension of the positive order of reality, we confront the ontological gap on account of which “reality” is never a complete, self-enclosed, positive order of being. It is only this experience of psychotic withdrawal from reality, of absolute self-contraction, which accounts for the mysterious “fact” of transcendental freedom: for a (self-)consciousness which is in effect “spontaneous,” whose spontaneity is not an effect of misrecognition of some “objective” process. 241-242

For those with a background in Continental philosophy, this ‘aporetic’ discursive mode is more than familiar. What I find so interesting about this particular passage is the way it actually attempts to distill the magic of autonomy, to identify where and how the impossibility of freedom becomes its necessity. To identify consciousness as an illusion, he claims, is to presuppose that the real is positive, hierarchical, and whole. Since the mental does not ‘fit’ with this whole, and the whole, by definition, is all there is, it must then be some kind of misrecognition of that whole–‘mind’ becomes the brain’s misrecognition of itself as a brain. Brain blindness. The alternative, Zizek argues, is to assume that the whole has a hole, that reality is radically incomplete, and so transform what was epistemological misrecognition into ontological incompleteness. Consciousness can then be seen as a kind of void (as opposed to blindness), thus allowing for the reflexive spontaneity so crucial to the normative.

In keeping with his loose usage of concepts from the philosophy of mind, Zizek wants to relocate the explanatory gap between mind and brain into the former, to argue that the epistemological problem of understanding consciousness is in fact ontologically constitutive of consciousness. What is consciousness? The subjective hole in the material whole.

[T]here is, of course, no substantial signified content which guarantees the unity of the I; at this level, the subject is multiple, dispersed, and so forth—its unity is guaranteed only by the self-referential symbolic act, that is, “I” is a purely performative entity, it is the one who says “I.” This is the mystery of the subject’s “self-positing,” explored by Fichte: of course, when I say “I,” I do not create any new content, I merely designate myself, the person who is uttering the phrase. This self-designation nonetheless gives rise to (“posits”) an X which is not the “real” flesh-and-blood person uttering it, but, precisely and merely, the pure Void of self-referential designation (the Lacanian “subject of the enunciation”): “I” am not directly my body, or even the content of my mind; “I” am, rather, that X which has all these features as its properties. 244-245

Now I’m no Zizek scholar, and I welcome corrections on this interpretation from those better read than I. At the same time I shudder to think what a stolid, hotdog-eating philosopher-of-mind would make of this ontologization of the explanatory gap. Personally, I lack Zizek’s faith in theory: the fact of human theoretical incompetence inclines me to bet on the epistemological over the ontological most every time. Zizek can’t have it both ways. He can’t say consciousness is ‘the inexplicable’ without explaining it as such.

Either way, this clearly amounts to yet another attempt to espouse a kind of naturalism without transcendental tears. Like Brassier in “The View from Nowhere,” Zizek is offering an account of subjectivity without self. Unlike Brassier, however, he seems to be oblivious to what I have previously called the Intentional Dissociation Problem: he never considers how the very issues that lead Metzinger to label the self hallucinatory also pertain to intentionality more generally. Certainly, the whole of The Parallax View is putatively given over to the problem of meaning as the problem of the relationship between thought/meaning and being/truth, or the problem of the ‘gap’ as Zizek puts it. And yet, throughout the text, the efficacy (and therefore the reality) of meaning–or thought–is never once doubted, nor is the possibility of the post-intentional considered. Much of his discussion of Dennett, for instance, turns on Dennett’s intentional apologetics, his attempt to avoid, among other things, the propositional-attitudinal eliminativism of Paul Churchland (to whom Zizek mistakenly attributes Dennett’s qualia eliminativism (PV, 177)). But where Dennett clearly sees the peril, the threat of nihilism, Zizek only sees an intellectual challenge. For Zizek, the question, Is meaning real? is ultimately a rhetorical one, and the dire challenge emerging out of the sciences of the brain amounts to little more than a theoretical occasion.

So in the passage quoted above, the person (subject) is plucked from the subpersonal legion via “the self-referential symbolic act.” The problems and questions that threaten to explode this formulation are numerous, to say the least. The attraction, however, is obvious: It apparently allows Zizek, much like Kant, to isolate a moment within mechanism that nevertheless stands outside of mechanism without entailing some secondary order of being–an untenable dualism. In this way it provides ‘freedom’ without any incipient supernaturalism, and thus grounds the possibility of meaning.

But like other forms of deflationary transcendentalism, this picture simply begs the question. The cognitive scientist need only ask, What is this ‘self-referential symbolic act’? and the circular penury of Zizek’s position is revealed: How can an act of meaning ground the possibility of meaningful acts? The vicious circularity is so obvious that one might wonder how a thinker as subtle as Zizek could run afoul of it. But then, you must first realize (as, say, Dennett realizes) the way intentionality as a whole, and not simply the ‘person,’ is threatened by the mechanistic paradigm of the life sciences. So for instance, Zizek repeatedly invokes the old Derridean trope of bricolage. But there’s ‘bricolage’ and then there’s bricolage: there are fragments that form happy fragmentary wholes that readily lend themselves to the formation of new functional assemblages, ‘deconstructive ethics,’ say, and then there are fragments that are irredeemably fragmentary, whose dimensions of fragmentation are such that they can only be misconceived as wholes. Zizek seizes on Metzinger’s account of the self in Being No One precisely because it lends itself to the former, ‘happy’ bricolage, one where we need only fear for the self and not the intentionality that constitutes it.

The Blind Brain Theory, however, paints a far different portrait of ‘selfhood’ than Metzinger’s PSM, one that not only makes hash of Zizek’s thesis, but actually explains the cognitive errors that motivate it. On Metzinger’s account, ‘auto-epistemic closure’ (or the ‘darkness of transparency’) is the primary structural principle that undermines the ‘reality’ of the PSM and the PSM only. The Blind Brain Theory, on the other hand, casts the net wider. Constraints on the information broadcast or integrated are crucial, to be sure, but BBT also considers the way these constraints impact the fractionate cognitive systems that ‘solve’ them. On my view, there is no ‘phenomenal self-model,’ only congeries of heuristic cognitive systems primarily adapted to environmental cognition (including social environmental cognition) cobbling together what they can given what little information they receive. For Metzinger, who remains bound to the ‘Accomplishment Assumption’ that characterizes the sciences of the brain more generally, the cognitive error is one of mistaking a low-dimensional simulation for a reality. The phenomenal self-model, for him, really is something like ‘a flight-simulator that contains its own exits.’

On BBT, however, there is no one error, nor even one coherent system of errors; instead there are any number of information shortfalls and cognitive misapplications leading to this or that reflective, acculturated form of ‘selfness,’ be it ancient Greek, Cartesian, post-structural, or what have you. Selfness, in other words, is the product of compound misapprehensions, both at the assumptive and the theoretical levels (or better put, across the spectrum of deliberative metacognition, from the cursory/pragmatic to the systematic/theoretical).

BBT uses these misconstruals, myopias, and blindnesses to explain the ways intentionality and phenomenality confound the ‘third-person’ mechanistic paradigm of the life sciences. It can explain, in other words, many of the ‘structural’ peculiarities that make the first-person so refractory to naturalization. It does this by interpreting those peculiarities as artifacts of ‘lost dimensions’ of information, particularly with reference to medial neglect. So for instance, our intuition of aboutness derives from the brain’s inability to model its modelling, neglecting, as it must, the neurofunctionality responsible for modelling its distal environments. Thus the peculiar ‘bottomlessness’ of conscious cognition and experience, the way each subsequent moment somehow becomes ground of the moment previous (and all the foundational paradoxes that have arisen from this structure). Thus the metacognitive transformation of asymptotic covariance into ‘aboutness,’ a relation absent the relation.

And so it continues: Our intuition of conscious unity arises from the way cognition confuses aggregates for individuals in the absence of differentiating information. Our intuition of personal identity (and nowness more generally) arises from metacognitive neglect of second-order temporalization, our brain’s blindness to the self-differentiating time of timing. For whatever reason, consciousness is integrative: oscillating sounds and lights ‘fuse’ or appear continuous beyond certain frequency thresholds because information that doesn’t reach consciousness makes no conscious difference. Thus the eerie first-person that neglect hacks from a much higher dimensional third can be said to be inevitable. One need only apply the logic of flicker-fusion to consciousness as a whole, ask why, for instance, facets of conscious experience such as unity or presence require specialized ‘unification devices’ or ‘now mechanisms’ to accomplish what can be explained as perceptual/cognitive errors in conditions of informatic privation. Certainly it isn’t merely a coincidence that all the concepts and phenomena incompatible with mechanism involve drastic reductions in dimensionality.

In explaining away intentionality, personal identity, and presence, BBT inadvertently explains why we intuit the subject we think we do. It sets the basic neurofunctional ‘boundary conditions’ within which Sellars’ manifest image is culturally elaborated–the boundary conditions of intentional philosophy, in effect. In doing so, it provides a means of doing what the Continental tradition, even in its most recent, quasi-materialist incarnations, has regarded as impossible: naturalizing the transcendental, whether in its florid, traditional forms or in its contemporary deflationary guises–including Zizek’s supposedly ineliminable remainder, his subject as ‘gap.’

And this is just to say that BBT, in explaining away the first-person, also explains away Continental philosophy.

Few would dispute that many of the ‘conditions of possibility’ that comprise the ‘thick transcendental’ account of Kant, for instance, amount to speculative interpretations of occluded brain functions insofar as they amount to interpretations of anything at all. After all, this is a primary motive for the retreat into ‘materialism’ (a position, as we shall see, that BBT endorses no more than ‘idealism’). But what remains difficult, even apparently impossible, to square with the natural is the question of the transcendental simpliciter. Sure, one might argue, Kant may have been wrong about the transcendental, but surely his great insight was to glimpse the transcendental as such. But this is precisely what BBT and medial neglect allow us to explain: the way the informatic and heuristic constraints on metacognition produce the asymptotic–acausal or ‘bottomless’–structure of conscious experience. The ‘transcendental’ on this view is a kind of ‘perspectival illusion,’ a hallucinatory artifact of the way information pertaining to the limits of any momentary conscious experience can only be integrated in subsequent moments of conscious experience.

Kant’s genius, his discovery, or at least what enabled his account to appeal to the metacognitive intuitions of so many across the ages, lay in making-explicit the occluded medial axis of consciousness, the fact that some kind of orthogonal functionality (neural, we now know) haunts empirical experience. Of course Hume had already guessed as much, but lacking the systematic, dogmatic impulse of his Prussian successor, he had glimpsed only murk and confusion, and a self that could only be chased into the oblivion of the ‘merely verbal’ by honest self-reflection.

Brassier, as we have seen, opts for the epistemic humility of the Humean route, and seeks to retrieve the rational via the ‘merely verbal.’ Zizek, though he makes gestures in this direction, ultimately seizes on a radical deflation of the Kantian route. Where Hume declines the temptation of hanging his ‘merely verbal’ across any ontological guesses, Zizek positions his ‘self-referential symbolic act’ within the ‘Void of pure designation,’ which is to say, the ‘void’ of itself, thus literally construing the subject as some kind of ‘self-interpreting rule’–or better, ‘self-constituting form’–the point where spontaneity and freedom become at least possible.

But again, there’s ‘void,’ the one that somehow magically anchors meaning, and then there’s, well, void. According to BBT, Zizek’s formulation is but one of many ways deliberative metacognition, relying on woefully depleted and truncated information and (mis)applying cognitive tools adapted to distal social and natural environments, can make sense of its own asymptotic limits: by transforming itself into the condition of itself. As should be apparent, the genius of Zizek’s account is entirely strategic. The bootstrapping conceit of subjectivity is preserved in a manner that allows Zizek to affirm the tyranny of the material (being, truth) without apparent contradiction. The minimization of overt ontological commitments, meanwhile, lends a kind of theoretical immunity to traditional critique.

There is no ‘void of pure designation’ because there is no ‘void’ any more than there is ‘pure designation.’ The information broadcast or integrated in conscious experience is finite, thus generating the plurality of asymptotic horizons that carve the hallucinatory architecture of the first-person from the astronomical complexities of our brain-environment. These broadcast or integration limits are a real empirical phenomenon that simply follow from the finite nature of conscious experience. Of BBT’s many empirical claims, these ‘information horizons’ are almost certain to be scientifically vindicated. Given these limits, the question of how they are expressed in conscious experience becomes unavoidable. The interpretations I’ve so far offered are no doubt little more than an initial assay into what will prove a massive undertaking. Once they are taken into account, however, it becomes difficult not to see Zizek’s ‘deflationary transcendental’ as simply one way for a fractionate metacognition to make sense of these limits: unitary because the absence of information is the absence of differentiation, reflexive because the lack of medial temporal information generates the metacognitive illusion of medial timelessness, and referential because the lack of medial functional information generates the metacognitive illusion of afunctional relationality, or intentional ‘aboutness.’

Thus we might speak of the ‘Zizek Fallacy,’ the faux affirmation of a materialism that nevertheless spares just enough of the transcendental to anchor the intentional…

A thread from which to dangle the prescientific tradition.


So does this mean that BBT offers the only ‘true’ route from intentionality to materialism? Not at all.

BBT takes the third-person brain as the ‘rule’ of the first-person mind simply because, thus far at least, science provides the only reliable form of theoretical cognition we know. Thus it would seem to be ‘materialist,’ insofar as it makes the body the measure of the soul. But what BBT shows–or better, hypothesizes–is that this dualism between mind and brain, ideal and real, is itself a heuristic artifact. Given medial neglect, the brain can only model its relation to its environment absent any informatic access to that relation. In other words, the ‘problem’ of its relation to distal environments is one that it can only solve absent tremendous amounts of information. The very structure of the brain, in other words, the fact that the machinery of predictive modelling cannot itself be modelled, prevents it, at a certain level at least, from being a universal problem solver. The brain is itself a heuristic cognitive tool, a system adapted to the solution of particular ‘problems.’ Given neglect, however, it has no way of cognizing its limits, and so regularly takes itself to be omni-applicable.

The heuristic structure of the brain and the cognitive limits this entails are nowhere more evident than in its attempts to cognize itself. So long as the medial mechanisms that underwrite the predictive modelling of distal environments in no way interfere with the environmental systems modelled–or put differently, so long as the systems modelled remain functionally independent of the modelling functions–then medial neglect need not generate problems. When the systems modelled are functionally entangled with medial modelling functions, however, one should expect any number of ‘interference effects’ culminating in the abject inability to predictively model those systems. We find this problem of functional entanglement distally where the systems to be modelled are so delicate that our instrumentation causes ‘observation effects’ that render predictive modelling impossible, and proximally where the systems to be modelled belong to the brain that is modelling. And indeed, as I’ve argued in a number of previous posts, many of the problems confronting the philosophy of mind can be diagnosed in terms of this fundamental misapplication of the ‘Aboutness Heuristic.’

This is where post-intentionalism reveals an entirely new dimension of radicality, one that allows us to identify the metaphysical categories of the ‘material’ and the ‘formal’ (yes, I said formal) for the heuristic cartoons they are. BBT allows us to finally see what we ‘see’ as subreptive artifacts of our inability to see, as low-dimensional shreds of abyssal complexities. It provides a view where not only can the tradition be diagnosed and explained away, but where the fundamental dichotomies and categories, hitherto assumed inescapable, dissolve into the higher dimensional models that only brains collectively organized into superordinate heuristic mechanisms via the institutional practices of science can realize. Mind? Matter? These are simply waystations on an informatic continuum, ‘concepts’ according to the low-dimensional distortions of the first-person and mechanisms according to the third: concrete, irreflexive, high-dimensional processes that integrate our organism–and therefore us–as componential moments of the incomprehensibly vast mechanism of the universe. Where the tradition attempts, in vain, to explain our perplexing role in this natural picture via a series of extraordinary additions, everything from the immortal soul to happy emergence to Zizek’s fortuitous ‘void,’ BBT merely proposes a network of mundane privations, arguing that the self-congratulatory consciousness we have tasked science with explaining simply does not exist…

That the ‘Hard Problem’ is really one of preserving our last and most cherished set of self-aggrandizing conceits.

It is against this greater canvas that we can clearly see the parochialism of Zizek’s approach, how he remains (despite his ‘merely verbal’ commitment to ‘materialism’) firmly trapped within the hallucinatory ‘parallax’ of intentionality, and so essentializes the (apparently not so) ‘blind spot’ that plays such an important role in the system of conceptual fetishes he sets in motion. It has become fashionable in certain circles to impugn ‘correlation’ in an attempt to think being in a manner that surpasses the relation between thought and being. This gives voice to an old hankering in Continental philosophy, the genuinely shrewd suspicion that something is wrong with the traditional understanding of human cognition. But rather than answer the skepticism that falls out of Hume’s account of human nature or Wittgenstein’s consideration of human normativity, the absurd assumption has been that one can simply think their way beyond the constraints of thought, simply reach out and somehow snatch ‘knowledge at a spooky distance.’ The poverty of this assumption lies in the most honest of all questions: ‘How do you know?’ given that (as Hume taught us) you are a human and so cursed with human cognitive frailties, given that (as Wittgenstein taught us) you are a language-user and so belong to normative communities.

‘Correlation’ is little more than a gimmick, the residue of a magical thinking that assumes naming a thing gives one power over it. It is meant to obscure far more than enlighten, to covertly conserve the Continental tradition of placing the subject on the altar of career-friendly critique, lest the actual problem–intentionality–stir from its slumber and devour twenty-five centuries of prescientific conceit and myopia. The call to think being precritically, which is to say, without thinking the relation of thought and being, amounts to little more than a conceptually atavistic stunt so long as Hume and Wittgenstein’s questions remain unanswered.

The post-intentional philosophy that follows from BBT, however, belongs to the self-same skeptical tradition of disclosing the contextual contingencies that constrain thought’s attempt to cognize being. As opposed to the brute desperation of simply ignoring subjectivity or normativity, it seizes upon them. Intentional concepts and phenomena, it argues, exhibit precisely the acausal ‘bottomlessness’ that medial neglect, a structural inevitability given a mechanistic understanding of the brain, forces on metacognition. A great number of powerful and profound illusions result, illusions that you confuse for yourself. You think you are a system of levers rather than a tangle of wiretaps. You think that understanding is yours. The low-dimensional cartoon of you standing within and apart from an object world is just that, a low-dimensional cartoon, a surrogate, facile and deceptive, for the high-dimensional facts of the brain-environment.

Thus is the problem of so-called ‘correlation’ solved, not by naming, shaming, and ersatz declaration, but rather by passing through the problematic, by understanding that the ‘subjective’ and the ‘normative’ are themselves natural and therefore amenable to scientific investigation. BBT explains the artifactual nature of the apparently inescapable correlation of thought and being, how medial neglect strands metacognition with an inexplicable covariance that it must conceive otherwise–in supra-natural terms. And it allows one to set aside the intentional conundrums of philosophy for what they are: arguments regarding interpretations of cognitive illusions.

Why assume the ‘design stance,’ given that it turns on informatic neglect? Why not regularly regard others in subpersonal terms, as mechanisms, when it strikes ‘you’ as advantageous? Or, more troubling still, is this simply coming to terms with what you have been doing all along? The ‘pragmatism’ once monopolized by ‘taking the intentional stance’ no longer obtains. For all we know, we could be more a confabulatory interface than anything, an informatic symbiont or parasite–our ‘consciousness’ a kind of tapeworm in the gut of the holy neural host. It could be this bad–worse. Corporate advertisers are beginning to think as much. And as I mentioned above, this is where the full inferential virulence of BBT stands revealed: it merely has to be plausible to demonstrate that anything could be the case.

And the happy possibilities are drastically outnumbered.

As for the question, ‘How do you know?’ BBT cheerfully admits that it does not, that it is every bit as speculative as any of its competitors. It holds forth its parsimonious explanatory reach, the way it can systematically resolve numerous ancient perplexities using only a handful of insights, as evidence of its advantage, as well as the fact that it is ultimately empirical, and so awaits scientific arbitration. BBT, unlike ‘OOO’ for instance, will stand or fall on the findings of cognitive science, rather than fade as all such transcendental positions fade on the tide of academic fashion.

And, perhaps most importantly, it is timely. As the brain becomes ever more tractable to science, the more antiquated and absurd prescientific discourses of the soul will become. It is folly to think that one’s own discourse is ‘special,’ that it will be the first prescientific discourse in history to be redeemed rather than relegated or replaced by the findings of science. What cognitive science discovers over the next century will almost certainly ruin or revolutionize nearly everything that has been assumed regarding the soul. BBT is mere speculation, yes, but mere speculation that turns on the most recent science and remains answerable to the science that will come. And given that science is the transformative engine of what is without any doubt the most transformative epoch in human history, BBT provides a means to diagnose and to prognosticate what is happening to us now–even going so far as to warn that intentionality will not constrain the posthuman.

What it does not provide is any redeeming means to assess or to guide. The post-intentional holds no consolation. When rules become regularities, nothing pretty can come of life. It is an ugly, even horrifying, conclusion, suggesting, as it does, that what we hold most sacred and profound is little more than a subreptive by-product of evolutionary indifference. And even in this, the relentless manner in which it explodes and eviscerates our conceptual conceits, it distinguishes itself from its soft-bellied competitors. It simply follows the track of its machinations, the algorithmic grub of ‘reason.’ It has no truck with flattering assumptions.

And this is simply to say that the Blind Brain Theory offers us a genuine way out, out of the old dichotomies, the old problems. It bids us to moult, to slough off transcendental philosophy like a dead serpentine skin. It could very well achieve the dream of all philosophy–only at the cost of everything that matters.

And really. What else did you fucking expect? A happy ending? That life really would turn out to be ‘what we make it’?

Whatever the conclusion is, it ain’t going to be Hollywood.