Three Pound Brain

Reactionary Atheism: Hagglund, Derrida, and Nooconservatism*

by rsbakker

The difference between the critic and the apologist in philosophy, one would think, is the difference between conceiving philosophy as refuge, a post hoc means to rationalize and so recuperate what we cherish or require, and conceiving philosophy as exposure, an ad hoc means to mutate thought and so see our way through what we think we cherish or require. Now in Continental philosophy so-called, the overwhelming majority of thinkers would consider themselves critics and not apologists. They would claim to be proponents of exposure, of the new, and deride the apologist for abusing reason in the service of wishful thinking.

But this, I hope to show, is little more than a flattering conceit. We are all children of Hollywood, all prone to faux-renegade affectations. Nowadays ‘critic,’ if anything, simply names a new breed of apologist. This is perhaps inevitable, in a certain sense. The more cognitive science learns regarding reason, the more intrinsically apologetic it seems to become, a confabulatory organ primarily adapted to policing and protecting our parochial ingroup aspirations. But it is also the case that thought (whatever the hell it is) has been delivered to a radically unprecedented juncture, one that calls its very intelligibility into question. Our ‘epoch of thinking’ teeters upon the abyssal, a future so radical as to make epic fantasy of everything we are presently inclined to label ‘human.’ Whether it acknowledges as much or not, all thought huddles in the shadow of the posthuman–the shadow of its end.

I’ve been thumping this particular tub for almost two decades now. It has been, for better or worse, the thematic impetus behind every novel I have written and every paper I have presented. And at long last, what was once a smattering of voices has become a genuine chorus (for reasons quite independent of my tub thumping I’m sure). Everyone agrees that something radical is happening. Also, everyone agrees that this ‘something’ turns on the ever-expanding powers of science–and the sciences of the brain in particular. This has led to what promises to become one of those generational changes in philosophical thinking, at least in its academic incarnation. Though winded, thought is at last attempting to pace the times we live in. But I fear that it’s failing this attempt, that, far from exposing itself to the most uncertain future humanity has ever known, materially let alone intellectually, it is rather groping for ways to retool and recuperate a philosophical heritage that the sciences are transforming into mythology as we speak. It is attempting to inoculate thought as it exists against the sweeping transformations engulfing its social conditions. To truly expose thought, I want to argue, is to be willing to let it die…

Or become inhuman.

My position is quite simple: Now that science is overcoming the neural complexities that have for so long made an intentional citadel out of the soul, it will continue doing what it has always done, which is to offer sometimes simple, sometimes sophisticated mechanical explanations of what it finds, effectively ‘disenchanting’ the brain the way it has the world. This first part, at least, is uncontroversial. The real question has to do with the ‘disenchantment,’ which is to say the degree to which these mechanical explanations will be commensurate with our intentional self-understanding, or what Sellars famously called the ‘manifest image.’ Since there are infinitely more ways for our mechanistic scientific understanding to contradict our intentional prescientific understanding than to confirm it, we should, all things being equal, expect that the latter will be overthrown. Indeed, we already have a growing mountain of evidence trending in this direction. Given our apologetic inclinations, however, it should come as no surprise that the literature is rife with arguments why all things are not equal. Aside from an ingrained suspicion of happy endings, especially where science is concerned (I’m inclined to think it will cut our throats), the difficulty I have with such arguments lies in their reliance on metacognitive intuition. For the life of me, I cannot understand why we are in any better position peering into our souls than our ancestors were peering into the heavens. Why should the accumulation of scientific information be any friendlier to our traditional, prescientific assumptions this one time around?

I simply don’t think the human, or for that matter, any of the concepts science has chased from the world into the shadows of the human brain, will prove to be the miraculous exception. Science will rewrite ‘rules’ the way it has orbits, ‘meanings’ the way it has planets, and so on, doing what it has done so many times in the past: taking simplistic, narcissistic notions founded on spare and fragmentary information and replacing them with portraits of breathtaking causal complexity.

This is why I’m so suspicious of the ongoing ‘materialist turn’ in Continental philosophy, why I see it more as a crypto-apologetic attempt to rescue traditional conceptual conceits than any genuine turn away from ‘experience.’ This is how I read Zizek’s The Parallax View several weeks back, and this is how I propose to read Martin Hagglund’s project in his recent (and quite wonderfully written) Radical Atheism: Derrida and the Time of Life. Specifically, I want to take issue with his materialist characterization of Derrida’s work, even though this seems to be the aspect of his book that has drawn the most praise. Aaron Hodges, in “Martin Hagglund’s Speculative Materialism,” contends that Radical Atheism has “effectively dealt the coup de grace to any understanding of deconstructive logic that remains under the sway of idealist interpretation.” Even John Caputo, in his voluminous counterargument, concedes that Hagglund’s Derrida is a materialist Derrida; he just happens to think that there are other Derridas as well.

Against the grain of Radical Atheism’s critical reception, then, I want to argue that no Derrida, Hagglund’s or otherwise, can be ‘materialist’ in any meaningful sense and remain recognizable as a ‘Derrida.’ He simply is not a philosopher of ‘ultratranscendence’ as Hagglund defines the term. Derrida is not the author of any singular thought ‘beyond’ the empirical and the transcendental. Nor does he, most importantly, provide any way to explain the fundamental ‘synthesis,’ as Hagglund calls it, required to make sense of experience.

To evidence this last point, I will rehearse the explanation of ‘synthesis’ provided by the Blind Brain Theory (BBT). I will then go on to flex a bit of theoretical muscle, to demonstrate the explanatory power of BBT, the way it can ‘get behind’ and explicate philosophical positions even as notoriously arcane as Husserlian phenomenology or Derridean deconstruction. This provides us with the conceptual resources required to see the extent of Derrida’s noocentrism, the way he remains, despite the apparent profundity of his aleatory gestures, thoroughly committed to the centrality of meaning–the intentional. Far from ‘radical,’ I will contend, Derrida remains a nooconservative thinker, one thoroughly enmeshed in the very noocentric thinking Hagglund and so many others seem to think he has surpassed.

For those not familiar with Radical Atheism, I should note the selective, perhaps even opportunistic, nature of the reading I offer. From the standpoint of BBT, the distinction between deconstruction and negative theology is the distinction between deflationary conceptions of intentionality in its most proximal and distal incarnations. Thus the title of the present piece, ‘Reactionary Atheism.’ To believe in meaning of any sort is to have faith in some version of ‘God.’ Finite or infinite, mortal or immortal, the intentional form is conserved–and as I hope to show, that form is supernatural. BBT is a genuinely post-intentional theoretical position. According to it, there are no ‘meaning makers,’ objective or subjective. According to it, you are every bit as mythological as the God you would worship or honour. In this sense, the contest between atheistic and apophatic readings of Derrida amounts to little more than another intractable theological dispute. On the account offered here, both houses are equally poxed.

My reading therefore concentrates on the first two chapters of Radical Atheism, where Hagglund provides an interpretation of how (as Derrida himself claims) trace and differance arise out of his critique of Husserl’s Phenomenology of Internal Time-consciousness. Since Hagglund’s subsequent defence of ‘radical atheism’ turns on the conclusions he draws from this interpretation–namely, the ‘ultratranscendental’ status of trace and differance and the explanation of synthesis they offer–undermining these conclusions serves to undermine Hagglund’s thesis as a whole.

Atheism as traditionally understood, Hagglund begins, does not question the desire for God or immortality and so leaves ‘mortal’ a privative concept. To embrace atheism is to settle for mere mortality. He poses radical atheism as Derrida’s alternative, the claim that the conceptual incoherence of the desire for God and immortality forces us to affirm its contrary, the mortal:

The key to radical atheism is what I analyze as the unconditional affirmation of survival. This affirmation is not a matter of choice that some people make and others do not: it is unconditional because everyone is engaged by it without exception. Whatever one may want or whatever one may do, one has to affirm the time of survival, since it opens the possibility to live on–and thus to want something or to do something–in the first place. This unconditional affirmation of survival allows us to read the purported desire for immortality against itself. The desire to live on after death is not a desire for immortality, since to live on is to remain subjected to temporal finitude. The desire for survival cannot aim at transcending time, since the given time is the only chance for survival. There is thus an internal contradiction in the so-called desire for immortality. Radical Atheism, 2

Time becomes the limit, the fundamental constraint, the way, Hagglund argues, to understand how the formal commitments at the heart of Derrida’s work render theological appropriations of deconstruction unworkable. To understand deconstruction, you need to understand Derrida’s analysis of temporality. And once you understand Derrida’s analysis of temporality, he claims, you will see that deconstruction entails radical atheism, the incoherence of desiring immortality.

Although Hagglund will primarily base his interpretation of deconstructive temporality on a reading of Speech and Phenomena, it is significant, I think, that he begins with a reading of “Ousia and Gramme,” which is to say, a reading of Derrida’s reading of Heidegger’s reading of Hegel! In “Ousia and Gramme,” Derrida is concerned with the deconstructive revision of the Heideggerean problematic of presence. The key to this revision, he argues, lies in one of the more notorious footnotes in Being and Time, where Heidegger recapitulates the parallels between Hegel’s and Aristotle’s considerations of temporality. This becomes “the hidden passageway that makes the problem of presence communicate with the problem of the written trace” (Margins of Philosophy, 34). Turning from Heidegger’s reading of Hegel, Derrida considers what Aristotle himself has to say regarding time in Physics (4:10), keen to emphasize Aristotle’s concern with the aporias that seem to accompany any attempt to think the moment. The primary problem, as Aristotle sees it, is the difficulty of determining whether the now, which divides the past from the future, is always one and the same or distinct, for the now always seems to somehow be the same now, even as it is unquestionably a different now. The lesson that Derrida eventually draws from this has to do with the way Heidegger, in his attempt to wrest time from the metaphysics of presence, ultimately commits the very theoretical sins that he imputes to Hegel and Aristotle. As he writes: “To criticize the manipulation or determination of any one of these concepts from within the system always amounts, and let this expression be taken with its full charge of meaning here, to going around in circles: to reconstituting, according to another configuration, the same system” (60). The lesson, in other words, is that there is no escaping the metaphysics of presence.
Heidegger’s problem isn’t that he failed to achieve what he set out to achieve–how could it be, when such failure is constitutive of philosophical thought?–but that he thought, if only for a short time, that he had succeeded.

The lesson that Hagglund draws from “Ousia and Gramme,” however, is quite different:

The pivotal question is what conclusion to draw from the antinomy between divisible time and indivisible presence. Faced with the relentless division of temporality, one must subsume time under a nontemporal presence in order to secure the philosophical logic of identity. The challenge of Derrida’s thinking stems from his refusal of this move. Deconstruction insists on a primordial division and thereby enables us to think the radical irreducibility of time as constitutive of any identity. Radical Atheism, 16-17

If there is one thing about Hagglund’s account that almost all his critics agree on, it is his clarity. But even at this early juncture, it should be clear that this purported ‘clarity’ possesses a downside. Derrida raises and adapts the Aristotelian problem of divisibility in “Ousia and Gramme” to challenge, not simply Heidegger’s claim to primordiality, but all claims to primordiality. And he criticizes Heidegger, not for thinking time in terms of presence, but for believing it was possible to think time in any other way. Derrida is explicitly arguing that ‘refusing this move’ is simply not possible, and he sees his own theoretical practice as no exception. His ‘challenge,’ as Hagglund calls it, lies in conceiving presence as something at once inescapable and impossible. Hagglund, in other words, distills his ‘pivotal question’ via a reading of “Ousia and Gramme” that pretty clearly runs afoul of the very theoretical perils it warns against. We will return to this point in due course.

Having isolated the ‘pivotal,’ Hagglund turns to the ‘difficult’:

The difficult question is how identity is possible in spite of such division. Certainly, the difference of time could not even be marked without a synthesis that relates the past to the future and thus posits an identity over time. Philosophies of time-consciousness have usually solved the problem by anchoring the synthesis in a self-present subject, who relates the past to the future through memories and expectations that are given in the form of the present. The solution to the problem, however, must assume that the consciousness that experiences time in itself is present and thereby exempt from the division of time. Hence, if Derrida is right to insist that the self-identity of presence is impossible a priori, then it is all the more urgent to account for how the synthesis of time is possible without being grounded in the form of presence. 17

Identity has to come from somewhere. And this is where Derrida, according to Hagglund, becomes a revolutionary part of the philosophical solution. “For philosophical reason to advocate endless divisibility,” he writes, “is tantamount to an irresponsible empiricism that cannot account for how identity is possible” (25). This, Hagglund contends, is Derrida’s rationale for positing the trace. The nowhere of the trace becomes the ‘from somewhere’ of identity, the source of ‘originary synthesis.’ Hagglund offers Derrida’s account of the spacing of time and the temporalizing of space as a uniquely deconstructive account of synthesis, which is to say, an account of synthesis that does not “subsume time under a nontemporal presence in order to secure the philosophical logic of identity” (16).

Given the centrality of the trace to his thesis, critics of Radical Atheism were quick to single it out for scrutiny. Where Derrida seems satisfied with merely gesturing to the natural, and largely confining actual applications of trace and differance to semantic contexts, Hagglund presses further: “For Derrida, the spacing of time is an ‘ultratranscendental’ condition from which nothing can be exempt” (19). And when he says ‘nothing,’ Hagglund means nothing, arguing that everything from the ideal to “minimal forms of life” answers to the trace and differance. Hagglund was quick to realize the problem. In a 2011 Journal of Philosophy interview, he writes, “[t]he question then, is how one can legitimize such a generalization of the structure of the trace. What is the methodological justification for speaking of the trace as a condition for not only language and experience but also processes that extend beyond the human and even the living?”

Or to put the matter more simply, just what is ‘ultratranscendental’ supposed to mean?

Derrida, for his part, saw trace and differance as (to use Gasche’s term) ‘quasi-transcendental.’ Derrida’s peculiar variant of contextualism turns on his account of trace and differance. Where pragmatic contextualists are generally fuzzy about the temporality implicit to the normative contexts they rely upon, Derrida actually develops what you could call a ‘logic of context’ using trace and differance as primary operators. This is why his critique of Husserl in Speech and Phenomena is so important. He wants to draw our eye to the instant-by-instant performative aspect of meaning. When you crank up the volume on the differential (as opposed to recuperative) passage of time, it seems to be undeniably irreflexive. Deconstruction is a variant of contextualism that remains ruthlessly (but not exclusively) focussed on the irreflexivity of semantic performances, dramatizing the ‘dramatic idiom’ through readings that generate creativity and contradiction. The concepts of trace and differance provide synchronic and diachronic modes of thinking this otherwise occluded irreflexivity. What renders these concepts ‘quasi-transcendental,’ as opposed to transcendental in the traditional sense, is nothing other than trace and differance. Where Hegel temporalized the krinein of Critical Philosophy across the back of the eternal, conceiving the recuperative role of the transcendental as a historical convergence upon his very own philosophy, Derrida temporalizes the krinein within the aporetic viscera of this very moment now, overturning the recuperative role of the transcendental, reinterpreting it as interminable deflection, deferral, divergence–and so denying his thought any self-consistent recourse to the transcendental. The concept DIFFERANCE can only reference differance via the occlusion of differance. “The trace,” as Derrida writes, “is produced as its own erasure” (“Ousia and Gramme,” 65). 
One can carve out a place for trace and differance in the ‘system space’ of philosophical thinking, say their ‘quasi-transcendentality’ (as Gasche does in The Tain of the Mirror, for instance) resides in the way they name both the condition of possibility and impossibility of meaning and life, or one can, as I would argue Derrida himself did, evince their ‘quasi-transcendentality’ through actual interpretative performances. One can, in other words, either refer or revere.

Since second-order philosophical accounts are condemned to the former, it has become customary in the philosophical literature to assign content to the impossibility of stable content assignation, to represent the way performance, or the telling, cuts against representation, or the told. (Deconstructive readings, you could say, amount to ‘toldings,’ readings that stubbornly refuse to allow the antinomy of performance and representation to fade into occlusion). This, of course, is one of the reasons late 20th century Continental philosophy came to epitomize irrationalism for so many in the Anglo-American philosophical community. It’s worth noting, however, that in an important sense, Derrida agreed with these worries: this is why he prioritized demonstrations of his position over schematic statements, drawing cautionary morals as opposed to traditional theoretical conclusions. As a way of reading, deconstruction demonstrates the congenital inability of reason and representation to avoid implicitly closing the loop of contradiction. As a speculative account of why reason and representation possess this congenital inability, deconstruction explicitly closes that loop itself.

Far from being a theoretical virtue, then, ‘quasi-transcendence’ names a liability. Derrida is trying to show philosophy that inconsistency, far from being a distal threat requiring some kind of rational piety to avoid, is maximally proximal, internal to its very practice. The most cursory survey of intellectual history shows that every speculative position is eventually overthrown via the accumulation of interpretations. Deconstruction, in this sense, can be seen as a form of ‘interpretative time-travel,’ a regimented acceleration of processes always already in play, a kind of ‘radical translation’ put into action in the manner most violent to theoretical reason. The only way Derrida can theoretically describe this process, however, is by submitting to it–which is to say, by failing the way every other philosophy has failed. ‘Quasi-transcendence’ is his way of building this failure in, a double gesture of acknowledging and immunizing; his way of saying, ‘In speaking this, I speak what cannot be spoken.’

(This is actually the insight that ended my tenure as a ‘Branch Derridean’ what seems so long ago, the realization that theoretical outlooks that manage to spin virtue out of their liabilities result in ‘performative first philosophy,’ positions tactically immune to criticism because they incorporate some totalized interpretation of critique, thus rendering all criticisms of their claims into exemplifications of those claims. This is one of the things I’ve always found the most fascinating about deconstruction: the way it becomes (for those who buy into it) a performative example of the very representational conceit it sets out to demolish.)

‘Quasi-transcendental,’ then, refers to ‘concepts’ that can only be shown. So what, then, does Hagglund mean by ‘ultratranscendental’ as opposed to ‘transcendental’ and ‘quasi-transcendental’? The first thing to note is that Hagglund, like Gasche and others, is attempting to locate Derrida within the ‘system space’ of philosophy and theory more generally. For him (as opposed to Derrida), deconstruction implies a distinct position that rationalizes subsequent theoretical performances. As far as I can tell, he views the recursive loop of performance and representation, telling and told, as secondary. The ultratranscendental is quite distinct from the quasi-transcendental (though my guess is that Hagglund would dispute this). For Hagglund, rather, the ultratranscendental is thought through the lens of the transcendental more traditionally conceived:

On the one hand, the spacing of time has an ultratranscendental status because it is the condition for everything all the way up and including the ideal itself. The spacing of time is the condition not only for everything that can be cognized and experienced, but also for everything that can be thought and desired. On the other hand, the spacing of time has an ultratranscendental status because it is the condition for everything all the way down to minimal forms of life. As Derrida maintains, there is no limit to the generality of differance and the structure of the trace applies to all fields of the living. Radical Atheism, 19

The ultratranscendental, in other words, is simply an ‘all the way’ transcendental, as much a condition of possibility of life as a condition of possibility of experience. “The succession of time,” Hagglund states in his Journal of Philosophy interview, “entails that every moment negates itself–that it ceases to be as soon as it comes to be–and therefore must be inscribed as trace in order to be at all.” Trace and differance, he claims, are logical as opposed to ontological implications of succession, and succession seems to be fundamental to everything.

This is what warrants the extension of trace and differance from the intentional (the kinds of contexts in which Derrida was prone to deploy them) to the natural. And this is why Hagglund is convinced he’s offering a materialist reading of Derrida, one that allows him to generalize Derrida’s arche-writing to an ‘arche-materiality’ consonant with philosophical naturalism. But when you turn to his explicit statements to this effect, you find that the purported, constitutive generality of the trace, what makes it ultratranscendental, becomes something quite different:

This notion of the arche-materiality can accommodate the asymmetry between the living and the nonliving that is integral to Darwinian materialism (the animate depends upon the inanimate but not the other way around). Indeed, the notion of arche-materiality allows one to account for the minimal synthesis of time–namely, the minimal recording of temporal passage–without presupposing the advent or existence of life. The notion of arche-materiality is thus metatheoretically compatible with the most significant philosophical implications of Darwinism: that the living is essentially dependant on the nonliving, that animated intention is impossible without mindless, inanimate repetition, and that life is an utterly contingent and destructible phenomenon. Unlike current versions of neo-realism or neo-materialism, however, the notion of arche-materiality does not authorize its relation to Darwinism by constructing an ontology or appealing to scientific realism but rather articulating a logical infrastructure that is compatible with its findings. Journal of Philosophy

The important thing to note here is how Hagglund is careful to emphasize that the relationship between arche-materiality and Darwinian naturalism is one of compatibility. Arche-materiality, here, is posited as an alternative way to understand the mechanistic irreflexivity of the life sciences. This is more than a little curious given the ‘ultratranscendental’ status he wants to accord to the former. If it is the case that trace and differance understood as arche-materiality are merely compatible with rather than anterior to and constitutive of the mechanistic, Darwinian paradigm of the life sciences, then how could they be ‘ultratranscendental,’ which is to say, constitutive, in any sense? As an alternative, one might wonder what advantages, if any, arche-materiality has to offer theory. The advantages of mechanistic thinking should be clear to anyone who has seen a physician. So the question becomes what kind of conceptual work trace and differance actually do.

Hagglund, in effect, has argued himself into the very bind which I fear is about to seize Continental philosophy as a whole. He recognizes the preposterous theoretical hubris involved in arguing that the mechanistic paradigm depends on arche-materiality, so he hedges, settles for ‘compatibility’ over anteriority. In a sense, he has no choice. Time is itself the object of scientific study, and a divisive one at that. Asserting that trace and differance are constitutive of the mechanistic paradigm would place his philosophical speculation on firmly empirical ground (physics and cosmology, to be precise)–a place he would rather not be (and for good reason!).

But this requires that he retreat from his earlier claims regarding the ultratranscendental status of trace and differance, that he rescind the claim that they constitute an ‘all the way down’ condition. He could claim they are merely transcendental in the Kantian, or ‘conditions of experience,’ sense, but then that would require abandoning his claim to materialism, and so strand him with the ‘old Derrida.’ So instead he opts for ‘compatibility,’ and leaves the question of theoretical utility, the question of why we should bother with arcane speculative tropes like trace and differance given the boggling successes of the mechanistic paradigm, unasked.

One could argue, however, that Hagglund has already given us his answer: trace and differance, he contends, allow us to understand how reflexivity arises from irreflexivity absent the self-present subject. This is their signature contribution. As he writes:

The synthesis of the trace follows from the constitution of time we have considered. Given that the now can appear only by disappearing–that it passes away as soon as it comes to be–it must be inscribed as a trace in order to be at all. This is the becoming-space of time. The trace is necessarily spatial, since spatiality is characterized by the ability to remain in spite of temporal succession. Spatiality is thus the condition for synthesis, since it enables the tracing of relations between past and future. Radical Atheism, 18

But as far as ‘explanations’ are concerned, it remains unclear how this can be anything other than a speculative posit. The synthesis of now moments occurs somehow. Since the past now must be recuperated within future nows, it makes sense to speak of some kind of residuum or ‘trace.’ If this synthesis isn’t the product of subjectivity, as Kant and Husserl would have it, then it has to be the product of something. The question is why this ‘something’ need have anything to do with space. Why does the fact that the trace (like the Dude) ‘abides’ have anything to do with space? The fact that both are characterized by immunity to succession implies, well… nothing. The trace, you could say, is ‘spatial’ insofar as it possesses location. But it remains entirely unclear how spatiality ‘enables the tracing of relations between past and future,’ and so becomes the ‘condition for synthesis.’

Hagglund’s argument simply does not work. I would be inclined to say the same of Derrida, if I actually thought he was trying to elaborate a traditional theoretical position in the system space of philosophy. But I don’t: I think the aporetic loop he establishes between deconstructive theory and practice is central to understanding his corpus. Derrida takes the notion of quasi-transcendence (as opposed to ultratranscendence) quite seriously. ‘Trace’ and ‘differance’ are figures as much as concepts, which is precisely why he resorts to a pageant of metaphors in his subsequent work, ‘originary supplements’ such as spectres, cinders, gifts, pharmakons, and so on. The same can be said of ‘arche-writing’ and, yes, even ‘spacing’: Derrida literally offers these as myopic and defective ways of thinking some fraction of the unthinkable. Derrida has no transcendental account of how reflexivity arises from irreflexivity, only a myriad of quasi-transcendental ways we might think the relation of reflexivity and irreflexivity. The most he would say is that trace and differance allow us to understand how the irreflexivity characteristic of mechanism operates both on and within the synthesis of experience.

At the conclusion of “Freud and the Scene of Writing,” Derrida discusses the ‘radicalization of the thought of the trace,’ adding parenthetically, “a thought because it escapes the binarism and makes binarism possible on the basis of a nothing” (Writing and Difference, 230). This, once again, is what makes the trace and differance ‘quasi-transcendental.’ Our inability to think the contemporaneous, irreflexive origin of our thinking means that we can only think that irreflexivity under ‘erasure,’ which is to say, in terms at once post hoc and ad hoc. Given that trace and differance refer to the irreflexive, procrustean nature of representation (or ‘presence’), the fact that being ‘vanishes’ in the disclosure of beings, it seems to make sense that we should wed our every reference to them with an admission of the vehicular violence involved, the making present (via the vehicle of thought) of what can never be, nor ever has been, present.

In positioning Derrida’s thought beyond the binarism of transcendental and empirical, Hagglund is situating deconstruction in the very place Derrida tirelessly argues thought cannot go. As we saw above, Hagglund thinks advocating ‘endless divisibility’ is ‘philosophically irresponsible’ given the fact of identity (Radical Atheism, 25). What he fails to realize is that this is precisely the point: preaching totalized irreflexivity is a form of ‘irresponsible empiricism’ for philosophical reason. Trace and differance, as more than a few Anglo-American philosophical commentators have noted, are rationally irresponsible. No matter how fierce the will to hygiene and piety, reason is always besmirched and betrayed by its occluded origins. Thus the aporetic loop of theory and practice, representation and performance, reflexivity and irreflexivity–and, lest we forget, interiority and exteriority…

Which is to say, the aporetic loop of spacing. As we’ve seen, Hagglund wants to argue that spacing constitutes a solution to the fundamental philosophical problem of synthesis. If this is indeed the cornerstone of Derrida’s philosophy as he claims, then the ingenious Algerian doesn’t seem to think it bears making explicit. If anything, the sustained, explicit considerations of temporality that characterize his early work fade into the implicit background of his later material. This is because Derrida offers spacing, not as an alternate, nonintentional explanation of synthesis, but rather as a profound way to understand the aporetic form of that synthesis:

Even before it ‘concerns’ a text in narrative form, double invagination constitutes the story of stories, the narrative of narrative, the narrative of deconstruction in deconstruction: the apparently outer edge of an enclosure [cloture], far from being simple, simply external and circular, in accordance with the philosophical representation of philosophy, makes no sign beyond itself, toward what is utterly other, without becoming double or dual, without making itself be ‘represented,’ refolded, superimposed, re-marked within the enclosure, at least in what the structure produces as an effect of interiority. But it is precisely this structure-effect that is being deconstructed here. “More Than One Language,” 267-8

The temporal assumptions Derrida isolates in his critique of Husserl are clearly implicit here, but it’s the theme of spacing that remains explicit. What Derrida is trying to show us, over and over again, is a peculiar torsion in what we call experience: the ‘aporetic loop’ I mentioned above. Its most infamous statement is “there is nothing outside the text” (Of Grammatology, 158) and its most famous image is that of the “labyrinth which includes in itself its own exits” (Speech and Phenomena, 104). Derrida never relinquishes the rhetoric of space because the figure it describes is the figure of philosophy itself, the double-bind where experience makes possible the world that makes experience possible.

What Hagglund calls synthesis is at once the solution and the dilemma. It relates to the outside by doubling, becoming ‘inside-outside,’ thus exposing itself to what lies outside the possibility of inside-outside (and so must be thought under erasure). Spacing refers to the interiorization of exteriority via the doubling of interiority. The perennial philosophical sin (the metaphysics of presence) is to confuse this folding of interiority for all there is, for inside and outside. So to take Kant as an example, positing the noumenal amounts to a doubling of interiority: the binary of empirical and transcendental. What Derrida is attempting is nothing less than a thinking that remains, as much as possible, self-consciously open to what lies outside the inside-outside, the ‘nothing that makes such binarisms possible.’ Since traditional philosophy can only think this via presence, which is to say, via another doubling, the generation of another superordinate binary (the outside-outside versus the inside-outside (or as Hagglund would have it, the ultratranscendental versus the transcendental/empirical)), it can only remain unconsciously open to this absolute outside. Thus Derrida’s retreat into performance.

Far from any ‘philosophical solution’ to the ‘philosophical problem of synthesis,’ spacing provides a quasi-transcendental way to understand the dynamic and aporetic form of that synthesis, giving us what seems to be the very figure of philosophy itself, as well as a clue as to how thinking might overcome the otherwise all-conquering illusion of presence. Consider the following passage from “Differance,” a more complete version of the quote Hagglund uses to frame his foundational argument in Radical Atheism:

An interval must separate the present from what it is not in order for the present to be itself, but this interval that constitutes it as present must, by the same token, divide the present in and of itself, thereby also dividing, along with the present, everything that is thought on the basis of the present, that is, in our metaphysical language, every being, and singularly substance or the subject. In constituting itself, in dividing itself dynamically, this interval is what might be called spacing, the becoming-space of time or the becoming-time of space (temporization). And it is this constitution of the present, as an ‘originary’ and irreducibly nonsimple (and therefore, stricto sensu nonoriginary) synthesis of marks, or traces of retentions and protentions (to reproduce analogically and provisionally a phenomenological and transcendental language that soon will reveal itself to be inadequate), that I propose to call archi-writing, archi-traces, or differance. Which (is) (simultaneously) spacing (and) temporization. Margins of Philosophy, 13

Here we clearly see the movement of ‘double invagination’ described above, the way the ‘interval’ divides presence from itself both within itself and without, generating the aporetic figure of experience/world that would for better or worse become Derrida’s lifelong obsession. The division within is what opens the space (as inside/outside), while the division without, the division that outruns the division within, is what makes this space the whole of space (because of the impossibility of any outside inside/outside). Hagglund wants to argue “that an elaboration of Derrida’s definition allows for the most rigourous thinking of temporality by accounting for an originary synthesis without grounding it in an indivisible presence” (Radical Atheism, 18). Not only is his theoretical, ultratranscendental ‘elaboration’ orthogonal to Derrida’s performative, quasi-transcendental project, his rethinking of temporality (despite its putative ‘rigour’), far from explaining synthesis, ultimately re-inscribes him within the very metaphysics of presence he seeks to master and chastise. The irony, then, is that even though Hagglund utterly fails to achieve his thetic goals, there is a sense in which he unconsciously (and inevitably) provides a wonderful example of the very figure Derrida is continually calling to our attention. The problem of synthesis is the problem of presence, and it is insoluble, insofar as any theoretical solution, for whatever reason, is doomed to merely reenact it.

Derrida does not so much pose a solution to the problem of synthesis as he demonstrates the insolubility of the problem given the existing conceptual resources of philosophy. At most Derrida is saying that whatever brings about synthesis does so in a way that generates presence as deconstructively conceived, which is to say, structured as inside/outside, self/other, experience/world–at once apparently complete and ‘originary’ and yet paradoxically fragmentary and derivative. Trace and differance provide him with the conceptual means to explore the apparent paradoxicality at the heart of human thought and experience at a particular moment of history:

Differance is neither a word nor a concept. In it, however, we see the juncture–rather than the summation–of what has been most decisively inscribed in the thought of what is conveniently called our ‘epoch’: the difference of forces in Nietzsche, Saussure’s principle of semiological difference, difference as the possibility of [neurone] facilitation, impression and delayed effect in Freud, difference as the irreducibility of the trace of the other in Levinas, and the ontic-ontological difference in Heidegger. Speech and Phenomena, 130

It is this last ‘difference,’ the ontological difference, that Derrida singles out for special consideration. Differance, he continues, is strategic, a “provisionally privileged” way to track the “closure of presence” (131). In fact, if anything is missing in an exegetical sense from Hagglund’s consideration of Derrida it has to be Heidegger, who edited The Phenomenology of Internal Time-Consciousness and, like Derrida, arguably devised his own philosophical implicature via a critical reading of Husserl’s account of temporality. In this sense, you could say that trace and differance are not the result of a radicalization of Husserl’s account of time, but rather a radicalization of a radicalization of that account. It is the ontological difference, the difference between being and beings, that makes presence explicit as a problem. Differance, you could say, strategically and provisionally renders the problem of presence (or ‘synthesis’) dynamic, conceives it as an effect of the trace. Where the ontological difference allows presence to hang pinned in philosophical system space for quick reference and retrieval, differance ‘references’ presence as a performative concern, as something pertaining to this very moment now. Far from providing the resources to ‘solve’ presence, differance expands the problem it poses by binding (and necessarily failing to bind) it to the very kernel of now.

Contra Hagglund, trace and differance do not possess the resources to even begin explaining synthesis in any meaningful sense of the term ‘explanation.’ To think that they do, I have argued, is to misconceive both the import and the project of deconstruction. But this does not mean that presence/synthesis is in fact insoluble. As the above quote suggests, Derrida himself understood the ‘epochal’ (as opposed to ‘ultratranscendental’) nature of the problematic motivating trace and differance. A student of intellectual history, he understood the contingency of the resources we are able to bring to any philosophical problem. He did not, as Adorno did working through the same conceptual dynamics via negative dialectics and identity thinking, hang his project from the possibility of some ‘Messianic moment,’ but he nonetheless understood that the radical exposure whose semantic shadow he tirelessly attempted to chart was itself radically exposed.

And as it so happens, we are presently living through what is arguably the most revolutionary philosophical epoch of all, the point when the human soul, so long sheltered by the mad complexities of the brain, is at long last yielding to the technical and theoretical resources of the natural sciences. What Hagglund, deferring to the life sciences paradigm, calls ‘compatibility’ is a constitutive relation after all, only one running from nature to thought, world to experience. Trace and differance, far from ‘explaining’ the ‘ultratranscendental’ possibility of ‘life,’ are themselves open/exposed to explanation in naturalistic terms. They are not magical.

Deconstruction can be naturalized.


So what then is synthesis? How does reflexivity arise from irreflexivity?

Before tackling this question we need to remind ourselves of the boggling complexity of the world as revealed by the natural sciences. Phusis kruptesthai philei, Heraclitus allegedly said, ‘nature loves hiding.’ What it hides ‘behind’ is nothing less than our myriad cognitive incapacities, our inability to fathom complexities that outrun our brain’s ability to sense and cognize. ‘Flicker fusion’ in psychophysics provides a rudimentary and pervasive example: when the frequency of a flickering light crosses various (condition-dependent) thresholds, our experience of it will ‘fuse.’ What was a series of intermittent flashes becomes continuous illumination. As pedestrian as this phenomenon seems, it has enormous practical and theoretical significance. This is the threshold that determines, for instance, the frame rate for the presentation of moving images in film or video. Such technologies, you could say, actively exploit our sensory and cognitive bottlenecks, hiding with nature beyond our ability to differentiate.
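The exploit is easy to caricature in code. The following sketch (the frequencies, window size, and sampling rate are arbitrary stand-ins, not real psychophysical values) shows how a perceiver that can only average over a fixed temporal window loses the flicker once the flash frequency outruns that window:

```python
# A toy model of flicker fusion as a sampling limit.
# All numbers are illustrative assumptions, not psychophysics.

def flashes(frequency_hz, duration_s=1.0, resolution_hz=1000):
    """Square-wave light: on for half of each cycle."""
    n = int(duration_s * resolution_hz)
    period = resolution_hz / frequency_hz
    return [1.0 if (i % period) < period / 2 else 0.0 for i in range(n)]

def perceive(signal, window):
    """A perceiver that can only average over `window` samples:
    differences inside the window make no difference to it."""
    return [sum(signal[i:i + window]) / window
            for i in range(0, len(signal) - window + 1, window)]

slow = perceive(flashes(2), window=100)    # 2 Hz: flicker survives
fast = perceive(flashes(100), window=100)  # 100 Hz: fuses

print(min(slow), max(slow))  # distinct values: flicker is seen
print(min(fast), max(fast))  # identical values: 'continuous' light
```

At 2 Hz the averaged signal still swings between on and off; at 100 Hz every window contains the same mix of on and off, and the light ‘fuses’ into a constant half-intensity glow. The difference is still there in the signal, but not for the perceiver.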

Differentiations that exceed our brain’s capacity to sense/cognize make no difference. Or put differently, information (understood in the basic sense of systematic differences making systematic differences) that exceeds the information processing capacities of our sensory and cognitive systems simply does not exist for those systems–not even as an absence. It simply never occurs to people that their incandescent lights are in fact discontinuous. Thus the profundity of the Heraclitean maxim: not only does nature conceal itself behind the informatic blind of complexity, it conceals this concealment. This is what makes science such a hard-won cultural achievement, why it took humanity so long (almost preposterously so, given hindsight) to see that it saw so little. Lacking information pertaining to our lack of information, we assumed we possessed all the information required. We congenitally assumed, in other words, the sufficiency of what little information we had available. Only now, after centuries of accumulating information via institutionalized scientific inquiry, can we see how radically insufficient that information was.

Take geocentrism for instance. Lacking information regarding the celestial motion and relative location of the earth, our ancestors assumed it was both motionless and central, which is to say, positionally self-identical relative to itself and the cosmos. Geocentrism is the result of a basic perspectival illusion, a natural assumption to make given the information available and the cognitive capacities possessed. As strange as it may sound, it can be interpreted as a high-dimensional, cognitive manifestation of flicker fusion, the way the absence of information (differences making differences) results in the absence of differentiation, which is to say, identity.
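The point that absent differences yield identity admits a toy illustration. Here a coarse encoder (the bit depths are arbitrary assumptions, nothing more) registers a slowly ‘moving’ value and a motionless one as one and the same; only a finer-grained encoder recovers the difference:

```python
# A toy illustration: differences below a channel's resolution
# do not exist for that channel -- not even as an absence.
# The bit depths are arbitrary assumptions.

def encode(signal, levels):
    """Quantize a real-valued signal into `levels` discrete bins.
    Differences smaller than a bin width simply vanish."""
    return [round(x * (levels - 1)) for x in signal]

moving = [0.500, 0.501, 0.502, 0.503]   # a world in slight motion
still  = [0.500, 0.500, 0.500, 0.500]   # a motionless world

# For the coarse observer the two worlds are identical:
print(encode(moving, 16) == encode(still, 16))      # True

# A higher-resolution observer recovers the difference:
print(encode(moving, 4096) == encode(still, 4096))  # False
```

Nothing in the coarse encoding marks the missing information: the two worlds are not ‘approximately’ the same for the low-resolution observer, they are simply identical, which is the geocentric predicament in miniature.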

Typically we identify ‘misidentifications’ with the misapplication of representations, as when, for example, children call whales fish. Believing whales are fish and believing the earth is the motionless centre of the universe would thus seem to be quite different kinds of mistakes. Both are ‘misrepresentations,’ mismatches between cognition and the world, but where the former mistake is categorical, the latter is empirical. The occult nature of this ‘matching’ makes it difficult to do much more than classify them together as mistakes, the one a false identification, the other a false theory.

Taking an explicitly informatic view, however, allows us to see both as versions of the mistake you’re making this very moment, presuming as you do the constancy of your illuminated computer screen (among other things). Plugging the brain into its informatic environment reveals the decisive role played by the availability of information, how thinking whales are fish and thinking the earth is the motionless centre of the universe both turn on the lack of information, the brain’s inability to access the systematic differences required to differentiate whales from fish or the earth’s position over time. Moreover, it demonstrates the extraordinarily granular nature of human cognition as traditionally conceived. It reveals, in effect, the possibility that our traditional, intentional understanding of cognition should itself be seen as an artifact of information privation.

Each of the above cases–flicker fusion, geocentrism, and misidentification–involves our brain’s ability to comprehend its environments given its cognitive resources and the information available. With respect to cognizing cognition, however, we need to consider the brain’s ability to cognize itself given, once again, its cognitive resources and the information available. Much of the philosophical tradition has attributed an exemplary status to self-knowledge, thereby assuming that the brain is in a far better position to cognize itself than its environments. But as we saw in the case of environmental cognition, the absence of information pertaining to the absence of information generates the illusion of sufficiency, the assumption that the information available is all the information there is. A number of factors, including the evolutionary youth of metacognition, the astronomical complexity of the brain, not to mention the growing mountain of scientific evidence indicating rampant metacognitive error, suggest that our traditional assumptions regarding the sufficiency of theoretical metacognition need to be set aside. It’s becoming increasingly likely that metacognitive intuitions, far from constituting some ‘plenum,’ are actually the product of severe informatic scarcity.

Nor should we be surprised: science is only just beginning to mine the informatic complexities of the human brain. Information pertaining to what we are as a matter of scientific fact is only now coming to light. Left to our own devices, we can only see so much of the sky. The idea of our ancient ancestors looking up and comprehending everything discovered by modern physics and cosmology is, well, nothing short of preposterous. They quite simply lacked the information. So why should we think peering at the sky within will prove any different than the sky above? Taking the informatic perspective thus raises the spectre of noocentrism, the possibility that our conception of ourselves as intentional is a kind of perspectival illusion pertaining to metacognition not unlike geocentrism in the case of environmental cognition.

Thus the Blind Brain Theory, the attempt to naturalistically explain intentional phenomena in terms of the kinds and amounts of information missing. Where Hagglund claims ‘compatibility’ with Darwinian naturalism, BBT exhibits continuity: it takes the mechanistic paradigm of the life sciences as its basis. To the extent that it can explain trace and differance, then, it can claim to have naturalized deconstruction.

According to BBT, the intentional structure of first-person experience–the very thing phenomenology takes itself to be describing–is an artifact of informatic neglect, a kind of cognitive illusion. So, for instance, when Hagglund (explaining Husserl’s account of time-consciousness) writes “[t]he notes that run off and die away can appear as a melody only through an intentional act that apprehends them as an interconnected sequence” (56) he is literally describing the way that experience appears to a metacognition trussed in various forms of neglect. As we shall see, where Derrida, via the quasi-transcendentals of trace and differance, can only argue the insufficiencies plaguing such intentional acts, BBT possesses the resources to naturalistically explain, not only the insufficiencies, but why metacognition attributes intentionality to temporal cognition at all, why the apparent paradoxes of time-consciousness arise, and why it is that trace and differance make ‘sense’ the way they do. ‘Brain blindness’ or informational lack, in other words, can not only explain many of the perplexities afflicting consciousness and the first-person, it can also explain–if only in a preliminary and impressionistic way–much of the philosophy turning on what seem to be salient intentional intuitions.

Philosophy becoming transcendentally self-conscious as it did with Hume and Kant can be likened to a kid waking up to the fact that he lives in a peculiar kind of box, one not only walled by neglect (which is to say, the absence of information–or nothing at all), but unified by it as well. Kant’s defining metacognitive insight came with Hume: Realizing the wholesale proximal insufficiency of experience, he understood that philosophy must be ‘critical.’ Still believing in reason, he hoped to redress that insufficiency via his narrow form of transcendental interpretation. He saw the informatic box, in other words, and he saw how everything within it was conditioned, but assuming the sufficiency of metacognition, he assumed the validity of his metacognitive ‘deductions.’ Thus the structure of the empirical, the conditioned, and the transcendental, the condition: the attempt to rationally recuperate the sufficiency of experience.

But the condition is, as a matter of empirical fact, neural. The speculative presumption that something resembling what we think we metacognize as soul, mind, or being-in-the-world arises at some yet-to-be naturalized ‘level of description’–noocentrism–is merely that, a speculative presumption that in this one special case (predictably, our case) science will redeem our intentional intuitions. BBT offers the contrary speculative presumption, that something resembling what we think we metacognize as soul, mind, or being-in-the-world will not arise at some yet-to-be naturalized ‘level of description’ because nothing resembles what we think we metacognize at any level. Cognition is fractionate, heuristic, and captive to the information available. The more scant or mismatched the information, the more error prone cognition becomes. And no cognitive system faces the informatic challenges confronting metacognition. The problem, simply put, is that we lack any ‘meta-metacognition,’ and thus any intuition of the radical insufficiency of the information available relative to the cognitive resources possessed. The kinds of low-dimensional distortions revealed are therefore taken as apodictic.

There are reasons why first-person experience appears the way it does, they just happen to be empirical rather than transcendental. Transcendental explanation, you could say, is an attempt to structurally regiment first-person experience in terms that take the illusion to be real. The kinds of tail-chasing analyses one finds in Husserl literally represent an attempt to dredge some kind of formal science out of what are best understood as metacognitive illusions. The same can be said for Kant. Although he deserves credit for making the apparent asymptotic structure of conscious experience explicit, he inevitably confused the pioneering status of his subsequent interpretations–the fact that they were, for the sake of sheer novelty, the ‘only game in town’–for a kind of synthetic deductive validity. Otherwise he was attempting to ‘explain’ what are largely metacognitive illusions.

According to BBT, ‘transcendental interpretation’ represents the attempt to rationalize what it is we think we see when we ‘reflect’ in terms (intentional) congenial to what it is we think we see. The problem isn’t simply that we see far too little, but that we are entirely blind to the very thing we need to see: the context of neurofunctional processes that explains the why and how of the information broadcast to or integrated within conscious experience. To say the neurofunctionality of conscious experience is occluded is to say metacognition accesses no information regarding the actual functions discharged by the information broadcast or integrated. Blind to what lies outside its informatic box, metacognition confuses what it sees for all there is (as Kahneman might say), and generates ‘transcendental interpretations’ accordingly. Reasoning backward with inadequate cognitive tools from inadequate information, it provides ever more interpretations to ‘hang in the air’ with the interpretations that have come before.

‘Transcendental,’ in other words, simply names those prescientific, medial interpretations that attempt to recuperate the apparent sufficiency of conscious experience as metacognized. BBT, on the other hand, is exclusively interested in medial interpretations of what is actually going on, regardless of speculative consequences. It is an attempt to systematically explain away conscious experience as metacognized–the first-person–in terms of informatic privation and heuristic misadventure.

This will inevitably strike some readers as ‘positivist,’ ‘scientistic,’ or ‘reductive,’ terms that have become scarcely more than dismissive pejoratives in certain philosophical circles, an excuse to avoid engaging what science has to say regarding their domain–the human. BBT, in other words, is bound to strike certain readers as chauvinistic, even imperial. But, if anything, BBT is bent upon dispelling views grounded in parochial sources of information–chauvinism. In fact, it is transcendental interpretation that restricts itself to nonscientific sources of information under the blanket assumption of metacognitive sufficiency, the faith that enough information of the right kind is available for actual cognition. Transcendental interpretation, in other words, remains wedded to what Kant called ‘tutelary natures.’ BBT, however, is under no such constraint; it considers both metacognitive and scientific information, understanding that the latter, on pain of supernaturalism, simply has to provide the baseline for reliable theoretical cognition (whatever that ultimately turns out to be). Thus the strange amalgam of scientific and philosophical concepts found here.

If reliable theoretical cognition requires information of the right kind and amount, then it behooves the philosopher, deconstructive or transcendental, to take account of the information their intentional rationales rely upon. If that information is primarily traditional and metacognitive–prescientific–then that philosopher needs some kind of sufficiency argument, some principled way of warranting the exclusion of scientific information. And this, I fear, has become all but impossible to do. If the sufficiency argument provided is speculative–that is, if it also relies on traditional claims and metacognitive intuitions–then it simply begs the question. If, on the other hand, it marshals information from the sciences, then it simply acknowledges the very insufficiency it is attempting to fend off.

The epoch of intentional philosophy is at an end. It will deny and declaim–it can do nothing else–but to little effect. Like all prescientific domains of discourse it can only linger and watch its credibility evaporate into New Age aether as the sciences of the brain accumulate ever more information and refine ever more instrumentally powerful interpretations of that information. It’s hard to argue against cures. Any explanatory paradigm that restores sight to the blind, returns mobility to the crippled, not to mention facilitates the compliance of the masses, will utterly dominate the commanding heights of cognition.

Far more than mere theoretical relevance is at stake here.

On BBT, all traditional and metacognitive accounts of the human are the product of extreme informatic poverty. Ironically enough, many have sought intentional asylum within that poverty in the form of a priori or pragmatic formalisms, confusing the lack of information for the lack of substantial commitment, and thus for immunity against whatever the sciences of the brain may have to say. But this just amounts to a different way of taking refuge in obscurity. What are ‘rules’? What are ‘inferences’? Unable to imagine how science could answer these questions, they presume either that science will never be able to answer them, or that it will answer them in a manner friendly to their metacognitive intuitions. Taking the history of science as its cue, BBT entertains no such hopes. It sees these arguments for what they happen to be: attempts to secure the sufficiency of low-dimensional, metacognitive information, to find gospel in a peephole glimpse.

The same might be said of deconstruction. Despite their purported radicality, trace and differance likewise belong to a low-dimensional conceptual apparatus stemming from a noocentric account of intentional sufficiency. ‘Mystic writing pad’ or no, Derrida remains a philosopher of experience as opposed to nature. As David Roden has noted, “while Derrida’s work deflates the epistemic primacy of the ‘first person,’ it exhibits a concern with the continuity of philosophical concepts that is quite foreign to the spirit of contemporary naturalism” (“The Subject”). The ‘advantage’ deconstruction enjoys, if it can be called such, lies in its relentless demonstration of the insufficiency plaguing all attempts to master meaning, including its own. But as we have seen above, it can only do so from the fringes of meaning, as a ‘quasi-transcendentally’ informed procedure of reading. Derrida is, strangely enough, like Hume in this regard, only one forewarned of the transcendental apologetics of Kant.

Careful readers will have already noted a number of striking parallels between the preceding account of BBT and the deconstructive paradigm. Cognition (or the collection of fractionate heuristic subsystems we confuse for such) only has recourse to whatever information is available, thus rendering sufficiency the perennial default. Even when cognition has recourse to supplementary information pertaining to the insufficiency of information, information is processed, which is to say, the resulting complex (which might be linguaformally expressed as, ‘Information x is insufficient for reliable cognition’) is taken as sufficient insofar as the system takes it up at all. Informatic insufficiency is parasitic on sufficiency, as it has to be, given the mechanistic nature of neural processing: for any circuit involving inputs and outputs, differences must be made. Sufficient or not, the system, if it is to function at all, must take its inputs as sufficient.

(I should pause to note a certain temptation at this juncture, one perhaps triggered by the use of the term ‘supplementary.’ One can very easily deconstruct the above set of claims the way one can deconstruct any set of theoretical claims, scientific or speculative. But where the deconstruction of speculative claims possesses or at least seems to possess clear speculative effects, the deconstruction of scientific claims does not, as a rule, possess any scientific effects. BBT, recall, is an empirical theory, and as such stands beyond the pale of decisive speculative judgment (if indeed, there is such a thing).)

The cognition of informatic insufficiency always requires sufficiency. To ‘know’ that you are ‘wrong’ is to be right about being wrong. The positivity of conscious experience and cognition follows from the mechanical nature of brain function, the mundane fact that differences must be made. Now, whatever ‘consciousness’ happens to be as a natural phenomenon (apart from our hitherto fruitless metacognitive attempts to make sense of it), it pretty clearly involves the ‘broadcasting’ or ‘integration’ of information (systematic differences made) from across the brain. At any given instant, conscious experience and cognition access only an infinitesimal fraction of the information processed by the brain: conscious experience and cognition, in other words, possess any number of informatic limits. Conscious experience and cognition are informatically encapsulated at any given moment. It’s not just that huge amounts of information are simply not available to the conscious subsystems of the brain, it’s that information allowing the cognition of those subsystems for what they are isn’t available. The positivity of conscious experience and cognition turns on what might be called medial neglect, the structural inability to consciously experience or cognize the mechanisms behind conscious experience and cognition.

Medial neglect means the mechanics of the system are not available to the system. The importance of this observation cannot be overstated. The system cannot cognize itself the way it cognizes its environments, which is to say, causally, and so must cognize itself otherwise. What we call ‘intentionality’ is this otherwise. Most of the peculiarities of this ‘cognition otherwise’ stem from the structural inability of the system to track its own causal antecedents. The conscious subsystems of the brain cannot cognize the origins of any of their processes. Moreover, they cannot even cognize the fact that this information is missing. Medial neglect means conscious experience and cognition are constituted by mechanistic processes that structurally escape conscious experience and cognition. And this is tantamount to saying that consciousness is utterly blind to its own irreflexivity.
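The structure of medial neglect can be caricatured in a few lines, a deliberately crude sketch with names of my own invention, not anything belonging to BBT proper. A consumer subsystem receives only broadcast results; nothing in the broadcast carries information about the producing machinery, nor even about that information's absence:

```python
# A crude sketch of 'medial neglect': a consumer receives a
# broadcast of results, but the mechanism producing them never
# appears in the broadcast. All names are illustrative.

class Workspace:
    """Broadcasts results; the producing mechanism stays hidden."""
    def __init__(self):
        self._trace = []  # medial machinery: invisible to consumers

    def produce(self, x):
        y = x * x                          # some hidden mechanical process
        self._trace.append(('square', x, y))
        return {'content': y}              # only the result is broadcast

class Consumer:
    def metacognize(self, broadcast):
        # The consumer can report on content, but the broadcast
        # contains no mark of the mechanism -- or of its absence --
        # so any report about the source must be confabulated.
        return f"I directly apprehend {broadcast['content']}"

ws, me = Workspace(), Consumer()
print(me.metacognize(ws.produce(7)))  # "I directly apprehend 49"
```

Whatever the consumer reports about the source of its contents, the broadcast gives it nothing to be right or wrong about; the machinery in `_trace` is simply not part of its world.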

And as we saw above, in the absence of differences we experience/cognize identity.

On BBT, then, the ‘fundamental synthesis’ described by Hagglund is literally a kind of ‘flicker fusion,’ a metacognitive presumption of identity where there is none. It is a kind of mandatory illusion: illusory because it egregiously mistakes what is the case, and mandatory because, like the illusion of continuous motion in film, it involves basic structural capacities that cannot be circumvented and so ‘seen through.’ But where with film environmental cognition blurs the distinction between discrete frames into an irreflexive, sensible continuity, the ‘trick’ played upon metacognition is far more profound. The brain has evolved to survive and exploit environmental change, irreflexivity. First and foremost, human cognition is the evolutionary product of the need to track environmental irreflexivity with enough resolution and fidelity to identify and avoid threats and to identify and exploit opportunities. You could say it is an ensemble of irreflexivities (mechanisms) parasitic upon the greater irreflexivity of its environment (or to extend Craver’s terms, the brain is a component of the ‘brain/environment’). Lacking the information required to cognize temporal difference, it perceives temporal continuity. Our every act of cognition is at once irrevocable and blind to itself as irrevocable. Because it is blind to itself, it cannot, temporally speaking, differentiate itself from itself. As a result, such acts seem to arise from some reflexive source. The absence of information, once again, means the absence of distinction, which means identity. The now, the hitherto perplexing and inexplicable fusion of distinct times, becomes the keel of subjectivity, something that appears (to metacognition at least) to be a solitary, reflexive exception in a universe entirely irreflexive otherwise.

This is the cognitive illusion that both Kant and Husserl attempted to conceptually regiment, Kant by positing the transcendental unity of apperception, and Husserl via the transcendental ego. This is also the cognitive illusion that stands at the basis of our understanding of persons, both ourselves and others.

When combined with sufficiency, this account of reflexivity provides us with an elegant way to naturalize presence. Sufficiency means that the positivity of conscious experience and cognition ‘fills the existential screen’: there is nothing but what is experienced and cognized at any given moment. The illusion of reflexivity can be seen as a temporalization of the illusion of sufficiency: lacking the information required to relativize sufficiency to any given moment, metacognition blurs it across all times. The ‘only game in town effect’ becomes an ‘only game in time effect’ for the mere want of metacognitive information–medial neglect. The target of metacognition, conscious experience and cognition, appears to be something self-sustaining, something immediately, exhaustively self-present, something utterly distinct from the merely natural, and something somehow related to the eternal.

And with the naturalization of presence comes the naturalization of the aporetic figure of philosophy that so obsessed Derrida for the entirety of his career. Sufficiency, the fact that conscious experience and cognition ‘fills the screen,’ means that the limits of conscious experience and cognition always outrun conscious experience and cognition. Sufficiency means the boundaries of consciousness are asymptotic, ‘limits with only one side.’ The margins of your visual attention provide a great example of this. The limits of seeing can never be seen: the visual information integrated into conscious experience and cognition simply trails into ‘oblivion.’ The limits of seeing are thus visually asymptotic, though the integration of vision into a variety of other systems allows those limits to be continually, effortlessly cognized. Such, however, is not the case when it comes to the conscious subsystems of the brain as a whole. They are, once again, encapsulated. Conscious experience and cognition only exists ‘for’ conscious experience and cognition ‘within’ conscious experience and cognition. To resort to the language of representation favoured by Derrida, the limits of representation only become available via representation.

And all this, once again, simply follows from the mechanistic nature of the human brain, the brute fact that the individual mechanisms engaged in informatically comporting our organism to itself and its (social and natural) environments are engaged, and so incapable of systematically tracking their own activities, let alone the limitations besetting them. Sufficiency is asymptosis. Such tracking requires a subsequent reassignment of neurocomputational resources–it must always be deferred to a further moment that is likewise mechanically incapable of tracking its own activities. This post hoc tracking, meanwhile, literally has next to nothing that it can systematically comport itself to (or ‘track’). Thus each instant of functioning blots out the instant previous, rendering medial neglect all but complete. Both the incalculably intricate, derived nature of each instant and the passage between instants are lost, save for what scant information is buffered or stored. And so are irreflexive repetitions whittled into anosognosiac originals.

Theoretical metacognition, or philosophical reflection, confronts the compelling intuition that it is originary, that it stands outside the irreflexive order of its environments, that it is in some sense undetermined or free. Precisely because it is mechanistic, it confuses itself for ‘spirit,’ for something other than nature. As it comes to appreciate (through the accumulation of questions (such as those posed by Hume)) the medial insufficiency of conscious experience as metacognized, it begins to posit medial prosthetics that dwell in the asymptotic murk, ‘conditions of possibility,’ formal rationalizations of conscious experience as metacognized. Asymptosis is conceived as transcendence in the Kantian sense (as autoaffection, apperceptive unity, and so on), forms that appeal to philosophical intuition because of the way they seem to conserve the illusions compelled by informatic neglect. But since the assumption of metacognitive identity is an artifact of missing information, which is to say, cognitive incapacity, the accumulation of questions (which provide information regarding the absence of information) and the accumulation of information pertaining to irreflexivity (which, like external relationality, always requires more information to cognize) inevitably cast these transcendental rationalizations into doubt. Thus the strange inevitability of deconstruction (or negative dialectics, or the ‘philosophies of difference’ more generally), the convergence of philosophical imagination upon the intuition of some obdurate, inescapable irreflexivity concealed at the very root of conscious experience and cognition.

Deconstruction can be seen as a ‘low resolution’ (strategic, provisional) recognition of the medial mechanicity that underwrites the metacognitive illusion of ‘meaning.’ Trace and differance are emissaries of irreflexivity, an expression of the neuromechanics of conscious experience and cognition given only the limited amount of information available to conscious experience and cognition. As mere glimmers of our mechanistic nature, however, they can only call attention to the insufficiencies that haunt the low-dimensional distortions of the soul. Rather than overthrow the illusion of meaning, they can at most call attention to the way it ‘wobbles,’ thus throwing a certain image of subjective semantic stability and centrality into question. Deconstruction, for all its claims to ‘radicalize,’ remains a profoundly noocentric philosophy, capable of conceiving the irreflexive only as the ‘hidden other’ of the reflexive. The claim to radicality, if anything, cements its status as a profoundly nooconservative mode of philosophical thought. Deconstruction becomes, as we can so clearly see in Hagglund, a form of intellectual hygiene. ‘Deconstructed’ intentional concepts begin to seem like immunized intentional concepts, ‘subjects’ and ‘norms’ and ‘meanings’ that are all the sturdier for referencing their ‘insufficiency’ in theoretical articulations that take them as sufficient all the same. Thus the oxymoronic doubling evinced by ‘deconstructive ethics’ or ‘deconstructive politics.’

The most pernicious hallucination, after all, is the hallucination that claims to have been seen through.

The present account, however, does not suffer happy endings, no matter how aleatory or conditional. According to BBT, nothing has ever been, nor ever will be, ‘represented.’ Certainly our brains mechanically recapitulate myriad structural features of their environments, but at no point do these recapitulations inherit the occult property of aboutness. With BBT, these phantasms that orthogonally double the world become mere mechanisms, environmentally continuous components that may or may not covary with their environments, just more ramshackle life, the product of over 3 billion years of blind guessing. We become lurching towers of coincidence, happenstance conserved in meat. Blind to neurofunctionality, the brain’s metacognitive systems have no choice but to characterize the relation between the environmental information accumulated and those environments in acausal, nonmechanical terms. Sufficiency assures that this metacognitive informatic poverty will seem a self-evident plenum. The swamp of causal complexity is drained. The fantastically complicated mechanistic interactions constituting the brain/environment vanish into the absolute oblivion of the unknown unknown, stranding metacognition with the binary cartoon of a ‘subject’ ‘intending’ some ‘object.’ Statistical gradations evaporate into the procrustean discipline of either/or.

This, if anything, is the image I want to leave you with, one where the traditional concepts of philosophy can be seen for the granular grotesqueries they are, the cartoonish products of a metacognition pinioned between informatic scarcity and heuristic incapacity. I want to leave you with, in effect, an entirely new way to conceive philosophy, one adequate to the new and far more terrifying ‘Enlightenment’ presently revolutionizing the world around us. Does anyone really think their particular, prescientific accounts of the soul will escape unscathed or emerge redeemed by what the sciences of the brain will reveal over the coming decades? Certainly one can argue points with BBT, a position whose conclusions are so dismal that I cannot myself fully embrace them. What one cannot argue against is the radical nature of our times, the fact that science has at long last colonized the soul, that it is, even now, doing what it always does when it breaches some traditional domain of discourse: replacing our always simplistic and typically flattering assumptions with portraits of bottomless intricacy and breathtaking indifference. We are just beginning, as a culture, to awaken to the fact that we are machines. Throw words against this prospect if you must. The engineers and the institutions that own them will find you a most convenient distraction.

[Image: Wire Finger]

*Originally posted 02/27/2013

Braced for Launch

by rsbakker

Happy New Year all. I greeted 2017 with the Norovirus, so I guess you could say I’m not liking the omens so far. Either I’ll be immune when the shit starts flying in Washington or I’ll fold like a napkin.

I did have occasion to reread my interview of David Roden for Figure/Ground a while back, and I thought it worth linking because I believe the points of contention between us could very well map the philosophy of the future. I think the heuristic dependence of intentional cognition on ancestral cognitive backgrounds means intentional cognition has no hope of surviving the ongoing (and accelerating) technological renovation of those backgrounds. The posthuman, whatever it amounts to, will crash our every attempt to ethically understand it. David thinks my pessimism is premature, that ethical cognition, at least, can be knapped/exapted (via minimal notions of agency and value) into something that can leap the breach between human and posthuman. You decide.

Parental Advisory: Contains Grammatical Violence, Excessive Jargon, and Scenes of Conceptual Nudity.

Donald Trump and the Failure of 20th Century Progressive Culture

by rsbakker


In the middle of an economic expansion, America has elected a bigot and a misogynist as its president. Why pretend otherwise? Unity? Tell me, how does one unify behind a bigot and a misogynist? Millions of white Americans have poured the world’s future into a cracked bowl, and now we all hold our breath and wonder what comes next, while the wonks scramble looking for excuses for what went wrong. Everyone is certain to blame the economics of deindustrialization, but the polling data suggests that Trump supporters are actually more affluent than Clinton supporters. This suggests the issue is actually more cultural than economic—which is a sobering thought. Even if economics had been the primary driving factor, Donald Trump would represent an almost unimaginable failure of progressive culture.

Since this is a failure I have been ranting about for a long time, I’m going to be really pious, and play the part of pompous Canadian observer (as if there were any other kind). We have our own Donald up here, you know, our own Mr. Tell-it-how-it-is, both beloved and despised. His name is Donald Cherry and he’s famous in these parts for saying, ‘You heard it here first!’

I told you so.

It’s been surreal watching the past few days unfold. I mean, my whole artistic project turns on working against the very processes we have witnessed these past 18 months. Christ, I even waged blog war against the alt-right, convinced that they lay at the root of the very cultural transformation we’re witnessing now. How is it possible to feel at once smug and horrified?

Because I do.

For years now I’ve been shouting from the fringe, shouting, warning about the political and social consequences of academic ingroup excesses. I’ve been telling humanities academics that what they called ‘critical thinking’ was primarily an ingroup conceit, a way to be both morally and intellectually self-righteous at once, and that this, inevitably, was contributing to a vast process of counter-identification, actually stoking the very racism and sexism they claimed to be combating. Trump is what happens when you claim to be whipping yourself while leaving welts only on those (apparently) lacking the ability to defend themselves in academic contexts. For more than a decade now I’ve been telling literary writers that ‘literature’ that challenges no one real is quite simply not literature, but genre. For years now I’ve been warning about the way the accelerating pace of change increases the appeal of atavism, how the web allows these atavisms to incubate, to immunize themselves from rational appeal, how only a creative class that despises ingroup insularity could have any hope of spanning these growing divides. What we need is real literary creation: narratives (in all media) that both challenge and reach out.

Ingroup artistic creation needs to be identified as part of the social short circuit operating here. Ingroup producers must either set their empty emancipatory rhetoric aside and embrace their ingroup function, or fundamentally reinvent themselves and their art… the way I’ve been trying to do.

Donald Trump’s success is the failure of progressive culture, and so, I think, the vindication of the very kind of radical revaluation propounded here at Three Pound Brain. Your cultural insularity is only as laudable as its cultural consequences. It’s the arrogance I’ve personally encountered countless times pounding at the door, the idea that these parochial enclaves of likeminded tastes are the only places that matter, the only places where merit could find ‘serious’ adjudication. And remaining ‘serious,’ you thought, was the way to serve your culture best.

But this was just another flattering ingroup illusion. All along you were serving your subculture—yourselves—merely, no different than any other human institution, really, except you thought ‘critical thinking’ rendered you more or less exempt. All along, you’ve had no idea how pedestrian you look from the outside, a particularly egregious outgroup competitor, intoxicated by the self-evidence of your moral standing.

I tried to warn you, tried to tell you that the ecosystem of art has been irrevocably changed by technology. But hey, I’m just a fantasy author. Never mind the fact that my books actually provoke controversy, actually make their way into the hands of Trump supporters…

Now you have a preening, sneering authoritarian sociopath as your president. This is what happens when you find one another so interesting and agreeable that you forget the people who make you possible, the people who have always made you possible. You are delivered a true-blue demagogue in a time of economic expansion.

Now we’re about to see just how frail or robust the American democratic system proves to be in this, the opening stages of the internet age.

Here’s my guess at what will happen. Donald has a fantastic narrative in his head, a grandiose image of the heroic kind of President he will prove to be—bold, effective, bringing his business acumen to bear—and this will lead to a brief Republican honeymoon, and legislation virtually institutionalizing aristocracy in America will be passed. (His argument, remember, is that the easier it is for him and his buddies to get rich, the better things will be for you decent, hardworking folk.) The White House will be a circus—I mourn those Americans who always prized the dignity of the institution. He will kill climate change progress. The first conflict of interest scandals will rock him. The white working poor will cheer as their infrastructure rots and their entitlements are stripped away. More tapes and sexual misconduct accusations will surface. He will stack the Supreme Court with Scalia clones. Obamacare will be set to be revoked, but the legislation will be perpetually hung by a Congress articulating competing corporate interests. Irresistibly drawn to what flatters, Trump will begin avoiding the media, reaching directly out to his base for feedback, growing more verbally extreme in efforts to chase the fading cheer. A young black man will be shot, and Trump will drop the hammer on protesters. He will accept no responsibility for anything. There will be marches on Washington and tense standoffs. Trade sabres will be rattled, but nothing will happen because Trump is a billionaire who is up to his eyeballs in other billionaires—and has left his children as hostages in their world. He will drive wedges between Americans, simply because he is racist and misogynistic. He will reward fools with positions of power. He will attempt to install his children in positions of power. The bullshit gaffes will continue piling up, the media conspiracy will become ‘disgusting and traitorous to America,’ and not a thing will be done about immigration, outside shutting the door. His approval ratings will tank and he will start a war somewhere… and lord knows what kinds of convenient exigencies he might derive from this.

Let’s hope that ‘provoke a constitutional crisis’ and ‘launch nuclear weapons’ don’t find their way in there. Let’s hope you’ve merely found your own Berlusconi and nothing more disastrous.

If that happens, I promise I will spare you the Donald Cherry routine. I genuinely love you America, Trump supporters and all. I want to help you build the kind of culture you will need to survive the even greater upheavals to come. The technological revolution is just beginning and already progressive culture is foundering.


Look! Up in the sky!

by rsbakker



Varieties of Aboutness

by rsbakker

So I’ve swapped out the old About page for something more self-promotional. I can’t read it without cringing, tweaking and re-tweaking, so I figured it would pay to find out what eyes less invested make of the thing. It’s hard to walk magical tight-ropes when you can’t bear to look…

You would think I’d be more comfortable with the abyss by now!


by rsbakker

you were wondering why people think I’m such an asshole…

I was just checking out the Second Apocalypse trailer again (for the umpteenth time) and a link to this video was tiled in with the others. I howled at the way the two of us tried to look as though we were listening to the translations and either a) agreeing with the Spanish version of our insight, or b) expressing approval of the quality of a translation! I forgot all about it: La Semana Negra, organized by the inimitable Paco Taibo II, an annual celebration of genre in Gijon, Spain, and, get this, one of the biggest book extravaganzas in the world. What a magical time that was. Jim Sallis. Practical jokes. George and Paris. Booze, laughs, and greasy, greasy northern Spanish cuisine.

At the time George was talking about Game of Thrones being in development, and as I write this, HBO is running a spot on the television in the living room. Surreal.



The Lingering of Philosophy

by rsbakker

[Image: Nietzsche Poster]

The ‘Death of Philosophy’ is something that circulates through the arterial underbelly of culture with quite some regularity, a theme periodically goosed whenever high-profile scientific figures bother to express their attitudes on the subject. Scholars in the humanities react the same way stakeholders in any institution react when their authority and privilege are called into question: they muster rationalizations, counterarguments, and pejoratives. They rally troops with whooping war-cries of “positivism” or “scientism,” list all the fields of inquiry where science holds no sway, and within short order the whole question of whether philosophy is dead begins to look very philosophical, and the debate itself becomes evidence that philosophy is alive and well—in some respects at least.

The problem with this pattern, of course, is that terms like ‘philosophy’ or ‘science’ are so overdetermined that no one ends up talking about the same thing. For physicists like Stephen Hawking or Lawrence Krauss or Neil deGrasse Tyson, the death of philosophy is obvious insofar as the institution has become almost entirely irrelevant to their debates. There are other debates, they understand, debates where scientists are the hapless ones, but they see the process of science as an inexorable, and yes, imperialistic one. More and more debates fall within its purview as the technical capacities of science improve. They presume the institution of philosophy will become irrelevant to more and more debates as this process continues. For them, philosophy has always been something to chase away. Since the presence of philosophers in a given domain of inquiry reliably indicates scientific ignorance of important features of that domain, the relevance of philosophers is directly related to the maturity of a science.

They have history on their side.

There will always be speculation—science is our only reliable provender of theoretical cognition, after all. The question of the death of philosophy cannot be the question of the death of theoretical speculation. The death of philosophy as I see it is the death of a particular institution, a discourse anchored in the tradition of using intentional idioms and metacognitive deliverances to provide theoretical solutions. I think science is killing that philosophy as we speak.

The argument is surprisingly direct, and, I think, fatal to intentionalism, but as always, I would love to hear dissenting opinions.


1) Human cognition only has access to the effects of the systems cognized.

2) The mechanical structure of our environments is largely inaccessible.

3) Cognition exploits systematic correlations—‘cues’—between those effects that can be accessed and the systems engaged to solve for those systems.

4) Cognition is heuristic.

5) Metacognition is a form of cognition.

6) Metacognition also exploits systematic correlations—‘cues’—between those effects that can be accessed and the systems engaged to solve for those systems.

7) Metacognition is also heuristic.

8) Metacognition is the product of adventitious adaptations exploiting onboard information in various reproductively decisive ways.

9) The applicability of that ancestral information to second order questions regarding the nature of experience is highly unlikely.

10) The inability of intentionalism to agree on formulations, let alone resolve issues, evidences as much.

11) Intentional cognition is a form of cognition.

12) Intentional cognition also exploits systematic correlations—‘cues’—between those effects that can be accessed and the systems engaged to solve for those systems.

13) Intentional cognition is also heuristic.

14) Intentional cognition is the product of adventitious adaptations exploiting available onboard information in various reproductively decisive ways.

15) The applicability of that ancestral information to second order questions regarding the nature of meaning is highly unlikely.

16) The inability of intentionalism to agree on formulations, let alone resolve issues, evidences as much.

Intentional Philosophy as the Neuroscientific Explananda Problem

by rsbakker

The problem is basically that the machinery of the brain has no way of tracking its own astronomical dimensionality; it can at best track problem-specific correlational activity, various heuristic hacks. We lack not only the metacognitive bandwidth, but the metacognitive access required to formulate the explananda of neuroscientific investigation.

A curious consequence of the neuroscientific explananda problem is the glaring way it reveals our blindness to ourselves, our medial neglect. The mystery has always been one of understanding constraints, the question of what comes before we do. Plans? Divinity? Nature? Desires? Conditions of possibility? Fate? Mind? We’ve always been grasping for ourselves, I sometimes think, such was the strategic value of metacognitive capacity in linguistic social ecologies. The thing to realize is that grasping, the process of developing the capacity to report on our experience, was bootstrapped out of nothing and so comprised the sum of all there was to the ‘experience of experience’ at any given stage of our evolution. Our ancestors had to be both implicitly obvious and explicitly impenetrable to themselves past various degrees of questioning.

We’re just the next step.

What is it we think we want as our neuroscientific explananda? The various functions of cognition. What are the various functions of cognition? Nobody can seem to agree, thanks to medial neglect, our cognitive insensitivity to our cognizing.

Here’s what I think is a productive way to interpret this conundrum.

Generally what we want is a translation between the manipulative and the communicative. It is the circuit between these two general cognitive modes that forms the cornerstone of what we call scientific knowledge. A finding that cannot be communicated is not a finding at all. The thing is, this—knowledge itself—all functions in the dark. We are effectively black boxes to ourselves. In all math and science—all of it—the understanding communicated is a black box understanding, one lacking any natural understanding of that understanding.

Crazy but true.

What neuroscience is after, of course, is a natural understanding of understanding, to peer into the black box. They want manipulations they can communicate, actionable explanations of explanation. The problem is that they have only heuristic, low-dimensional, cognitive access to themselves: they quite simply lack the metacognitive access required to resolve interpretive disputes, and so remain incapable of formulating the explananda of neuroscience in any consensus commanding way. In fact, a great many remain convinced, on intuitive grounds, that the explananda sought, even if they could be canonically formulated, would necessarily remain beyond the pale of neuroscientific explanation. Heady stuff, given the historical track record of the institutions involved.

People need to understand that the fact of a neuroscientific explananda problem is the fact of our outright ignorance of ourselves. We quite simply lack the information required to decide what it is we’re explaining. What we call ‘philosophy of mind’ is a kind of metacognitive ‘crash space,’ a point where our various tools seem to function, but nothing ever comes of it.

The low-dimensionality of the information begets underdetermination, underdetermination begets philosophy, philosophy begets overdetermination. The idioms involved become ever more plastic, more difficult to sort and arbitrate. Crash space bloats. In a sense, intentional philosophy simply is the neuroscientific explananda problem, the florid consequence of our black box souls.

The thing that can purge philosophy is the thing that can tell you what it is.

Alien Philosophy

by rsbakker

[Consolidated, with pretty pictures]

[Image: Face on moon]

The highest species concept may be that of a terrestrial rational being; however, we shall not be able to name its character because we have no knowledge of non-terrestrial rational beings that would enable us to indicate their characteristic property and so to characterize this terrestrial being among rational beings in general. It seems, therefore, that the problem of indicating the character of the human species is absolutely insoluble, because the solution would have to be made through experience by means of the comparison of two species of rational being, but experience does not offer us this. (Kant: Anthropology from a Pragmatic Point of View, 225)

[Image: Little alien sasquatch]


Are there alien philosophers orbiting some faraway star, opining in bursts of symbolically articulated smells, or parsing distinctions-without-differences via the clasp of neural genitalia? What would an alien philosophy look like? Do we have any reason to think we might find some of them recognizable? Do the Greys have their own version of Plato? Is there a little green Nietzsche describing little green armies of little green metaphors?



I: The Story Thus Far

A couple years back, I published a piece in Scientia Salon, “Back to Square One: Toward a Post-intentional Future,” that challenged the intentional realist to warrant their theoretical interpretations of the human. What is the nature of the data that drives their intentional accounts? What kind of metacognitive capacity can they bring to bear?

I asked these questions precisely because they cannot be answered. The intentionalist has next to no clue as to the nature, let alone the provenance, of their data, and even less inkling as to the metacognitive resources at their disposal. They have theories, of course, but it is the proliferation of theories that is precisely the problem. Make no mistake: the failure of their project, their consistent inability to formulate their explananda, let alone provide any decisive explanations, is the primary reason why cognitive science devolves so quickly into philosophy.

But if chronic theoretical underdetermination is the embarrassment of intentionalism, then theoretical silence has to be the embarrassment of eliminativism. If meaning realism offers too much in the way of theory—endless, interminable speculation—then meaning skepticism offers too little. Absent plausible alternatives, intentionalists naturally assume intrinsic intentionality is the only game in town. As a result, eliminativists who use intentional idioms are regularly accused of incoherence, of relying upon the very intentionality they’re claiming to eliminate. Of course eliminativists will be quick to point out the question-begging nature of this criticism: They need not posit an alternate theory of their own to dispute intentional theories of the human. But they find themselves in a dialectical quandary, nonetheless. In the absence of any real theory of meaning, they have no substantive way of actually contributing to the domain of the meaningful. And this is the real charge against the eliminativist, the complaint that any account of the human that cannot explain the experience of being human is barely worth the name. [1] Something has to explain intentional idioms and phenomena, their apparent power and peculiarity; if not intrinsic or original intentionality, then what?

My own project, however, pursues a very different brand of eliminativism. I started my philosophical career as an avowed intentionalist, a one-time Heideggerean and Wittgensteinian. For decades I genuinely thought philosophy had somehow stumbled into ‘Square Two.’ No matter what doubts I entertained regarding this or that intentional account, I was nevertheless certain that some intentional account had to be right. I was invested, and even though the ruthless elegance of eliminativism made me anxious, I took comfort in the standard shibboleths and rationalizations. Scientism! Positivism! All theoretical cognition presupposes lived life! Quality before quantity! Intentional domains require intentional yardsticks!

Then, in the course of writing a dissertation on fundamental ontology, I stumbled across a new, privative way of understanding the purported plenum of the first-person, a way of interpreting intentional idioms and phenomena that required no original meaning, no spooky functions or enigmatic emergences—nor any intentional stances for that matter. Blind Brain Theory begins with the assumption that theoretically motivated reflection upon experience co-opts neurobiological resources adapted to far different kinds of problems. As a co-option, we have no reason to assume that ‘experience’ (whatever it amounts to) yields what philosophical reflection requires to determine the nature of experience. Since the systems are adapted to discharge far different tasks, reflection has no means of determining scarcity and so generally presumes sufficiency. It cannot source the efficacy of rules so rules become the source. It cannot source temporal awareness so the now becomes the standing now. It cannot source decisions so decisions (the result of astronomically complicated winner-take-all processes) become ‘choices.’ The list goes on. From a small set of empirically modest claims, Blind Brain Theory provides what I think is the first comprehensive, systematic way to both eliminate and explain intentionality.

In other words, my reasons for becoming an eliminativist were abductive to begin with. I abandoned intentionalism, not because of its perpetual theoretical disarray (though this had always concerned me), but because I became convinced that eliminativism could actually do a better job explaining the domain of meaning. Where old-school ‘dogmatic eliminativists’ argue that meaning must be natural somehow, my own ‘critical eliminativism’ shows how. I remain horrified by this how, but then I also feel like a fool for ever thinking the issue would end any other way. If one takes mediocrity seriously, then we should expect that science will explode, rather than canonize, our prescientific conceits, no matter how near or dear.

But how to show others? What could be more familiar, more entrenched than the intentional philosophical tradition? And what could be more disparate than eliminativism? To quote Dewey, “The greater the gap, the disparity, between what has become a familiar possession and the traits presented in new subject-matter, the greater is the burden imposed upon reflection” (Experience and Nature, ix). Since the use of exotic subject matters to shed light on familiar problems is as powerful a tool for philosophy as for my chosen profession, speculative fiction, I propose to consider the question of alien philosophy, or ‘xenophilosophy,’ as a way to ease the burden. What I want to show is how, reasoning from robust biological assumptions, one can plausibly claim that aliens—call them ‘Thespians’—would also suffer their own version of our (hitherto intractable) ‘problem of meaning.’ The degree to which this story is plausible, I will contend, is the degree to which critical eliminativism deserves serious consideration. It’s the parsimony of eliminativism that makes it so attractive. If one could combine this parsimony with a comprehensive explanation of intentionality, then eliminativism would very quickly cease to be a fringe opinion.



II: Aliens and Philosophy

Of course, the plausibility of humanoid aliens possessing any kind of philosophy requires the plausibility of humanoid aliens. In popular media, aliens are almost always exotic versions of ourselves, possessing their own exotic versions of the capacities and institutions we happen to have. This is no accident. Science fiction is always about the here and now—about recontextualizations of what we know. As a result, the aliens you meet tend to seem suspiciously humanoid, psychologically if not physically. Spock always has some ‘mind’ with which to ‘meld’. To ask the question of alien philosophy, one might complain, is to buy into this conceit, which although flattering, is almost certainly not true.

And yet the environmental filtration of mutations on earth has produced innumerable examples of convergent evolution, different species evolving similar morphologies and functions, the same solutions to the same problems, using entirely different DNA. As you might imagine, however, the notion of interstellar convergence is a controversial one. [2] Supposing the existence of extraterrestrial intelligence is one thing—cognition is almost certainly integral to complex life elsewhere in the universe—but we know nothing about the kinds of possible biological intelligences nature permits. Short of actual contact with intelligent aliens, we have no way of gauging how far we can extrapolate from our case. [3] All too often, ignorance of alternatives dupes us into making ‘only game in town assumptions,’ so confusing mere possibility with necessity. But this debate need not worry us here. Perhaps the cluster of characteristics we identify with ‘humanoid’ expresses a high-probability recipe for evolving intelligence—perhaps not. Either way, our existence proves that our particular recipe is on file, that aliens we might describe as ‘humanoid’ are entirely possible.

So we have our humanoid aliens, at least as far as we need them here. But the question of what alien philosophy looks like also presupposes we know what human philosophy looks like. In “Philosophy and the Scientific Image of Man,” Wilfrid Sellars defines the aim of philosophy as comprehending “how things in the broadest possible sense of the term hang together in the broadest possible sense of the term” (1). Philosophy famously attempts to comprehend the ‘big picture.’ The problem with this definition is that it overlooks the relationship between philosophy and ignorance, and so fails to distinguish philosophical inquiry from scientific or religious inquiry. Philosophy is invested in a specific kind of ‘big picture,’ one that acknowledges the theoretical/speculative nature of its claims, while remaining beyond the pale of scientific arbitration. Philosophy is better defined, then, as the attempt to comprehend how things in general hang together in general absent conclusive information.

All too often philosophy is understood in positive terms, either as an archive of theoretical claims, or as a capacity to ‘see beyond’ or ‘peer into.’ On this definition, however, philosophy characterizes a certain relationship to the unknown, one where inquiry eschews supernatural authority, and yet lacks the methodological, technical, and institutional resources of science. Philosophy is the attempt to theoretically explain in the absence of decisive warrant, to argue general claims that cannot, for whatever reason, be presently arbitrated. This is why questions serve as the basic organizing principles of the institution, the shared boughs from which various approaches branch and twig in endless disputation. Philosophy is where we ponder the general questions we cannot decisively answer, grapple with ignorances we cannot readily overcome.



III: Evolution and Ecology

A: Thespian Nature

It might seem innocuous enough to define philosophy in privative terms, as the attempt to cognize in conditions of information scarcity, but doing so turns out to be crucial to our ability to make guesses regarding potential alien analogues. This is because it transforms the question of alien philosophy into a question of alien ignorance. If we can guess at the kinds of ignorance a biological intelligence would suffer, then we can guess at the kinds of questions they would ask, as well as the kinds of answers that might occur to them. And this, as it turns out, is perhaps not so difficult as one might suppose.

The reason is evolution. Thanks to evolution, we know that alien cognition would be bounded cognition, that it would consist of ‘good enough’ capacities adapted to multifarious environmental and reproductive impediments. Taking this ecological view of cognition, it turns out, allows us to make a good number of educated guesses. (And recall: plausibility is all we’re aiming for here.)

So for instance, we can assume tight symmetries between the sensory information accessed, the behavioural resources developed, and the impediments overcome. If gamma rays made no difference to their survival, they would not perceive them. Gamma rays, for Thespians, would be unknown unknowns, at least pending the development of alien science. The same can be said for evolution, planetary physics—pretty much any instance of theoretical cognition you can adduce. Evolution assures that cognitive expenditures, the ability to intuit this or that, will always be bound in some manner to some set of ancestral environments. Evolution means that information that makes no reproductive difference makes no biological difference.

An ecological view, in other words, allows us to naturalistically motivate something we might have been tempted to assume outright: original naivete. The possession of sensory and cognitive apparatuses comparable to our own means Thespians will possess a humanoid neglect structure, a pattern of ignorances they cannot even begin to question, that is, pending the development of philosophy. The Thespians would not simply be ignorant of the microscopic and macroscopic constituents and machinations explaining their environments, they would be oblivious to them. Like our own ancestors, they wouldn’t even know they didn’t know.

Theoretical knowledge is a cultural achievement. Our Thespians would have to learn the big picture details underwriting their immediate environments, undergo their own revolutions and paradigm shifts as they accumulate data and refine interpretations. We can expect them to possess an implicit grasp of local physics, for instance, but no explicit, theoretical understanding of physics in general. So Thespians, it seems safe to say, would have their own version of natural philosophy, a history of attempts to answer big picture questions about the nature of Nature in the absence of decisive data.

Not only can we say their nascent, natural theories will be underdetermined, we can also say something about the kinds of problems Thespians will face, and so something of the shape of their natural philosophy. For instance, needing only the capacity to cognize movement within inertial frames, we can suppose that planetary physics would escape them. Quite simply, without direct information regarding the movement of the ground, the Thespians would have no sense of the ground changing position. They would assume that their sky was moving, not their world. Their cosmological musings, in other words, would begin by supposing ‘default geocentrism,’ an assumption that would only require rationalization once others, pondering the movement of the skies, began posing alternatives.

One need only read On the Heavens to appreciate how the availability of information can constrain a theoretical debate. Given the imprecision of the observational information at his disposal, for instance, Aristotle’s stellar parallax argument becomes well-nigh devastating. If the earth revolves around the sun, then surely such a drastic change in position would impact our observations of the stars, the same way driving into a city via two different routes changes our view of downtown. But Aristotle, of course, had no decisive way of fathoming the preposterous distances involved—nor did anyone, until Galileo turned his Dutch Spyglass to the sky. [4]
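A back-of-the-envelope calculation, using modern values Aristotle of course lacked (Proxima Centauri at roughly 4.24 light-years, naked-eye angular resolution of roughly one arcminute), shows just how devastating the parallax argument was on the evidence then available: the annual parallax of even the nearest star falls dozens of times below what the unaided eye can resolve.

```python
import math

AU_PER_LY = 63_241.1          # astronomical units in one light-year
LY_PROXIMA = 4.24             # distance to the nearest star, in light-years
ARCSEC_PER_RAD = 206_265.0    # arcseconds in one radian
NAKED_EYE_ARCSEC = 60.0       # ~1 arcminute: rough naked-eye resolution

def parallax_arcsec(distance_ly: float) -> float:
    """Annual parallax angle: a 1 AU baseline viewed from the star's distance."""
    distance_au = distance_ly * AU_PER_LY
    return math.atan(1.0 / distance_au) * ARCSEC_PER_RAD

p = parallax_arcsec(LY_PROXIMA)
print(f"parallax of nearest star: {p:.2f} arcsec")
print(f"shortfall below naked-eye resolution: {NAKED_EYE_ARCSEC / p:.0f}x")
```

The result (under an arcsecond, against a naked-eye threshold of about a minute of arc) is why stellar parallax went undetected until Bessel's telescopic measurements in the nineteenth century; without instruments, Aristotle's inference was the reasonable one.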

Aristotle, in other words, was victimized not so much by poor reasoning as by various perspectival illusions following from a neglect structure we can presume our Thespians share. And this warrants further guesses. Consider Aristotle’s claim that the heavens and the earth comprise two distinct ontological orders. Of course purity and circles rule the celestial, and of course grit and lines rule the terrestrial—that is, given the evidence of the naked eye from the surface of the earth. The farther away something is, the less information observation yields, the fewer distinctions we’re capable of making, the more uniform and unitary it is bound to seem—which is to say, the less earthly. An inability to map intuitive physical assumptions onto the movements of the firmament, meanwhile, simply makes those movements appear all the more exceptional. In terms of the information available, it seems safe to suppose our Thespians would at least face the temptation of Aristotle’s long-lived ontological distinction.

I say ‘temptation,’ because certainly any number of caveats can be raised here. Heliocentrism, for instance, is far more obvious in our polar latitudes (where the earth’s rotation is as plain as the summer sun in the sky), so there are observational variables that could have drastically impacted the debate even in our own case. Who knows? If it weren’t for the consistent failure of ancient heliocentric models to make correct predictions (the models assumed circular orbits), things could have gone differently in our own history. The problem of where the earth resides in the whole might have been short-lived.

But it would have been a problem all the same, simply because the motionlessness of the earth and the relative proximity of the heavens would have been our (erroneous) default assumptions. Bounded cognition suggests our Thespians would find themselves in much the same situation. Their world would feel motionless. Their heavens would seem to consist of simpler stuff following different laws. Any Thespian arguing heliocentrism would have to explain these observations away, argue how they could be moving while standing still, how the physics of the ground belongs to the physics of the sky.

We can say this because, thanks to an ecological view, we can make plausible empirical guesses as to the kinds of information and capacities Thespians would have available. Not only can we predict what would have remained unknown unknowns for them, we can also predict what might be called ‘unknown half-knowns.’ Where unknown unknowns refer to things we can’t even question, unknown half-knowns refer to theoretical errors we cannot perceive simply because the information required to do so remains—you guessed it—unknown unknown.

Think of Plato’s allegory of the cave. The chained prisoners confuse the shadows for everything because, unable to move their heads from side to side, they just don’t ‘know any different.’ This is something we understand so intuitively we scarcely ever pause to ponder it: the absence of information or cognitive capacity has positive cognitive consequences. Absent certain difference making differences, the ground will be cognized as motionless rather than moving, and celestial objects will be cognized as simples rather than complex entities in their own right. The ground might as well be motionless and the sky might as well be simple as far as evolution is concerned. Once again, distinctions that make no reproductive difference make no biological difference. Our lack of radio telescope eyes is no genetic or environmental fluke: such information simply wasn’t relevant to our survival.

This means that a propensity to theorize ‘ground/sky dualism’ is built into our biology. This is quite an incredible claim, if you think about it, but each step in our path has been fairly conservative, given that mere plausibility is our aim. We should expect Thespian cognition to be bounded cognition. We should expect them to assume the ground motionless, and the constituents of the sky simple. We can suppose this because we can suppose them to be ignorant of their ignorances, just as we were (and remain). Cognizing the ontological continuity of heaven and earth requires the proper data for the proper interpretation. Given a roughly convergent sensory predicament, it seems safe to say that aliens would be prone as we were to mistake differences in signal for differences in being, and so would have to discover the universality of nature the same as we did.

But if we can assume our Thespians—or at least some of them—would be prone to misinterpret their environments the way we did, what about themselves? For centuries now humanity has been revising and sharpening its understanding of the cosmos, to the point of drafting plausible theories regarding the first second of creation, and yet we remain every bit as stumped regarding ourselves as Aristotle. Is it fair to say that our Thespians would suffer the same millennial myopia?

Would they have their own version of our interminable philosophy of the soul?



B: Thespian Souls

Given a convergent environmental and biological predicament, we can suppose our Thespians would have at least flirted with something resembling Aristotle’s dualism of heaven and earth. But as I hope to show, the ecological approach pays even bigger theoretical dividends when one considers what has to be the primary domain of human philosophical speculation: ourselves.

With evolutionary convergence, we can presume our Thespians would be eusocial, [5] displaying the same degree of highly flexible interdependence as us. This observation, as we shall see, possesses some startling consequences. Cognitive science is awash in ‘big questions’ (philosophy), among them the problem of what is typically called ‘mindreading,’ our capacity to explain/predict/manipulate one another on the basis of behavioural data alone. How do humans regularly predict the output of something so preposterously complicated as human brains on the basis of so little information?

The question is equally applicable to our Thespians, who would, like humans, possess formidable socio-cognitive capacities. As potent as those capacities were, however, we can also suppose they would be bounded, and—here’s the thing—radically so. When one Thespian attempts to cognize another, they, like us, will possess no access whatsoever to the biological systems actually driving behaviour. This means that Thespians, like us, would need to rely on so-called ‘fast and frugal heuristics’ to solve each other. [6] That is to say they would possess systems geared to the detection of specific information structures, behavioural precursors that reliably correlate with, as opposed to cause, various behavioural outcomes. In other words, we can assume that Thespians will possess a suite of powerful, special purpose tools adapted to solving systems in the absence of causal information.
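The ‘fast and frugal’ idea can be made concrete with a toy sketch in the style of Gigerenzer’s ‘take the best’ heuristic (the cue names and data below are invented for illustration): the rule exploits a few reliably correlated cues, checked in order of presumed validity, and consults no causal model of the system being predicted whatsoever.

```python
# A minimal sketch of a "take the best" style heuristic: decide between two
# options by checking cues in order of validity and stopping at the first cue
# that discriminates. No causal machinery is modelled, only correlations.

def take_the_best(cues_a, cues_b, cue_order):
    """Return 'a', 'b', or 'tie'. cues_* map cue names to 0/1 values."""
    for cue in cue_order:                     # cues ranked by assumed validity
        va, vb = cues_a.get(cue, 0), cues_b.get(cue, 0)
        if va != vb:                          # first discriminating cue decides
            return "a" if va > vb else "b"
    return "tie"                              # nothing discriminates: defer

# Hypothetical behavioural cues for predicting which of two agents will flee:
agent_a = {"bared_teeth": 0, "raised_hackles": 1, "averted_gaze": 1}
agent_b = {"bared_teeth": 0, "raised_hackles": 0, "averted_gaze": 1}
order = ["bared_teeth", "raised_hackles", "averted_gaze"]

print(take_the_best(agent_a, agent_b, order))
```

The point of the sketch is the neglect: the predictor can be systematically effective while remaining entirely blind to the high-dimensional biology generating the cues, which is precisely the predicament attributed to Thespian social cognition here.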

Evolutionary convergence means Thespians would understand one another (as well as other complex life) in terms that systematically neglect their high-dimensional, biological nature. As suggestive as this is, things get really interesting when we consider the way Thespians pose the same basic problem of computational intractability (the so-called ‘curse of dimensionality’) to themselves as they do to their fellows. The constraints pertaining to Thespian social cognition, in other words, also apply to Thespian metacognition, particularly with respect to complexity. Each Thespian, after all, is just another Thespian, and so poses the same basic challenge to metacognition as they pose to social cognition. By sheer dint of complexity, we can expect the Thespian brain would remain opaque to itself as such. This means something that will turn out to be quite important: namely that Thespian self-understanding, much like ours, would systematically neglect their high-dimensional, biological nature. [7]

This suggests that life, and intelligent life in particular, would increasingly stand out as a remarkable exception as the Thespians cobbled together a mechanical understanding of nature. Why so? Because it seems a stretch to suppose they would possess a capacity so extravagant as accurate ‘meta-metacognition.’ Lacking such a capacity would strand them with disparate families of behaviours and entities, each correlated with different intuitions, which would have to be recognized as such before any taxonomy could be made. Some entities and behaviours could be understood in terms of mechanical conditions, while others could not. So as extraordinary as it sounds, it seems plausible to think that our Thespians, in the course of their intellectual development, would stumble across some version of their own ‘fact-value distinction.’ All we need do is posit a handful of ecological constraints.

But of course things aren’t nearly so simple. Metacognition may solve Thespians in the same ‘fast and frugal’ manner as social cognition, but it entertains a far different relationship to its putative target. Unlike social cognition, which tracks functionally distinct systems (others) via the senses, metacognition is literally hardwired to the systems it tracks. So even though metacognition faces the same computational challenge as social cognition—cognizing a Thespian—it requires a radically different set of tools to do so. [8]

It serves to recall that evolved intelligence is environmentally oriented intelligence. Designs thrive or vanish depending on their ability to secure the resources required to successfully reproduce. Because of this, we can expect that all intelligent aliens, not just Thespians, would possess high-dimensional cognitive relations with their environments. Consider our own array of sensory modalities, how the environmental here and now ‘hogs bandwidth.’ The degree to which your environment dominates your experience is the degree to which you’re filtered to solve your environments. We live in the world simply because we’re distilled from it, the result of billions of years of environmental tuning. We can presume our aliens would be thoroughly ‘in the world’ as well, that the bulk of their cognitive capacities would be tasked with the behavioural management of their immediate environments for similar evolutionary reasons.

Since all cognitive capacities are environmentally selected, we can expect whatever basic metacognitive capacity the Thespians possess will also be geared to the solution of environmental problems. Thespian metacognition will be an evolutionary artifact of getting certain practical matters right in certain high-impact environments, plain and simple. Add to this the problem of computational intractability (which metacognition shares with social cognition) and it becomes almost certain that Thespian metacognition would consist of multiple fast and frugal heuristics (because solving on the basis of scarce data requires fewer, not more, parameters geared to particular information structures to be effective). [9] We have very good reason to suspect the Thespian brain would access and process its own structure and dynamics in ways that would cut far more corners than joints. As is the case with social cognition, it would belong to Thespian nature to neglect Thespian nature—to cognize the cognizer as something other, something geared to practical contexts.

Thespians would cognize themselves and their fellows via correlational, as opposed to causal, heuristic cognition. The curse of dimensionality necessitates it. It’s hard, I think, to overstate the impact this would have on an alien species attempting to cognize their nature. What it means is that the Thespians would possess a way to engineer systematically efficacious comportments to themselves, each other, even their environments, without being able to reverse engineer those relationships. What it means, in other words, is that a great deal of their knowledge would be impenetrable—tacit, implicit, automatic, or what have you. Thespians, like humans, would be able to solve a great many problems regarding their relations to themselves, their fellows, and their world without possessing the foggiest idea of how. The ignorance here is structural ignorance, as opposed to the ignorance, say, belonging to original naivete. One would expect the Thespians would be ignorant of their nature absent the cultural scaffolding required to unravel the mad complexity of their brains. But the problem isn’t simply that Thespians would be blind to their inner nature; they would also be blind to this blindness. Since their metacognitive capacities consistently yield the information required to solve in practical, ancestral contexts, the application of those capacities to the theoretical question of their nature would be doomed from the outset. Our Thespians would consistently get themselves wrong.

Is it fair to say they would be amazed by their incapacity, the way our ancestors were? [10] Maybe—who knows. But we could say, given the ecological considerations adduced here, that they would attempt to solve themselves assuming, at least initially, that they could be solved, despite the woefully inadequate resources at their disposal.

In other words, our Thespians would very likely suffer what might be called theoretical anosognosia. In clinical contexts, anosognosia applies to patients who, due to some kind of pathology, exhibit unawareness of sensory or cognitive deficits. Perhaps the most famous example is Anton-Babinski Syndrome, where physiologically blind patients persistently claim they can in fact see. This is precisely what we could expect from our Thespians vis a vis their ‘inner eye.’ The function of metacognitive systems is to engineer environmental solutions via the strategic uptake of limited amounts of information, not to reverse engineer the nature of the brain it belongs to. Repurposing these systems means repurposing systems that generally take the adequacy of their resources for granted. When we catch our tongue at Christmas dinner, we just do; we ‘implicitly assume’ the reliability of our metacognitive capacity to filter our speech. It seems wildly implausible to suppose that theoretically repurposing these systems would magically engender a new biological capacity to automatically assess the theoretical viability of the resources available. It stands to reason, rather, that we would assume sufficiency the same as before, only to find ourselves confounded after the fact.

Of course, saying that our Thespians suffer theoretical anosognosia amounts to saying they would suffer chronic, theoretical hallucinations. And once again, ecological considerations provide a way to guess at the kinds of hallucinations they might suffer.

Dualism is perhaps the most obvious. Aristotle, recall, drew his conclusions assuming the sufficiency of the information available. Contrasting the circular, ageless, repeating motion of the stars and planets to the linear riot of his immediate surroundings, he concluded that the celestial and the terrestrial comprised two distinct ontological orders governed by different natural laws, a dichotomy that prevailed for some 1800 years. The moral is quite clear: Where and how we find ourselves within a system determines what kind of information we can access regarding that system, including information pertaining to the sufficiency of that information. Lacking instrumentation, Aristotle simply found himself in a position where the ontological distinction between heaven and earth appeared obvious. Unable to cognize the limits imposed by his position within the observed systems, he had no idea that he was simply cognizing one unified system from two radically different perspectives, one too near, the other too far.

Trapped in a similar structural bind vis a vis themselves, our navel-gazing Thespians would almost certainly mistake properties pertaining to neglect for properties pertaining to what is: distortions in signal for facts of being. Once again, since the posits possessing those properties belong to correlative cognitive systems, they would resist causal cognition. No matter how hard Thespian philosophers tried, they would find themselves unable to square their apparent functions with the machinations of nature more generally. Correlative functions would appear autonomous, as somehow operating outside the laws of nature. Embedded in their environment in a manner that structurally precludes accurately intuiting that embedment, our alien philosophers would conceive themselves as something apart, ontologically distinct. Thespian philosophy would have its own versions of ‘souls’ or ‘minds’ or ‘Dasein’ or ‘a priori’ or what have you—a disparate order somehow ‘accounting’ for various correlative cognitive modes, by anchoring the bare cognition of constraint in posits (inherited or not) rationalized on the back of Thespian fashion.

Dualisms, however, require that manifest continuities be explained, or explained away. Lacking any ability to intuit the actual machinations binding them to their environments, Thespians would be forced to rely on the correlative deliverances of metacognition to cognize their relation to their world—doing so, moreover, without the least inkling of as much. Given theoretical anosognosia (the inability to intuit metacognitive incapacity), it stands to reason that they would advance any number of acausal versions of this relationship, something similar to ‘aboutness,’ and so reap similar bewilderment. Like us, they would find themselves perpetually unable to decisively characterize ‘knowledge of the world.’ One could easily imagine the perpetually underdetermined nature of these accounts convincing some Thespian philosophers that the deliverances of metacognition comprised the whole of existence (engendering Thespian idealism), or were at least the most certain, most proximate thing, and therefore required the most thorough and painstaking examination (engendering a Thespian phenomenology)…

Could this be right?

This story is pretty complex, so it serves to review the modesty of our working assumptions. The presumption of interstellar evolutionary convergence warranted assuming that Thespian cognition, like human cognition, would be bounded, a complex bundle of ‘kluges,’ heuristic solutions to a wide variety of ecological problems. The fact that Thespians would have to navigate both brute and intricate causal environments, troubleshoot both inorganic and organic contexts, licenses the claim that Thespian cognition would be bifurcated between causal systems and a suite of correlational systems, largely consisting of ‘fast and frugal heuristics,’ given the complexity and/or the inaccessibility of the systems involved. This warranted claiming that both Thespian social cognition and metacognition would be correlational, heuristic systems adapted to solve very complicated ecologies on the basis of scarce data. This posed the inevitable problem of neglect, the fact that Thespians would have no intuitive way of assessing the adequacy of their metacognitive deliverances once they applied them to theoretical questions. This let us suppose theoretical anosognosia, the probability that Thespian philosophers would assume the sufficiency of radically inadequate resources—systematically confuse artifacts of heuristic neglect for natural properties belonging to extraordinary kinds. And this let us suggest they would have their own controversies regarding mind-body dualism, intentionality, even knowledge of the external world.

As with Thespian natural philosophy, any number of caveats can be raised at any number of junctures, I’m sure. What if, for instance, Thespians were simply more pragmatic, less inclined to suffer speculation in the absence of decisive application? Such a dispositional difference could easily tilt the balance in favour of skepticism, relegating the philosopher to the ghettos of Thespian intellectual life. Or what if Thespians were more impressed by authority, to the point where reflection could only be interrogated when refracted through the lens of purported revelation? There can be no doubt that my account neglects countless relevant details. Questions like these chip away at the intuition that the Thespians, or something like them, might be real…

Luckily, however, this doesn’t matter. The point of posing the problem of xenophilosophy wasn’t so much to argue that Thespians are out there, as it was, strangely enough, to recognize them in here.

After all, this exercise in engineering alien philosophy is at once an exercise in reverse-engineering our own. Blind Brain Theory only needs Thespians to be plausible to demonstrate its abductive scope, the fact that it can potentially explain a great many perplexing things on nature’s dime alone.

So then what have we found? That traditional philosophy is something best understood as… what?

A kind of cognitive pathology?

A disease?

Ripley's nightmare


IV: Conclusion

It’s worth, I think, spilling a few words on the subject of that damnable word, ‘experience.’ Dogmatic eliminativism is a religion without gods or ceremony, a relentlessly contrarian creed. And this has placed it in the untenable dialectical position of apparently denying what is most obvious. After all, what could be more obvious than experience?

What do I mean by ‘experience’? Well, the first thing I generally think of is the Holocaust, and the palpable power of the Survivor.

Blind Brain Theory paints a theoretical portrait wherein experience remains the most obvious thing in practical, correlational ecologies, while becoming a deeply deceptive, largely chimerical artifact in high-dimensional, causal ones. We have no inkling of tripping across ecological boundaries when we propose to theoretically examine the character of experience. What was given to deliberative metacognition in some practical context (ruminating upon a social gaffe, say) is now simply given to deliberative metacognition in an artificial one—‘philosophical reflection.’ The difference between applications is nothing if not extreme, and yet conclusions are drawn assuming sufficiency, again and again and again—for millennia.

Think of the difference between your experience and your environment, say, in terms of the difference between concentrating on a mental image of your house and actually observing it. Think of how few questions the mental image can answer compared to the visual image. Where’s the grass the thickest? Is there birdshit on the lane? Which branch comes closest to the ground? These questions just don’t make sense in the context of mental imagery.

Experience, like mental imagery, is something that only answers certain questions. Of course, the great, even cosmic irony is that this is the answer that has been staring us in the fucking face all along. Why else would experience remain an enduring part of philosophy, the institution that asks how things in the most general sense hang together in the most general sense without any rational hope of answer?

Experience is obvious—it can be nothing but obvious. The palpable power of the Holocaust Survivor is, I think, as profound a testament to the humanity of experience as there is. Their experience is automatically our own. Even philosophers shut up! It correlates us in a manner as ancient as our species, allows us to engineer the new. At the same time, it cannot but dupe and radically underdetermine our ancient, Sisyphean ambition to peer into the soul through the glass of the soul. As soon as we turn our rational eye to experience in general, let alone the conditions of possibility of experience, we run afoul of illusions, impossible images that, in our diseased state, we insist are real.

This is what our creaking bookshelves shout in sum. The narratives, they proclaim experience in all its obvious glory, while treatise after philosophical treatise mutters upon the boundary of where our competence quite clearly comes to an end. Where we bicker.


At least we have reason to believe that philosophers are not alone in the universe.

Alien role-reversal



[1] The eliminativism at issue here is meaning eliminativism, and not, as Stich, Churchland, and many others have advocated, psychological eliminativism. But where meaning eliminativism clearly entails psychological eliminativism, it is not at all obvious that psychological eliminativism entails meaning eliminativism. This was why Stich found himself so perplexed by the implications of reference (see his Deconstructing the Mind, especially Chapter 1). To assume that folk psychology is a mistaken theory is to assume that folk psychology is representational, something that is true or false of the world. The critical eliminativism espoused here suffers no such difficulty, but at the added cost of needing to explain meaning in general, and not simply commonsense human psychology.

[2] See Kathryn Denning’s excellent, “Social Evolution in Cosmic Context.”

[3] Nicholas Rescher, for instance, makes hash of the time-honoured assumption that aliens would possess a science comparable to our own by cataloguing the myriad contingencies of the human institution. See Finitude, 28, or Unknowability, “Problems of Alien Cognition,” 21-39.

[4] Stellar parallax, on this planet at least, was not measured until 1838.

[5] In the broad sense proposed by Wilson in The Social Conquest of the Earth.

[6] This amounts to taking a position in the mindreading debate that some theorists would find problematic, particularly those skeptical of modularity and/or with representationalist sympathies. Since the present account provides a parsimonious means of explaining away the intuitions informing both positions, it would be premature to engage the debate regarding either at this juncture. The point is to demonstrate what heuristic neglect, as a theoretical interpretative tool, allows us to do.

[7] The representationalist would cry foul at this point, claim the existence of some coherent ‘functional level’ accessible to deliberative metacognition (the mind) allows for accurate and exhaustive description. But once again, since heuristic neglect explains why we’re so prone to develop intuitions along these lines, we can sidestep this debate as well. Nobody knows what the mind is, or whatever it is they take themselves to be describing. The more interesting question is one of whether a heuristic neglect account can be squared with the research pertaining directly to this field. I suspect so, but for the interim I leave this to individuals more skilled and more serious than myself to investigate.

[8] In the literature, accounts that claim metacognitive functions for mindreading are typically called ‘symmetrical theories.’ Substantial research supports the claim that metacognitive reporting involves social cognition. See Carruthers, “How we know our own minds: the relationship between mindreading and metacognition,” for an outstanding review.

[9] Gerd Gigerenzer and the Adaptive Behaviour and Cognition Research Group have demonstrated that simple heuristics are often far more effective than even optimization methods possessing far greater resources. “As the amount of data available to make predictions in an environment shrinks, the advantage of simple heuristics over complex algorithms grows” (Hertwig and Hoffrage, “The Research Agenda,” Simple Heuristics in a Social World, 23).

[10] “Quid est enim tempus? Quis hoc facile breuiterque explicauerit? Quis hoc ad uerbum de illo proferendum uel cogitatione comprehenderit? Quid autem familiarius et notius in loquendo commemoramus quam tempus? Et intellegimus utique cum id loquimur, intellegimus etiam cum alio loquente id audimus. Quid est ergo tempus? Si nemo ex me quærat, scio; si quærenti explicare uelim, nescio.” Augustine, Confessions, XI.14: “For what is time? Who can easily and briefly explain it? Who can comprehend it even in thought so as to put it into words? Yet what do we mention in speech more familiarly and knowingly than time? And surely we understand it when we speak of it; we understand it also when we hear another speak of it. What, then, is time? If no one asks me, I know; if I wish to explain it to one who asks, I do not know.”

The Lesser Sound and Fury

by rsbakker

So a storm blew through last Tuesday night, a real storm, the kind we haven’t seen in a couple of years at least. I was just finishing up a disastrous night of NHL 13 (because NHL 14 is a rip off) on PS3 (because my PS4 is a paperweight) with my buds out back in the garage. Frank fled. Ken strolled. ‘Good night, Motherfucker.’ ‘Goodnight.’ The sky was alive, strobes glimpsed through the Dark Lord’s laundry, thunder rattling the teeth of the world, booming across houses lined up like molars. I sat on the front porch to watch, squinting more for the booze than for the wind. There had been talk of tornados, but I wasn’t buying it, having lived in Tennessee. No, just a storm. We just don’t get the parking lot heat they need to germinate, let alone to feed. The air lacked the required energy.

The rain fell like gravel. Straight down. Euclidean rain, I thought.

But there was nothing linear about the lightning. The first strike ripped fabric too fundamental to be seen. The second had me out of my stupor as much as out of my seat, blinking for the instantaneous execution of night and shadow. Everything revealed God’s way: too quick to be grasped by eyes so small as these.

I stood, another animal floating in solution. I laughed the laugh of monkeys too stupid to cower. I thought of ancient fools.

The rain fell like gravel, massing across all the terrestrial surfaces, hard enough to shatter into sand, hanging like dust across ankles in summer fields. Then it faded, trailed into silence with analogue perfection, and I found myself standing in a glazed nocturnal world, everything turgid… shivering for the high-altitude chill.

I locked up the house, crawled into bed. I lay in bed listening to the passage of thunder… the far side of some cataclysmic charge. I watched white splash across the skylights.

And then came the blitz.


Something—an artillery shell pilfered from some World War I magazine from the sounds of it—exploded just a few blocks over. The house shook everyone awake.


Closer than the last—even nature believes in the strategic value of carpet bombing.

We huddled together, our small family of three, grinning for terror and wonder. I spoke brave words I can no longer remember.


Loud enough to crack wood, to swear in front of little children.

The next morning I awoke to the smell of a five-year-old farting. It seemed a miracle that everything was intact and sodden—no hoary old trees torn from their sockets, no branches hanging necks broken from powerlines. It seemed miraculous that a beast so vast could stomp through our neighbourhood with nary a casualty. Not a shrub. Not one drowned squirrel.

Only my fucking modem, and a week to remember what it was like, back before all this lesser sound and fury.