Three Pound Brain

No bells, just whistling in the dark…


The Truth Behind the Myth of Correlationism

by rsbakker

A wrong turn lies hidden in the human cultural code, an error that has scuttled our every attempt to understand consciousness and cognition. So much philosophical activity reeks of dead ends: we try and we try, and yet we find ourselves mired in the same ancient patterns of disputation. The majority of thinkers believe the problem is local, that they need only tinker with the tools they’ve inherited. They soldier on, arguing that this or that innovative modification will overcome our confusion. Some, however, believe the problem lies deeper. I’m one of those thinkers, as is Meillassoux. I think the solution lies in speculation bound to the hip of modern science, in something I call ‘heuristic neglect.’ For me, the wrong turn lies in the application of intentional cognition to solve the theoretical problem of intentional cognition. Meillassoux thinks it lies in what he calls ‘correlationism.’

Since I’ve been accused of ‘correlationism’ on a couple of occasions now, I thought it worthwhile tackling the issue in more detail. This will not be an institutional critique à la Golumbia, who manages to identify endless problems with Meillassoux’s presentation while somehow entirely missing his skeptical point: once cognition becomes artifactual, it becomes very… very difficult to understand. Cognitive science is itself fractured over Meillassoux’s issue.

What follows will be a constructive critique, an attempt to explain the actual problem underwriting what Meillassoux calls ‘correlationism,’ and why his attempt to escape that problem simply collapses into more interminable philosophy. The problem that artifactuality poses to the understanding of cognition is very real, and it also happens to fall into the wheelhouse of Heuristic Neglect Theory (HNT). For those souls growing disenchanted with Speculative Realism, but unwilling to fall back into the traditional bosom, I hope to show that HNT not only offers the radical break with tradition that Meillassoux promises, it remains inextricably bound to the details of this, the most remarkable age.

What is correlationism? The experts explain:

Correlation affirms the indissoluble primacy of the relation between thought and its correlate over the metaphysical hypostatization or representational reification of either term of the relation. Correlationism is subtle: it never denies that our thoughts or utterances aim at or intend mind-independent or language-independent realities; it merely stipulates that this apparently independent dimension remains internally related to thought and language. Thus contemporary correlationism dismisses the problematic of scepticism, and of epistemology more generally, as an antiquated Cartesian hang-up: there is supposedly no problem about how we are able to adequately represent reality, since we are ‘always already’ outside ourselves and immersed in or engaging with the world (and indeed, this particular platitude is constantly touted as the great Heideggerean-Wittgensteinian insight). Note that correlationism need not privilege “thinking” or “consciousness” as the key relation—it can just as easily replace it with “being-in-the-world,” “perception,” “sensibility,” “intuition,” “affect,” or even “flesh.” Ray Brassier, Nihil Unbound, 51

By ‘correlation’ we mean the idea according to which we only ever have access to the correlation between thinking and being, and never to either term considered apart from the other. We will henceforth call correlationism any current of thought which maintains the unsurpassable character of the correlation so defined. Consequently, it becomes possible to say that every philosophy which disavows naive realism has become a variant of correlationism. Quentin Meillassoux, After Finitude, 5

Correlationism rests on an argument as simple as it is powerful, and which can be formulated in the following way: No X without givenness of X, and no theory about X without a positing of X. If you speak about something, you speak about something that is given to you, and posited by you. Consequently, the sentence: ‘X is’, means: ‘X is the correlate of thinking’ in a Cartesian sense. That is: X is the correlate of an affection, or a perception, or a conception, or of any subjective act. To be is to be a correlate, a term of a correlation . . . That is why it is impossible to conceive an absolute X, i.e., an X which would be essentially separate from a subject. We can’t know what the reality of the object in itself is because we can’t distinguish between properties which are supposed to belong to the object and properties belonging to the subjective access to the object. Quentin Meillassoux, “Time without Becoming”

The claim of correlationism is the corollary of the slogan that ‘nothing is given’ to understanding: everything is mediated. Once knowing becomes an activity, then the objects insofar as they are known become artifacts in some manner: reception cannot be definitively sorted from projection and as a result no knowledge can be said to be absolute. We find ourselves trapped in the ‘correlationist circle,’ trapped in artifactual galleries, never able to explain the human-independent reality we damn well know exists. Since all cognition is mediated, all cognition is conditional somehow, even our attempts (or perhaps, especially our attempts) to account for those conditions. Any theory unable to decisively explain objectivity is a theory that cannot explain cognition. Ergo, correlationism names a failed (cognitivist) philosophical endeavour.

It’s a testament to the power of labels in philosophy, I think, because as Meillassoux himself acknowledges there’s nothing really novel about the above sketch. Explaining the ‘cognitive difference’ was my dissertation project back in the 90’s, after all, and as smitten as I was with my bullshit solution back then, I didn’t think the problem itself was anything but ancient. Given this whole website is dedicated to exploring and explaining consciousness and cognition, you could say it remains my project to this very day! One of the things I find so frustrating about the ‘critique of correlationism’ is that the real problem—the ongoing crisis—is the problem of meaning. If correlationism fails because correlationism cannot explain cognition, then the problem of correlationism is an expression of a larger problem, the problem of cognition—or in other words, the problem of intentionality.

Why is the problem of meaning an ongoing crisis? In the past six fiscal years, from 2012 to 2017, the National Institutes of Health will have spent more than 113 billion dollars funding research bent on solving some corner of the human soul. [1] And this is just one public institution in one nation involving health-related research. If you include the cognitive sciences more generally—research into everything from consumer behaviour to AI—you could say that solving the human soul commands more resources than any other domain in history. The reason all this money is being poured into the sciences rather than philosophy departments is that the former possesses real world consequences: diseases cured, soap sold, politicians elected. As someone who tries to keep up with developments in Continental philosophy, I already find the disconnect stupendous, how whole populations of thinkers continue discoursing as if nothing significant has changed, bitching about traditional cutlery in the shadow of the cognitive scientific tsunami.

Part of the popularity of the critique of correlationism derives from anxieties regarding the growing overlap of the sciences of the human and the humanities. All thinkers self-consciously engaged in the critique of correlationism reference scientific knowledge as a means of discrediting correlationist thought, but as far as I can tell, the project has done very little to bring the science, what we’re actually learning about consciousness and cognition, to the fore of philosophical debates. Even worse, the notion of mental and/or neural mediation is actually central to cognitive science. What some neuroscientists term ‘internal models,’ which monopolize our access to ourselves and the world, is nothing if not a theoretical correlation of environments and cognition, trapping us in models of models. The very science that Meillassoux thinks argues against correlationism in one context explicitly turns on it in another. The mediation of knowledge is the domain of cognitive science—full stop. A naturalistic understanding of cognition is a biological understanding is an artifactual understanding: this is why the upshot of cognitive science is so often skeptical, prone to further diminish our traditional (if not instinctive) hankering for unconditioned knowledge—to reveal it as an ancestral conceit.

A kind of arche-fossil.

If an artifactual approach to cognition is doomed to misconstrue cognition, then cognitive science is a doomed enterprise. Despite the vast sums of knowledge accrued, the wondrous and fearsome social instrumentalities gained, knowledge itself will remain inexplicable. What we find lurking in the bones of Meillassoux’s critique, in other words, is precisely the same commitment to intentional exceptionality we find in all traditional philosophy, the belief that the subject matter of traditional philosophical disputation lies beyond the pale of scientific explanation… that despite the cognitive scientific tsunami, traditional intentional speculation lies secure in its ontological bunkers.

Only more philosophy, Meillassoux thinks, can overcome the ‘scandal of philosophy.’ But how is mere opinion supposed to provide bona fide knowledge of knowledge? Speculation on mathematics does nothing to ameliorate this absurdity: even though paradigmatic of objectivity, mathematics remains as inscrutable as knowledge itself. Perhaps there is some sense to be found in the notion of interrogating/theorizing objects in a bid to understand objectivity (cognition), but given what we now know regarding our cognitive shortcomings in low-information domains, we can be assured that ‘object-oriented’ approaches will bog down in disputation.

I just don’t know how to make the ‘critique of correlationism’ workable, short ignoring the very science it takes as its motivation, or just as bad, subordinating empirical discoveries to some school of ‘fundamental ontological’ speculation. If you’re willing to take such a leap of theoretical faith, you can be assured that no one in the vicinity of cognitive science will take it with you—and that you will make no difference in the mad revolution presently crashing upon us.

We know that knowledge is somehow an artifact of neural function—full stop. Meillassoux is quite right to say this renders the objectivity of knowledge very difficult to understand. But why think the problem lies in presuming the artifactual nature of cognition?—especially now that science has begun reverse-engineering that nature in earnest! What if our presumption of artifactuality weren’t so much the problem, as the characterization? What if the problem isn’t that cognitive science is artifactual so much as how it is?

After all, we’ve learned a tremendous amount about this how in the past decades: the idea of dismissing all this detail on the basis of a priori guesswork seems more than a little suspect. The track record would suggest extreme caution. As the boggling scale of the cognitive scientific project should make clear, everything turns on the biological details of cognition. We now know, for instance, that the brain employs legions of special purpose devices to navigate its environments. We know that cognition is thoroughly heuristic, that it turns on cues, bits of available information statistically correlated to systems requiring solution.

Most all systems in our environment shed information enabling the prediction of subsequent behaviours absent the mechanical particulars of that information. The human brain is exquisitely tuned to identify and exploit the correlation of information available and subsequent behaviours. The artifactuality of biology is an evolutionary one, and as such geared to the thrifty solution of high impact problems. To say that cognition (animal or human) is heuristic is to say it’s organized according to the kinds of problems our ancestors needed to solve, and not according to those belonging to academics. Human cognition consists of artifactualities, subsystems dedicated to certain kinds of problem ecologies. Moreover, it consists of artifactualities selected to answer questions quite different from those posed by philosophers.

These two facts drastically alter the landscape of the apparent problem posed by ‘correlationism.’ We have ample theoretical and empirical reasons to believe that mechanistic cognition and intentional cognition comprise two quite different cognitive regimes, the one dedicated to explanation via high-dimensional (physical) sourcing, the other dedicated to explanation absent that sourcing. As an intentional phenomenon, objectivity clearly belongs to the latter. Mechanistic cognition, meanwhile, is artifactual. What if it’s the case that ‘objectivity’ is the turn of a screw in a cognitive system selected to solve in the absence of artifactual information? Since intentional cognition turns on specific cues to leverage solutions, and since those cues appear sufficient (to be the only game in town where that behaviour is concerned), the high-dimensional sourcing of that same behaviour generates a philosophical crash space—and a storied one at that! What seems sourceless and self-evident becomes patently impossible.

Short magic, cognitive systems possess the environmental relationships they do thanks to super-complicated histories of natural and neural selection—evolution and learning. Let’s call this their orientation, understood as the nonintentional (‘zombie’) correlate of ‘perspective.’ The human brain is possibly the most complex thing we know of in the universe (a fact which should render any theory of the human neglecting that complexity suspect). Our cognitive systems, in other words, possess physically intractable orientations. How intractable? Enough that billions of dollars in research has merely scratched the surface.

Any capacity to cognize this relationship will perforce be radically heuristic, which is to say, provide a means to solve some critical range of problems—a problem ecology—absent natural historical information. The orientation heuristically cognized, of course, is the full-dimensional relationship we actually possess, only hacked in ways that generate solutions (repetitions of behaviour) while neglecting the physical details of that relationship.

Most significantly, orientation neglects the dimension of mediation: thought and perception (whatever they amount to) are thoroughly blind to their immediate sources. This cognitive blindness to the activity of cognition, or medial neglect, amounts to a gross insensitivity to our physical continuity with our environments, the fact that we break no thermodynamic laws. Our orientation, in other words, is characterized by a profound, structural insensitivity to its own constitution—its biological artifactuality, among other things. This auto-insensitivity, not surprisingly, includes insensitivity to the fact of this insensitivity, and thus the default presumption of sufficiency. Specialized sensitivities are required to flag insufficiencies, after all, and like all biological devices, they do not come for free. Not only are we blind to our position within the superordinate systems comprising nature, we’re blind to our blindness, and so, unable to distinguish table-scraps from a banquet, we are duped into affirming inexplicable spontaneities.

‘Truth’ belongs to our machinery for communicating (among other things) the sufficiency of iterable orientations within superordinate systems given medial neglect. You could say it’s a way to advertise clockwork positioning (functional sufficiency) absent any inkling of the clock. ‘Objectivity,’ the term denoting the supposed general property of being true apart from individual perspectives, is a deliberative contrivance derived from practical applications of ‘truth’—the product of ‘philosophical reflection.’ The problem with objectivity as a phenomenon (as opposed to ‘objectivity’ as a component of some larger cognitive articulation) is that the sufficiency of iterable orientations within superordinate systems is always a contingent affair. Whether ‘truth’ occasions sufficiency is always an open question, since the system provides, at best, a rough and ready way to communicate and/or troubleshoot orientation. Unpredictable events regularly make liars of us all. The notion of facts ‘being true’ absent the mediation of human cognition, ‘objectivity,’ also provides a rough and ready way to communicate and/or troubleshoot orientation in certain circumstances. We regularly predict felicitous orientations without the least sensitivity to their artifactual nature, absent any inkling how their pins lie in intractable high-dimensional coincidences between buzzing brains. This insensitivity generates the illusion of absolute orientation, a position outside natural regularities—a ‘view from nowhere.’ We are a worm in the gut of nature convinced we possess disembodied eyes. And so long as the consequences of our orientations remain felicitous, our conceit need not be tested. Our orientations might as well ‘stand nowhere’ absent cognition of their limits.

Thus can ‘truth’ and ‘objectivity’ be naturalized and their peculiarities explained.

The primary cognitive moral here is that lacking information has positive cognitive consequences, especially when it comes to deliberative metacognition, our attempts to understand our nature via philosophical reflection alone. Correlationism evidences this in a number of ways.

As soon as the problem of cognition is characterized as the problem of thought and being, it becomes insoluble. Intentional cognition is heuristic: it neglects the nature of the systems involved, exploiting cues correlated to the systems requiring solution instead. The application of intentional cognition to theoretical explanation, therefore, amounts to the attempt to solve natures using a system adapted to neglect natures. A great deal of traditional philosophy is dedicated to the theoretical understanding of cognition via intentional idioms—via applications of intentional cognition. Thus the morass of disputation. We presume that specialized problem-solving systems possess general application. Lacking the capacity to cognize our inability to cognize the theoretical nature of cognition, we presume sufficiency. Orientation, the relation between neural systems and their proximal and distal environments—between two systems of objects—becomes perspective, the relation between subjects (or systems of subjects) and systems of objects (environments). If one conflates the manifest artifactual nature of orientation for the artifactual nature of perspective (subjectivity), then objectivity itself becomes a subjective artifact, and therefore nothing objective at all. Since orientation characterizes our every attempt to solve for cognition, conflating it with perspective renders perspective inescapable, and objectivity all but inexplicable. Thus the crash space of traditional epistemology.

Now I know from hard experience that the typical response to the picture sketched above is to simply insist on the conflation of orientation and perspective, to assert that my position, despite its explanatory power, simply amounts to more of the same, another perspectival Klein Bottle distinctive only for its egregious ‘scientism.’ Only my intrinsically intentional perspective, I am told, allows me to claim that such perspectives are metacognitive artifacts, a consequence of medial neglect. But asserting perspective before orientation on the basis of metacognitive intuitions alone not only begs the question, it also beggars explanation, delivering the project of cognizing cognition to never-ending disputation—an inability to even formulate explananda, let alone explain anything. This is why I like asking intentionalists how many centuries of theoretical standstill we should expect before that oft advertised and never delivered breakthrough finally arrives. The sin Meillassoux attributes to correlationism, the inability to explain cognition, is really just the sin belonging to intentional philosophy as a whole. Thanks to medial neglect, metacognition, blind to both its sources and its source blindness, insists we stand outside nature. Tackling this intuition with intentional idioms leaves our every attempt to rationalize our connection underdetermined, a matter of interminable controversy. The Scandal dwells on eternal.

I think orientation precedes perspective—and obviously so, having watched loved ones dismantled by brain disease. I think understanding the role of neglect in orientation explains the peculiarities of perspective, provides a parsimonious way to understand the apparent first-person in terms of the neglect structure belonging to the third. There’s no problem with escaping the dream tank and touching the world simply because there’s no ontological distinction between ourselves and the cosmos. We constitute a small region of a far greater territory, the proximal attuned to the distal. Understanding the heuristic nature of ‘truth’ and ‘objectivity,’ I restrict their application to adaptive problem-ecologies, and simply ask those who would turn them into something ontologically exceptional why they would trust low-dimensional intuitions over empirical data, especially when those intuitions pretty much guarantee perpetual theoretical underdetermination. Far better trust to our childhood presumptions of truth and reality, in the practical applications of these idioms, than in any one of the numberless theoretical misapplications ‘discovering’ this trust fundamentally (as opposed to situationally) ‘naïve.’

The cognitive difference, what separates the consequences of our claims, has never been about ‘subjectivity’ versus ‘objectivity,’ but rather intersystematicity, the integration of ever-more sensitive orientations possessing ever more effectiveness into the superordinate systems encompassing us all. Physically speaking, we’ve long known that this has to be the case. Short actual difference making differences, be they photons striking our retinas or compression waves striking our eardrums or so on, no difference is made. Even Meillassoux acknowledges the necessity of physical contact. What we’ve lacked is a way of seeing how our apparently immediate intentional intuitions, be they phenomenological, ontological, or normative, fit into this high-dimensional—physical—picture.

Heuristic Neglect Theory not only provides this way, it also explains why it has proven so elusive over the centuries. HNT explains the wrong turn mentioned above. The question of orientation immediately cues the systems our ancestors developed to circumvent medial neglect. Solving for our behaviourally salient environmental relationships, in other words, automatically formats the problem in intentional terms. The automaticity of the application of intentional cognition renders it apparently ‘self-evident.’

The reason the critique of correlationism and speculative realism suffer all the problems of underdetermination their proponents attribute to correlationism is that they take this very same wrong turn. How is Meillassoux’s ‘hyper-chaos,’ yet another adventure in a priori speculation, anything more than another pebble tossed upon the heap of traditional disputation? Novelty alone recommends them. Otherwise they leave us every bit as mystified, every bit as unable to accommodate the torrent of relevant scientific findings, and therefore every bit as irrelevant to the breathtaking revolutions even now sweeping us and our traditions out to sea. Like the traditions they claim to supersede, they peddle cognitive abjection, discursive immobility, in the guise of fundamental insight.

Theoretical speculation is cheap, which is why it’s so frightfully easy to make any philosophical account look bad. All you need do is start worrying definitions, then let the conceptual games begin. This is why the warrant of any account is always a global affair, why the power of Evolutionary Theory, for example, doesn’t so much lie in the immunity of its formulations to philosophical critique, but in how much it explains on nature’s dime alone. The warrant of Heuristic Neglect Theory likewise turns on the combination of parsimony and explanatory power.

Anyone arguing that HNT necessarily presupposes some X, be it ontological or normative, is simply begging the question. Doesn’t HNT presuppose the reality of intentional objectivity? Not at all. HNT certainly presupposes applications of intentional cognition, which, given medial neglect, philosophers posit as functional or ontological realities. On HNT, a theory can be true even though, high-dimensionally speaking, there is no such thing as truth. Truth talk possesses efficacy in certain practical problem-ecologies, but because it participates in solving something otherwise neglected, namely the superordinate systematicity of orientations, it remains beyond the pale of intentional resolution.

Even though sophisticated critics of eliminativism acknowledge the incoherence of the tu quoque, I realize this remains a hard twist for many (if not most) to absorb, let alone accept. But this is exactly as it should be, both insofar as something has to explain why isolating the wrong turn has proven so stupendously difficult, and because this is precisely the kind of trap we should expect, given the heuristic and fractionate nature of human cognition. ‘Knowledge’ provides a handle on the intersection of vast, high-dimensional histories, a way to manage orientations without understanding the least thing about them. To know knowledge, we will come to realize, is to know there is no such thing, simply because ‘knowing’ is a resolutely practical affair, almost certainly inscrutable to intentional cognition. When you’re in the intentional mode, this statement simply sounds preposterous—I know it once struck me as such! It’s only when you appreciate how far your intuitions have strayed from those of your childhood, back when your only applications of intentional cognition were practical, that you can see the possibility of a more continuous, intersystematic way to orient ourselves to the cosmos. There was a time before you wandered into the ancient funhouse of heuristic misapplication, when you could not distinguish between your perspective and your orientation. HNT provides a theoretical way to recover that time and take a radically different path.

As a bona fide theory of cognition, HNT provides a way to understand our spectacular inability to understand ourselves. HNT can explain ‘aporia.’ The metacognitive resources recruited for the purposes of philosophical reflection possess alarm bells—sensitivities to their own limits—relevant only to their ancestral applications. The kinds of cognitive apories (crash spaces) characterizing traditional philosophy are precisely those we might expect, given the sudden ability to exercise specialized metacognitive resources out of school, to apply, among other things, the problem-solving power of intentional cognition to the question of intentional cognition.

As a bona fide theory of cognition, HNT bears as much on artificial cognition as on biological cognition, and as such, can be used to understand and navigate the already radical and accelerating transformation of our cognitive ecologies. HNT scales, from the subpersonal to the social, and this means that HNT is relevant to the technological madness of the now.

As a bona fide empirical theory, HNT, unlike any traditional theory of intentionality, will be sorted. Either science will find that metacognition actually neglects information in the ways I propose, or it won’t. Either science will find this neglect possesses the consequences I theorize, or it won’t. Nothing exceptional and contentious is required. With our growing understanding of the brain and consciousness comes a growing understanding of information access and processing capacity—and the neglect structures that fall out of them. The human brain abounds in bottlenecks, none of which are more dramatic than consciousness itself.

Cognition is biomechanical. The ‘correlation of thought and being,’ on my account, is the correlation of being and being. The ontology of HNT is resolutely flat. Once we understand that we only glimpse as much of our orientations as our ancestors required for reproduction, and nothing more, we can see that ‘thought,’ whatever it amounts to, is material through and through.

The evidence of this lies strewn throughout the cognitive wreckage of speculation, the alien crash site of philosophy.



[1] This includes, in addition to the neurosciences proper, research into Basic Behavioral and Social Science (8.597 billion), Behavioral and Social Science (22.515 billion), Brain Disorders (23.702 billion), Mental Health (13.699 billion), and Neurodegenerative (10.183 billion). 21/01/2017


Real Systems

by rsbakker


Now I’ve never had any mentors; my path has been too idiosyncratic, for the better, since I think it’s the lack of institutional constraints that has allowed me to experiment the way I have. But if I were pressed to name any spiritual mentor, Daniel Dennett would be the first name to cross my lips—without the least hesitation. Nevertheless, I see the theoretical jewel of his project, the intentional stance, as the last gasp of what will one day, I think, count as one of humanity’s great confusions… and perhaps the final one to succumb to science.

A great many disagree, of course, and because I’ve been told so many times to go back to “Real Patterns” to discover the error of my ways, I’ve decided I would use it to make my critical case.

Defenders of Dennett (including Dennett himself) are so quick to cite “Real Patterns,” I think, because it represents his most sustained attempt to situate his position relative to his fellow philosophical travelers. At issue is the reality of ‘intentional states,’ and how the traditional insistence on some clear-cut binary answer to this question—real/unreal—radically underestimates the ontological complexity characterizing both everyday life and the sciences. What he proposes is “an intermediate doctrine” (29), a way of understanding intentional states as real patterns.

I have claimed that beliefs are best considered to be abstract objects rather like centers of gravity. Smith considers centers of gravity to be useful fictions while Dretske considers them to be useful (and hence?) real abstractions, and each takes his view to constitute a criticism of my position. The optimistic assessment of these opposite criticisms is that they cancel each other out; my analogy must have hit the nail on the head. The pessimistic assessment is that more needs to be said to convince philosophers that a mild and intermediate sort of realism is a positively attractive position, and not just the desperate dodge of ontological responsibility it has sometimes been taken to be. I have just such a case to present, a generalization and extension of my earlier attempts, via the concept of a pattern. My aim on this occasion is not so much to prove that my intermediate doctrine about the reality of psychological states is right, but just that it is quite possibly right, because a parallel doctrine is demonstrably right about some simpler cases. 29

So what does he mean by ‘real patterns’? Dennett begins by considering a diagram with six rows of five black boxes, each characterized by varying degrees of noise, so extreme in some cases as to completely obscure the boxes. He then, following the grain of his characteristic genius, provides a battery of different ways these series might find themselves used.

This crass way of putting things—in terms of betting and getting rich—is simply a vivid way of drawing attention to a real, and far from crass, trade-off that is ubiquitous in nature, and hence in folk psychology. Would we prefer an extremely compact pattern description with a high noise ratio or a less compact pattern description with a lower noise ratio? Our decision may depend on how swiftly and reliably we can discern the simple pattern, how dangerous errors are, how much of our resources we can afford to allocate to detection and calculation. These “design decisions” are typically not left to us to make by individual and deliberate choices; they are incorporated into the design of our sense organs by genetic evolution, and into our culture by cultural evolution. The product of this design evolution process is what Wilfrid Sellars calls our manifest image, and it is composed of folk physics, folk psychology, and the other pattern-making perspectives we have on the buzzing blooming confusion that bombards us with data. The ontology generated by the manifest image has thus a deeply pragmatic source. 36

The moral is straightforward: the kinds of patterns that data sets yield are both perspectival and pragmatic. In each case, the pattern recognized is quite real, but bound upon some potentially idiosyncratic perspective possessing some potentially idiosyncratic needs.
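
Dennett elsewhere in the article cashes ‘pattern’ out in terms of compression: a pattern is real in some data if there is a description of the data more efficient than the bit map. A minimal sketch of the compactness/noise trade-off, with zlib standing in (my choice, not Dennett's) for ‘pattern description’:

```python
import random
import zlib

random.seed(0)

def compressed_size(bits):
    """Length in bytes of the zlib-compressed byte string."""
    return len(zlib.compress(bytes(bits)))

# A 'barcode' of black boxes (runs of 1s separated by gaps), then the
# same series with 25% of its bits flipped (noise), then pure noise.
clean = ([1] * 10 + [0] * 10) * 30                   # 600 bits, highly regular
noisy = [b ^ (random.random() < 0.25) for b in clean]
pure_noise = [random.randint(0, 1) for _ in clean]

for label, series in [("clean", clean), ("noisy", noisy), ("random", pure_noise)]:
    print(label, compressed_size(series))
```

The clean series compresses to a small fraction of its bit map; the noisy one costs more; and as the noise ratio climbs, the ‘pattern description’ approaches the size of the bit map itself, at which point there is no pattern left to be real.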

He then takes this moral to Conway’s Game of Life, a computer program where cells in a grid are switched on or off in successive turns depending on the number of adjacent cells switched on. The marvelous thing about this program lies in the kinds of dynamic complexities arising from this simple template and single rule, subsystems persisting from turn to turn, encountering other subsystems with predictable results. Despite the determinism of this system, patterns emerge that only the design stance seems to adequately capture, a level possessing “its own language, a transparent foreshortening of the tedious descriptions one could give at the physical level” (39).
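
For readers who have never toyed with it, the Game’s single rule fits in a few lines (a sketch in Python, mine rather than Dennett’s; the glider is one of those ‘subsystems persisting from turn to turn’):

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life. `live` is the set of
    (x, y) coordinates of 'on' cells on an unbounded grid."""
    # Count how many live neighbours every candidate cell has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # The rule: a cell is on next turn iff it has exactly three live
    # neighbours, or is already on and has exactly two.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider, a persisting subsystem: after four generations it
# reappears intact, shifted one cell diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})  # -> True
```

Nothing in the physics-level update rule mentions ‘gliders,’ yet the design-stance prediction—‘the glider will be one cell down and one cell right in four turns’—is exact.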

For Dennett, the fact that one can successfully predict via the design stance clearly demonstrates that it’s picking out real patterns somehow. He asks us to imagine transforming the Game into a supersystem played out on a screen miles wide and using the patterns picked out to design a Turing Machine playing chess against itself. Here, Dennett argues, tracking the deterministic microphysical picture is either intractable or impracticable, yet we need only take up a chess stance or a computational stance to make, from a naive perspective, stunning predictions as to what will happen next.

And this is of course as true of real life as it is of the Game of Life: “Predicting that someone will duck if you throw a brick at him is easy from the folk-psychological stance; it is and will always be intractable if you have to trace the photons from brick to eyeball, the neurotransmitters from optic nerve to motor nerve, and so forth” (42). His supersized Game of Life, in other words, makes plain the power and the limitations of heuristic cognition.

This brings him to his stated aim of clarifying his position vis-à-vis his confreres and Fodor. As he points out, everyone agrees there’s some kind of underlying “order which is there,” as Anscombe puts it in Intention. The million-dollar question, of course, is what this order amounts to:

Fodor and others have claimed that an interior language of thought is the best explanation of the hard edges visible in “propositional attitude psychology.” Churchland and I have offered an alternative explanation of these edges… The process that produces the data of folk psychology, we claim, is one in which the multidimensional complexities of the underlying processes are projected through linguistic behavior, which creates an appearance of definiteness and precision, thanks to the discreteness of words. 44-45

So for traditional realists, like Fodor, the structure beliefs evince in reflection and discourse expresses the structure beliefs must possess in the head. For Dennett, on the other hand, the structure beliefs evince in reflection and discourse expresses, among other things, the structure of reflection and discourse. How could it be otherwise, he asks, given the ‘stupendous scale of compression’ (42) involved?

As Haugeland points out in “Pattern and Being,” this saddles Dennett’s account of patterns with a pretty significant ambiguity: if the patterns characteristic of intentional states express the structure of reflection and discourse, then the ‘order which is there’ must be here as well. Of course, this much is implicit in Dennett’s preamble: the salience of certain patterns depends on the perspective we possess on them. But even though this implicit ‘here-there holism’ becomes all but explicit when Dennett turns to Radical Translation and the distinction between his and Davidson’s views, his emphasis nevertheless remains on the order out there. As he writes:

Davidson and I both like Churchland’s alternative idea of propositional-attitude statements as indirect “measurements” of a reality diffused in the behavioral dispositions of the brain (and body). We think beliefs are quite real enough to call real just so long as belief talk measures these complex behavior-disposing organs as predictively as it does. 45-46

Rhetorically (even diagrammatically if one takes Dennett’s illustrations into account), the emphasis is on the order there, while here is merely implied as a kind of enabling condition. Call this the ‘epistemic-ontological ambiguity’ (EOA). On the one hand, it seems to make eminent sense to speak of patterns visible only from certain perspectives and to construe them as something there, independent of any perspective we might take on them. But on the other hand, it seems to make jolly good sense to speak of patterns visible only from certain perspectives and to construe them as around here, as something entirely dependent on the perspective we find ourselves taking. Because of this, it seems pretty fair to ask Dennett which kind of pattern he has in mind here. To speak of beliefs as dispositions diffused in the brain seems to pretty clearly imply the first. To speak of beliefs as low dimensional, communicative projections, on the other hand, seems to clearly imply the latter.

Why this ambiguity? Do the patterns underwriting belief obtain in individual believers, dispositionally diffused as he says, or do they obtain in the communicative conjunction of witnesses and believers? Dennett promised to give us ‘parallel examples’ warranting his ‘intermediate realism,’ but by simply asking the whereabouts of the patterns, whether we will find them primarily out there as opposed to around here, we quickly realize his examples merely recapitulate the issue they were supposed to resolve.



Welcome to crash space. If I’m right then you presently find yourself strolling through a cognitive illusion generated by the application of heuristic capacities outside their effective problem ecology.

Think of how curious the EOA is. The familiarity of it should be nothing short of gobsmacking: here, once again we find ourselves stymied by the same old dichotomies: here versus there, inside versus outside, knowing versus known. Here, once again we find ourselves trapped in the orbit of the great blindspot that still, after thousands of years, stumps the wise of the world.

What the hell could be going on?

Think of the challenge facing our ancestors attempting to cognize their environmental relationships for the purposes of communication and deliberate problem-solving. The industrial scale of our ongoing attempt to understand as much demonstrates the intractability of that relationship. Apart from our brute causal interactions, our ability to cognize our cognitive relationships is source insensitive through and through. When a brick is thrown at us, “the photons from brick to eyeball, the neurotransmitters from optic nerve to motor nerve, and so forth” (42) all go without saying. In other words, the whole system enabling cognition of the brick throwing is neglected, and only information relevant to ancestral problem-solving—in this case, brick throwing—finds its way to conscious broadcast.

In ancestral cognitive ecologies, our high-dimensional (physical) continuity with nature mattered as much as it matters now, but it quite simply did not exist for them. They belonged to any number of natural circuits across any number of scales, and all they had to go on was the information that mattered (disposed them to repeat and optimize behaviours) given the resources they possessed. Just as Dennett argues, human cognition is heuristic through and through. We have no way of cognizing our position within any number of the superordinate systems science has revealed in nature, so we have to make do with hacks, subsystems allowing us to communicate and troubleshoot our relation to the environment while remaining almost entirely blind to it. About talk belongs to just such a subsystem, a kluge communicating and troubleshooting our relation to our environments absent cognition of our position in larger systems. As I like to say, we’re natural in such a way as to be incapable of cognizing ourselves as natural.

About talk facilitates cognition and communication of our worldly relation absent any access to the physical details of that relation. And as it turns out, we are that occluded relation’s most complicated component—we are the primary thing neglected in applications of about talk. As the thing most neglected, we are the thing most presumed, the invariant background guaranteeing the reliability of about talk (this is why homuncular arguments are so empty). This combination of cognitive insensitivity to and functional dependence upon the machinations of cognition (what I sometimes refer to as medial neglect) suggests that about talk would be ideally suited to communicating and troubleshooting functionally independent systems, processes generally insensitive to our attempts to cognize them. This is because the details of cognition make no difference to the details cognized: the automatic distinction about talk draws between the cognizing system and the system cognized poses no impediment to understanding functionally independent systems. As a result, we should expect about talk to be relatively unproblematic when it comes to communicating and troubleshooting things ‘out there.’

Conversely, we should expect about talk to generate problems when it comes to communicating and troubleshooting functionally dependent systems, processes somehow sensitive to our attempts to cognize them. Consider ‘observer effects,’ the problem researchers themselves pose when their presence or their tools/techniques interfere with the process they are attempting to study. Given medial neglect, the researchers themselves always constitute a black box. In the case of systems functionally sensitive to the activity of cognition, as is often the case in psychology and particle physics, understanding the system requires we somehow obviate our impact on the system. As the interactive, behavioural components of cognition show, we are in fact quite good (though far from perfect) at inserting and subtracting our interventions in processes. But since we remain a black box, since our position in the superordinate systems formed by our investigations remains occluded, our inability to extricate ourselves, to gerrymander functional independence, say, undermines cognition.

Even if we necessarily neglect our positions in superordinate systems, we need some way of managing the resulting vulnerabilities, to appreciate that patterns may be artifacts of our position. This suggests one reason, at least, for the affinity of mechanical cognition and ‘reality.’ The more our black box functions impact the system to be cognized, the less cognizable that system becomes in source sensitive terms. We become an inescapable source of noise. Thus our intuitive appreciation of the need for ‘perspective,’ to ‘rise above the fray’: The degree to which a cognitive mode preserves (via gerrymandering if not outright passivity) the functional independence of a system is the degree to which that cognitive mode enables reliable source sensitive cognition is the degree to which about talk can be effectively applied.

The deeper our entanglements, on the other hand, the more we need to rely on source insensitive modes of cognition to cognize target systems. Even if our impact renders the isolation of source signals impossible, our entanglement remains nonetheless systematic, meaning that any number of cues correlated in any number of ways to the target system can be isolated (which is really all ‘radical translation’ amounts to). Given that metacognition is functionally entangled by definition, it becomes easy to see why the theoretical question of cognition causes about talk to crash in the spectacular ways it does: our ability to neglect the machinations of cognition (the ‘order which is here’) is a boundary condition for the effective application of ‘orders which are there’—or seeing things as real. Systems adapted to work around the intractability of our cognitive nature find themselves compulsively applied to the problem of our cognitive nature. We end up creating a bestiary of sourceless things, things that, thanks to the misapplication of the aboutness heuristic, have to belong to some ‘order out there,’ and yet cannot be sourced like anything else out there… as if they were unreal.

The question of reality cues the application of about talk, our source insensitive means of communicating and troubleshooting our cognitive relation to the world. For our ancient ancestors, who lacked the means to distinguish between source sensitive and source insensitive modes of cognition, asking, ‘Are beliefs real?’ would have sounded insane. HNT, in fact, provides a straightforward explanation for what might be called our ‘default dogmatism,’ our reflex for naive realism: not only do we lack any sensitivity to the mechanics of cognition, we lack any sensitivity to this insensitivity. This generates the persistent illusion of sufficiency, the assumption (regularly observed in different psychological phenomena) that the information provided is all the information there is.

Cognition of cognitive insufficiency always requires more resources, more information. Sufficiency is the default. This is what makes the novel application of some potentially ‘good trick,’ as Dennett would say, such tricky business. Consider philosophy. At some point, human culture acquired the trick of recruiting existing metacognitive capacities to explain the visible in terms of the invisible in unprecedented (theoretical) ways. Since those metacognitive capacities are radically heuristic, specialized consumers of select information, we can suppose retasking those capacities to solve novel problems—as philosophers do when they, for instance, ‘ponder the nature of knowledge’—would run afoul of some pretty profound problems. Even if those specialized metacognitive consumers possessed the capacity to signal cognitive insufficiency, we can be certain the insufficiency flagged would be relative to some adaptive problem-ecology. Blind to the heuristic structure of cognition, the first philosophers took the sufficiency of their applications for granted, much as very many do now, despite the millennia of prior failure.

Philosophy inherited our cognitive innocence and transformed it, I would argue, into a morass of competing cognitive fantasies. But if it failed to grasp the heuristic nature of much cognition, it did allow, as if by delayed exposure, a wide variety of distinctions to blacken the photographic plate of philosophical reflection—that between is and ought, fact and value, among them. The question, ‘Are beliefs real?’ became more a bona fide challenge than a declaration of insanity. Given insensitivity to the source insensitive nature of belief talk, however, the nature of the problem entirely escaped them. Since the question of reality cues the application of about talk, source insensitive modes of cognition struck them as the only game in town. Merely posing the question springs the trap (for as Dennett says, selecting cues is “typically not left to us to make by individual and deliberate choices” (36)). And so they found themselves attempting to solve the hidden nature of cognition via the application of devices adapted to ignore hidden natures.

Dennett runs into the epistemic-ontological ambiguity because the question of the reality of intentional states cues the about heuristic out of school, cedes the debate to systems dedicated to gerrymandering solutions absent high-dimensional information regarding our cognitive predicament—our position within superordinate systems. Either beliefs are out there, real, or they’re in here, merely an enabling figment of some kind. And as it turns out, IST is entirely amenable to this misapplication, in that ‘taking the intentional stance’ involves cuing the about heuristic, thus neglecting our high-dimensional cognitive predicament. On Dennett’s view, recall, an intentional system is any system that can be predicted/explained/manipulated via the intentional stance. Though the hidden patterns can only be recognized from the proper perspective, they are there nonetheless, enough, Dennett thinks, to concede them reality as intentional systems.

Heuristic Neglect Theory allows us to see how this amounts to mistaking a CPU for a PC. On HNT, the trick is to never let the superordinate systems enabling and necessitating intentional cognition out of view. Recall the example of the gaze heuristic from my prior post, how fielders essentially insert—functionally entangle—themselves into the pop fly system to let the ball itself guide them in. The same applies to beliefs. When your tech repairs your computer, you have no access to her personal history, the way thousands of hours have knapped her trouble-shooting capacities, and even less access to her evolutionary history, the way continual exposure to problematic environments has sculpted her biological problem-solving capacities. You have no access, in other words, to the vast systems of quite natural relata enabling her repair. The source sensitive story is unavailable, so you call her ‘knowledgeable’ instead; you presume she possesses something—a fetish, in effect—endowed with the sourceless efficacy explaining her almost miraculous ability to make your PC run: a mass of true beliefs (representations) regarding personal computer repair. You opt for a source insensitive means that correlates with her capacities well enough to neglect the high-dimensional facts—the natural and personal histories—underwriting her ability.
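
The gaze heuristic itself makes the point vivid. In one standard formulation (Chapman’s strategy), the fielder runs so that the tangent of the gaze elevation angle rises at a constant rate; the ball itself guides her to the catch, and the landing point is never computed. A toy simulation, with every number invented for illustration:

```python
G = 9.8  # gravity, m/s^2 (all figures here are invented for illustration)

def fly_ball(vx, vz, dt=0.001):
    """Sampled (t, x, height) positions of a ballistic fly ball until it lands."""
    t, points = dt, []
    while True:
        h = vz * t - 0.5 * G * t * t
        if h <= 0:
            return points
        points.append((t, vx * t, h))
        t += dt

def gaze_heuristic_fielder(flight, c=1.0):
    """At each instant, stand wherever makes the tangent of the gaze
    elevation angle equal c * t -- i.e., keep the ball's image rising
    at a constant rate. The landing point is never computed."""
    return [(t, x_ball + h / (c * t)) for (t, x_ball, h) in flight]

vx, vz = 15.0, 20.0                  # launch velocity components
flight = fly_ball(vx, vz)
path = gaze_heuristic_fielder(flight)

landing_x = vx * (2 * vz / G)        # where the ball actually lands
final_t, final_x = path[-1]
print(abs(final_x - landing_x) < 0.5)  # -> True: the fielder meets the ball
```

Holding the image’s rise constant forces the fielder’s distance to the ball to shrink to zero exactly as the ball descends to the ground: she intercepts it while remaining blind to the ballistics.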

So then where does the ‘real pattern’ underwriting the reality of belief lie? The realist would say in the tech herself. This is certainly what our (heuristic) intuitions tell us in the first instance. But as we saw above, squaring sourceless entities in a world where most everything has a source is no easy task. The instrumentalist would say in your practices. This certainly lets us explain away some of the peculiarities crashing our realist intuitions, but at the cost of other, equally perplexing problems (this is crash space, after all). As one might expect, substituting the use heuristic for the about heuristic merely passes the hot potato of source insensitivity. ‘Pragmatic functions’ are no less difficult to square with the high-dimensional than beliefs.

But it should be clear by now that the simple act of pairing beliefs with patterns amounts to jumping the same ancient shark. The question, ‘Are beliefs real?’ was a no-brainer for our preliterate ancestors simply because they lived in a seamless shallow information cognitive ecology. Outside their local physics, the sources of things eluded them altogether. ‘Of course beliefs are real!’ The question was a challenge for our philosophical ancestors because they lived in a fractured shallow information ecology. They could see enough between the cracks to appreciate the potential extent and troubling implications of mechanical cognition, its penchant to crash our shallow (ancestral) intuitions. ‘It has to be real!’

With Dennett, entire expanses of our shallow information ecology have been laid low and we get, ‘It’s as real as it needs to be.’ He understands the power of the about heuristic, how ‘order out there’ thinking effects any number of communicative solutions—thus his rebuttal of Rorty. He understands, likewise, the power of the use heuristic, how ‘order around here’ thinking effects any number of communicative solutions—thus his rebuttal of Fodor. And most importantly, he understands the error of assuming the universal applicability of either. And so he concludes:

Now, once again, is the view I am defending here a sort of instrumentalism or a sort of realism? I think that the view itself is clearer than either of the labels, so I shall leave that question to anyone who stills find [sic] illumination in them. 51

What he doesn’t understand is how it all fits together—and how could he, when IST strands him with an intentional theorization of intentional cognition, a homuncular or black box understanding of our contemporary cognitive predicament? This is why “Real Patterns” both begins and ends with EOA, why we are no closer to understanding why such ambiguity obtains at all. How are we supposed to understand how his position falls between the ‘ontological dichotomy’ of realism and instrumentalism when we have no account of this dichotomy in the first place? Why the peculiar ‘bi-stable’ structure? Why the incompatibility between them? How can the same subject matter evince both? Why does each seem to inferentially beg the other?



The fact is, Dennett was entirely right to eschew outright realism or outright instrumentalism. This hunch of his, like so many others, was downright prescient. But the intentional stance only allows him to swap between perspectives. As a one-time adherent I know first-hand the theoretical versatility IST provides, but the problem is that explanation is what is required here.

HNT argues that simply interrogating the high-dimensional reality of belief, the degree to which it exists out there, covers over the very real system—the cognitive ecology—explaining the nature of belief talk. Once again, our ancestors needed some way of communicating their cognitive relations absent source-sensitive information regarding those relations. The homunculus is a black box precisely because it cannot source its own functions, merely track their consequences. The peculiar ‘here dim’ versus ‘there bright’ character of naive ontological or dogmatic cognition is a function of medial neglect, our gross insensitivity to the structure and dynamics of our cognitive capacities. Epistemic or instrumental cognition comes with learning from the untoward consequences of naive ontological cognition—the inevitable breakdowns. As humanity emerged from its ancestral, shallow information ecologies, the world was an ‘order there’ world simply because we lacked the ability to discriminate the impact of ‘around here.’ The discrimination of cognitive complexity begets intuitions of cognitive activity, undermines our default ‘out there’ intuitions. But since ‘order there’ is the default and ‘around here’ the cognitive achievement, we find ourselves in the peculiar position of apparently presuming ‘order there’ when making ‘around here’ claims. Since ‘order there’ intuitions remain effective when applied in their adaptive problem-ecologies, we find speculation splitting along ‘realist’ versus ‘anti-realist’ lines. Because no one has any inkling of any of this, we find ourselves flipping back and forth between these poles, taking versions of the same obvious steps to trod the same ancient circles. Every application is occluded, and so ‘transparent,’ as well as an activity possessing consequences.

Thus EOA… as well as an endless parade of philosophical chimera.

Isn’t this the real mystery of “Real Patterns,” the question of how and why philosophers find themselves trapped on this rickety old teeter-totter? “It is amusing to note,” Dennett writes, “that my analogizing beliefs to centers of gravity has been attacked from both sides of the ontological dichotomy, by philosophers who think it is simply obvious that centers of gravity are useful fictions, and by philosophers who think it is simply obvious that centers of gravity are perfectly real” (27). Well, perhaps not so amusing: Short of solving this mystery, Dennett has no way of finding the magic middle he seeks in this article—the middle of what? IST merely provides him with the means to recapitulate EOA and gesture to the possibility of some middle, a way to conceive all these issues that doesn’t deliver us to more of the same. His instincts, I think, were on the money, but his theoretical resources could not take him where he wanted to go, which is why, from the standpoint of his critics, he just seems to want to have it both ways.

On HNT we can see, quite clearly, I think, the problem with the question, ‘Are beliefs real?’ absent an adequate account of the relevant cognitive ecology. The bitter pill lies in understanding that the application conditions of ‘real’ have real limits. Dennett provides examples where those application conditions pretty clearly seem to obtain, then suggests more than argues that these examples are ‘parallel’ in all the structurally relevant respects to the situation with belief. But to distinguish his brand from Fodor’s ‘industrial strength’ realism, he has no choice but to ‘go instrumental’ in some respect, thus exposing the ambiguity falling out of IST.

It’s safe to say belief talk is real. It seems safe to say that beliefs are ‘real enough’ for the purposes of practical problem-solving—that is, for shallow (or source insensitive) cognitive ecologies. But it also seems safe to say that beliefs are not real at all when it comes to solving high-dimensional cognitive ecologies. The degree to which scientific inquiry is committed to finding the deepest (as opposed to the most expedient) account should be the degree to which it views belief talk as a component of real systems and views ‘belief’ as a source insensitive posit, a way to communicate and troubleshoot both oneself and one’s fellows.

This is crash space, so I appreciate the kinds of counter-intuitiveness involved in the view I’m advancing. But since tramping intuitive tracks has hitherto only served to entrench our controversies and confusions, we have good reason to choose explanatory power over intuitive appeal. We should expect that synthesis in the cognitive sciences will prove every bit as alienating to traditional presumption as it was in biology. There’s more than a little conceit involved in thinking we had any special inside track on our own nature. In fact, it would be a miracle if humanity had not found itself in some version of this very dilemma. Given only source insensitive means to troubleshoot cognition, to understand ourselves and each other, we were all but doomed to be stumped by the flood of source sensitive cognition unleashed by science. (In fact, given some degree of interstellar evolutionary convergence, I think one can wager that extraterrestrial intelligences will have suffered their own source insensitive versus source sensitive cognitive crash spaces. See my “On Alien Philosophy,” The Journal of Consciousness Studies (forthcoming).)

IST brings us to the deflationary limit of intentional philosophy. HNT offers a way to ratchet ourselves beyond, a form of critical eliminativism that can actually explain, as opposed to simply dispute, the traditional claims of intentionality. Dennett, of course, reserves his final criticism for eliminativism, perhaps because so many critics see it as the upshot of his interpretivism. He acknowledges the possibility “that neuroscience will eventually—perhaps even soon—discover a pattern that is so clearly superior to the noisy pattern of folk psychology that everyone will readily abandon the former for the latter” (50), but he thinks it unlikely:

For it is not enough for Churchland to suppose that in principle, neuroscientific levels of description will explain more of the variance, predict more of the “noise” that bedevils higher levels. This is, of course, bound to be true in the limit—if we descend all the way to the neurophysiological “bit map.” But as we have seen, the trade-off between ease of use and immunity from error for such a cumbersome system may make it profoundly unattractive. If the “pattern” is scarcely an improvement over the bit map, talk of eliminative materialism will fall on deaf ears—just as it does when radical eliminativists urge us to abandon our ontological commitments to tables and chairs. A truly general-purpose, robust system of pattern description more valuable than the intentional stance is not an impossibility, but anyone who wants to bet on it might care to talk to me about the odds they will take. 51

The elimination of theoretical intentional idiom requires, Dennett correctly points out, some other kind of idiom. Given the operationalization of intentional idioms across a wide variety of research contexts, they are not about to be abandoned anytime soon, and not at all if the eliminativist has nothing to offer in their stead. The challenge faced by the eliminativist, Dennett recognizes, is primarily abductive. If you want to race at psychological tracks, you either enter intentional horses or something that can run as fast or faster. He thinks this unlikely because he thinks no causally consilient (source sensitive) theory can hope to rival the combination of power and generality provided by the intentional stance. Why might this be? Here he alludes to ‘levels,’ suggesting that any causally consilient account would remain trapped at the microphysical level, and so remain hopelessly cumbersome. But elsewhere, as in his discussion of ‘creeping depersonalization’ in “Mechanism and Responsibility,” he readily acknowledges our ability to treat with one another as machines.

And again, we see how the limited resources of IST have backed him into a philosophical corner—and a traditional one at that. On HNT, his claim amounts to saying that no source sensitive theory can hope to supplant the bundle of source insensitive modes comprising intentional cognition. On HNT, in other words, we already find ourselves on the ‘level’ of intentional explanation, already find ourselves with a theory possessing the combination of power and generality required to eliminate a particle of intentional theorization: namely, the intentional stance. A way to depersonalize cognitive science.

Because IST primarily provides a versatile way to deploy and manage intentionality in theoretical contexts rather than any understanding of its nature, the disanalogy between ‘center of gravity’ and ‘beliefs’ remains invisible. In each case you seem to have an entity that resists any clear relation to the order which is there, and yet finds itself regularly and usefully employed in legitimate scientific contexts. Our brains are basically short-cut machines, so it should come as no surprise that we find heuristics everywhere, in perception as much as cognition (insofar as they are distinct). It also should come as no surprise that they comprise a bestiary, as with most all things biological. Dennett is comparing heuristic apples and oranges here. Centers of gravity are easily anchored to the order which is there because they economize otherwise available information. They can be sourced. Such is not the case with beliefs, belonging as they do to a system gerrymandering for want of information.

So what is the ultimate picture offered here? What could reality amount to outside our heuristic regimes? Hard to think, as it damn well should be. Our species’ history posed no evolutionary challenges requiring the ability to intuitively grasp the facts of our cognitive predicament. It gave us a lot of idiosyncratic tools to solve high impact practical problems, and as a result, Homo sapiens fell through the sieve in such a way as to be dumbfounded when it began experimenting in earnest with its interrogative capacities. We stumbled across a good number of tools along the way, to be certain, but we remain just as profoundly stumped about ourselves. On HNT, the ‘big picture view’ is crash space, in ways perhaps similar to the subatomic, a domain where our biologically parochial capacities actually interfere with our ability to understand. But it offers a way of understanding the structure and dynamics of intentional cognition in source sensitive terms, and in so doing, explains why crashing our ancestral cognitive modes was inevitable. Just consider the way ‘outside heuristic regimes’ suggests something ‘noumenal,’ some uber-reality lost at the instant of transcendental application. The degree to which this answer strikes you as natural or ‘obvious’ is the degree to which you have been conditioned to apply that very regime out of school. With HNT we can demand that those who want to stuff us into this or that intellectual Klein bottle define their application conditions and convince us this isn’t just more crash space mischief.

It’s trivial to say some information isn’t available, so why not leave well enough alone? Perhaps the time has come to abandon the old, granular dichotomies and speak in terms of dimensions of information available and cognitive capacities possessed. Imagine that.

Moving on.

Visions of the Semantic Apocalypse: A Critical Review of Yuval Noah Harari’s Homo Deus

by rsbakker


“Studying history aims to loosen the grip of the past,” Yuval Noah Harari writes. “It enables us to turn our heads this way and that, and to begin to notice possibilities that our ancestors could not imagine, or didn’t want us to imagine” (59). Thus does the bestselling author of Sapiens: A Brief History of Humankind rationalize his thoroughly historical approach to the question of our technological future in his fascinating follow-up, Homo Deus: A Brief History of Tomorrow. And so does he identify himself as a humanist, committed to freeing us from what Kant would have called, ‘our tutelary natures.’ Like Kant, Harari believes knowledge will set us free.

Although by the end of the book it becomes difficult to understand what ‘free’ might mean here.

As Harari himself admits, “once technology enables us to re-engineer human minds, Homo sapiens will disappear, human history will come to an end and a completely new process will begin, which people like you and me cannot comprehend” (46). Now if you’re interested in mapping the conceptual boundaries of comprehending the posthuman, I heartily recommend David Roden’s skeptical tour de force, Posthuman Life: Philosophy at the Edge of the Human. Homo Deus, on the other hand, is primarily a book chronicling the rise and fall of contemporary humanism against the backdrop of apparent ‘progress.’ The most glaring question, of course, is whether Harari’s academic humanism possesses the resources required to diagnose the problems posed by the collapse of popular humanism. This challenge—the problem of using obsolescent vocabularies to theorize, not only the obsolescence of those vocabularies, but the successor vocabularies to come—provides an instructive frame through which to understand the successes and failures of this ambitious and fascinating book.

How good is Homo Deus? Well, for years people have been asking me for a lay point of entry for the themes explored here on Three Pound Brain and in my novels, and I’ve always been at a loss. No longer. Anyone surfing for reviews of the book is certain to find individuals carping about Harari not possessing the expertise to comment on x or y, but these critics never get around to explaining how any human could master all the silos involved in such an issue (while remaining accessible to a general audience, no less). Such criticisms amount to advocating that no one dare interrogate what could be the greatest challenge to ever confront humanity. In addition to erudition, Harari has the courage to concede ugly possibilities, the sensitivity to grasp complexities (as well as the limits they pose), and the creativity to derive something communicable. Even though I think his residual humanism conceals the true profundity of the disaster awaiting us, he glimpses more than enough to alert millions of readers to the shape of the Semantic Apocalypse. People need to know human progress likely has a horizon, a limit, that doesn’t involve environmental catastrophe or creating some AI God.

The problem is far more insidious and retail than most yet realize.

The grand tale Harari tells is a vaguely Western Marxist one, wherein culture (following Lukacs) is seen as a primary enabler of relations of power, a fundamental component of the ‘social apriori.’ The primary narrative conceit of such approaches belongs to the ancient Greeks: “[T]he rise of humanism also contains the seeds of its downfall,” Harari writes. “While the attempt to upgrade humans into gods takes humanism to its logical conclusion, it simultaneously exposes humanism’s inherent flaws” (65). For all its power, humanism possesses intrinsic flaws, blindnesses and vulnerabilities, that will eventually lead it to ruin. In a sense, Harari is offering us a ‘big history’ version of negative dialectic, attempting to show how the internal logic of humanism runs afoul of the very power it enables.

But that logic is also the very logic animating Harari’s encyclopedic account. For all its syncretic innovations, Homo Deus uses the vocabularies of academic or theoretical humanism to chronicle the rise and fall of popular or practical humanism. In this sense, the difference between Harari’s approach to the problem of the future and my own could not be more pronounced. On my account, academic humanism, far from enjoying critical or analytical immunity, is best seen as a crumbling bastion of pre-scientific belief, the last gasp of traditional apologia, the cognitive enterprise most directly imperilled by the rising technological tide, while we can expect popular humanism to linger for some time to come (if not indefinitely).

Homo Deus, in fact, exemplifies the quandary presently confronting humanists such as Harari, how the ‘creeping delegitimization’ of their theoretical vocabularies is slowly robbing them of any credible discursive voice. Harari sees the problem, acknowledging that “[w]e won’t be able to grasp the full implication of novel technologies such as artificial intelligence if we don’t know what minds are” (107). But the fact remains that “science knows surprisingly little about minds and consciousness” (107). We presently have no consensus-commanding, natural account of thought and experience—in fact, we can’t even agree on how best to formulate semantic and phenomenal explananda.

Humanity as yet lacks any workable, thoroughly naturalistic, theory of meaning or experience. For Harari this means the bastion of academic humanism, though besieged, remains intact, at least enough for him to advance his visions of the future. Despite the perplexity and controversies occasioned by our traditional vocabularies, they remain the only game in town, the very foundation of countless cognitive activities. “[T]he whole edifice of modern politics and ethics is built upon subjective experiences,” Harari writes, “and few ethical dilemmas can be solved by referring strictly to brain activities” (116). Even though his posits lie nowhere in the natural world, they nevertheless remain subjective realities, the necessary condition of solving countless problems. “If any scientist wants to argue that subjective experiences are irrelevant,” Harari writes, “their challenge is to explain why torture or rape are wrong without reference to any subjective experience” (116).

This is the classic humanistic challenge posed to naturalistic accounts, of course, the demand that they discharge the specialized functions of intentional cognition the same way intentional cognition does. This demand amounts to little more than a canard, however, once we appreciate the heuristic nature of intentional cognition. The challenge intentional cognition poses to natural cognition is to explain, not replicate, its structure and dynamics. We clearly evolved our intentional cognitive capacities, after all, to solve problems natural cognition could not reliably solve. This combination of power, economy, and specificity is the very thing that a genuinely naturalistic theory of meaning (such as my own) must explain.


“… fiction might thereby become the most potent force on earth, surpassing even wayward asteroids and natural selection. Hence if we want to understand our future, cracking genomes and crunching numbers is hardly enough. We must decipher the fictions that give meaning to the world.”


So moving forward it is important to understand how his theoretical approach elides the very possibility of a genuinely post-intentional future. Because he has no natural theory of meaning, he has no choice but to take the theoretical adequacy of his intentional idioms for granted. But if his intentional idioms possess the resources he requires to theorize the future, they must somehow remain out of play; his discursive ‘subject position’ must possess some kind of immunity to the scientific tsunami climbing our horizons. His very choice of tools limits the radicality of the story he tells. No matter how profound, how encompassing, the transformational deluge, Harari must somehow remain dry upon his theoretical ark. And this, as we shall see, is what ultimately swamps his conclusions.

But if the Hard Problem exempts his theoretical brand of intentionality, one might ask why it doesn’t exempt all intentionality from scientific delegitimation. What makes the scientific knowledge of nature so tremendously disruptive to humanity is the fact that human nature is, when all is said and done, just more nature. Conceding general exceptionalism, the thesis that humans possess something miraculous distinguishing them from nature more generally, would undermine the very premise of his project.

Without any way out of this bind, Harari fudges, basically. He remains silent on his own intentional (even humanistic) theoretical commitments, while attacking exceptionalism by expanding the franchise of meaning and consciousness to include animals: whatever intentional phenomena consist in, they are ultimately natural to the extent that animals are natural.

But now the problem has shifted. If humans dwell on a continuum with nature more generally, then what explains the Anthropocene, our boggling dominion of the earth? Why do humans stand so drastically apart from nature? The capacity that most distinguishes humans from their nonhuman kin, Harari claims (in line with contemporary theories), is the capacity to cooperate. He writes:

“the crucial factor in our conquest of the world was our ability to connect many humans to one another. Humans nowadays completely dominate the planet not because the individual human is far more nimble-fingered than the individual chimp or wolf, but because Homo sapiens is the only species on earth capable of cooperating flexibly in large numbers.” 131

He proposes a ‘shared fictions’ theory of mass social coordination (unfortunately, he doesn’t engage research on groupishness, which would have provided him with some useful, naturalistic tools, I think). He posits an intermediate level of existence between the objective and subjective, the ‘intersubjective,’ consisting of our shared beliefs in imaginary orders, which serve to distribute authority and organize our societies. “Sapiens rule the world,” he writes, “because only they can weave an intersubjective web of meaning; a web of laws, forces, entities and places that exist purely in their common imagination” (149). This ‘intersubjective web’ provides him with the theoretical level of description he thinks crucial to understanding our troubled cultural future.

He continues:

“During the twenty-first century the border between history and biology is likely to blur not because we will discover biological explanations for historical events, but rather because ideological fictions will rewrite DNA strands; political and economic interests will redesign the climate; and the geography of mountains and rivers will give way to cyberspace. As human fictions are translated into genetic and electronic codes, the intersubjective reality will swallow up the objective reality and biology will merge with history. In the twenty-first century fiction might thereby become the most potent force on earth, surpassing even wayward asteroids and natural selection. Hence if we want to understand our future, cracking genomes and crunching numbers is hardly enough. We must decipher the fictions that give meaning to the world.” 151

The way Harari sees it, ideology, far from being relegated to the prescientific theoretical midden, is set to become all-powerful, a consumer of worlds. This launches his extensive intellectual history of humanity, beginning with the algorithmic advantages afforded by numeracy, literacy, and currency, how these “broke the data-processing limitations of the human brain” (158). Where our hunter-gathering ancestors could at best coordinate small groups, “[w]riting and money made it possible to start collecting taxes from hundreds of thousands of people, to organise complex bureaucracies and to establish vast kingdoms” (158).

Harari then turns to the question of how science fits in with this view of fictions, the nature of the ‘odd couple,’ as he puts it:

“Modern science certainly changed the rules of the game, but it did not simply replace myths with facts. Myths continue to dominate humankind. Science only makes these myths stronger. Instead of destroying the intersubjective reality, science will enable it to control the objective and subjective realities more completely than ever before.” 179

Science is what renders objective reality compliant to human desire. Storytelling is what renders individual human desires compliant to collective human expectations, which is to say, intersubjective reality. Harari understands that the relationship between science and religious ideology is not one of straightforward antagonism: “science always needs religious assistance in order to create viable human institutions,” he writes. “Scientists study how the world functions, but there is no scientific method for determining how humans ought to behave” (188). Though science has plenty of resources for answering means-type questions—what you ought to do to lose weight, for instance—it lacks the resources to fix the ends that rationalize those means. Science, Harari argues, requires religion to the extent that it cannot ground the all-important fictions enabling human cooperation (197).

Insofar as science is a cooperative, human enterprise, it can only destroy one form of meaning on the back of some other meaning. By revealing the anthropomorphism underwriting our traditional, religious accounts of the natural world, science essentially ‘killed God’—which is to say, removed any divine constraint on our actions or aspirations. “The cosmic plan gave meaning to human life, but also restricted human power” (199). Like stage-actors, we had a plan, but our role was fixed. Unfixing that role, killing God, made meaning into something each of us has to find for ourselves. Harari writes:

“Since there is no script, and since humans fulfill no role in any great drama, terrible things might befall us and no power will come to save us, or give meaning to our suffering. There won’t be a happy ending or a bad ending, or any ending at all. Things just happen, one after the other. The modern world does not believe in purpose, only in cause. If modernity has a motto, it is ‘shit happens.’” 200

The absence of a script, however, means that anything goes; we can play any role we want to. With the modern freedom from cosmic constraint comes postmodern anomie.

“The modern deal thus offers humans an enormous temptation, coupled with a colossal threat. Omnipotence is in front of us, almost within our reach, but below us yawns the abyss of complete nothingness. On the practical level, modern life consists of a constant pursuit of power within a universe devoid of meaning.” 201

Or to give it the Adornian spin it receives here on Three Pound Brain: the madness of a society that has rendered means, knowledge and capital, its primary end. Thus the modern obsession with the accumulation of the power to accumulate. And thus the Faustian nature of our present predicament (though Harari, curiously, never references Faust), the fact that “[w]e think we are smart enough to enjoy the full benefits of the modern deal without paying the price” (201). Even though physical resources such as material and energy are finite, no such limit pertains to knowledge. This is why “[t]he greatest scientific discovery was the discovery of ignorance” (212): it spurred the development of systematic inquiry, and therefore the accumulation of knowledge, and therefore the accumulation of power, which, Harari argues, cuts against objective or cosmic meaning. The question is simply whether we can hope to sustain this process—defer payment—indefinitely.

“Modernity is a deal,” he writes, and for all its apparent complexities, it is very straightforward: “The entire contract can be summarised in a single phrase: humans agree to give up meaning in exchange for power” (199). For me the best way of thinking this process of exchanging meaning for power is in terms of what Weber called disenchantment: the very science that dispels our anthropomorphic fantasy worlds is the science that delivers technological power over the real world. This real world power is what drives traditional delegitimation: even believers acknowledge the vast bulk of the scientific worldview, as do the courts and (ideally at least) all governing institutions outside religion. Science is a recursive institutional ratchet (‘self-correcting’), leveraging the capacity to leverage ever more capacity. Now, after centuries of sheltering behind walls of complexity, human nature finds itself at the intersection of multiple domains of scientific inquiry. Since we’re nothing special, just more nature, we should expect our burgeoning technological power over ourselves to increasingly delegitimate traditional discourses.

Humanism, on this account, amounts to an adaptation to the ways science transformed our ancestral ‘neglect structure,’ the landscape of ‘unknown unknowns’ confronting our prehistorical forebears. Our social instrumentalization of natural environments—our inclination to anthropomorphize the cosmos—is the product of our ancestral inability to intuit the actual nature of those environments. Information beyond the pale of human access makes no difference to human cognition. Cosmic meaning requires that the cosmos remain a black box: the more transparent science rendered that box, the more our rationales retreated to the black box of ourselves. The subjectivization of authority turns on how intentional cognition (our capacity to cognize authority) requires the absence of natural accounts to discharge ancestral functions. Humanism isn’t so much a grand revolution in thought as the result of the human remaining the last scientifically inscrutable domain standing. The rationalizations had to land somewhere. Since human meaning likewise requires that the human remain a black box, the vast industrial research enterprise presently dedicated to solving our nature does not bode well.

But this approach, economical as it is, isn’t available to Harari since he needs some enchantment to get his theoretical apparatus off the ground. As the necessary condition for human cooperation, meaning has to be efficacious. The ‘Humanist Revolution,’ as Harari sees it, consists in the migration of cooperative efficacy (authority) from the cosmic to the human. “This is the primary commandment humanism has given us: create meaning for a meaningless world” (221). Rather than scripture, human experience becomes the metric for what is right or wrong, and the universe, once the canvas of the priest, is conceded to the scientist. Harari writes:

“As the source of meaning and authority was relocated from the sky to human feelings, the nature of the entire cosmos changed. The exterior universe—hitherto teeming with gods, muses, fairies and ghouls—became empty space. The interior world—hitherto an insignificant enclave of crude passions—became deep and rich beyond measure” 234

This re-sourcing of meaning, Harari insists, is true whether or not one still believes in some omnipotent God, insofar as all the salient anchors of that belief lie within the believer, rather than elsewhere. God may still be ‘cosmic,’ but he now dwells beyond the canvas of nature, somewhere in the occluded frame, a place where only religious experience can access Him.

Man becomes ‘man the meaning maker,’ the trope that now utterly dominates contemporary culture:

“Exactly the same lesson is learned by Captain Kirk and Captain Jean-Luc Picard as they travel the galaxy in the starship Enterprise, by Huckleberry Finn and Jim as they sail down the Mississippi, by Wyatt and Billy as they ride their Harley-Davidsons in Easy Rider, and by countless other characters in myriad other road movies who leave their home town in Pennsylvania (or perhaps New South Wales), travel in an old convertible (or perhaps a bus), pass through various life-changing experiences, get in touch with themselves, talk about their feelings, and eventually reach San Francisco (or perhaps Alice Springs) as better and wiser individuals.” 241

Not only is experience the new scripture, it is a scripture that is being continually revised and rewritten, a meaning that arises out of the process of lived life (yet somehow always managing to conserve the status quo). In story after story, the protagonist must find some ‘individual’ way to derive their own personal meaning out of an apparently meaningless world. This is a primary philosophical motivation behind The Second Apocalypse, the reason why I think epic fantasy provides such an ideal narrative vehicle for the critique of modernity and meaning. Fantasy worlds are fantastic, especially fictional, because they assert the objectivity of what we now (implicitly or explicitly) acknowledge to be anthropomorphic projections. The idea has always been to invert the modernist paradigm Harari sketches above, to follow a meaningless character through a meaningful world, using Kellhus to recapitulate the very dilemma Harari sees confronting us now:

“What then, will happen once we realize that customers and voters never make free choices, and once we have the technology to calculate, design, or outsmart their feelings? If the whole universe is pegged to the human experience, what will happen once the human experience becomes just another designable product, no different in essence from any other item in the supermarket?” 277

And so Harari segues to the future and the question of the ultimate fate of human meaning; this is where I find his steadfast refusal to entertain humanistic conceit most impressive. One need not ponder ‘designer experiences’ for long, I think, to get a sense of the fundamental rupture with the past they represent. These once speculative issues are becoming ongoing practical concerns: “These are not just hypotheses or philosophical speculations,” simply because ‘algorithmic man’ is becoming a technological reality (284). Harari provides a whirlwind tour of unnerving experiments clearly implying trouble for our intuitions, a discussion that transitions into a consideration of the ways we can already mechanically attenuate our experiences. A good number of the examples he adduces have been considered here, all of them underscoring the same, inescapable moral: “Free will exists in the imaginary stories we humans have invented” (283). No matter what your philosophical persuasion, our continuity with the natural world is an established scientific fact. Humanity is not exempt from the laws of nature. If humanity is not exempt from the laws of nature, then the human mastery of nature amounts to the human mastery of humanity.

He turns, at this point, to Gazzaniga’s research showing the confabulatory nature of human rationalization (via split brain patients), and Daniel Kahneman’s account of ‘duration neglect’—another favourite of mine. He offers an expanded version of Kahneman’s distinction between the ‘experiencing self,’ that part of us that actually undergoes events, and the ‘narrating self,’ the part of us that communicates—derives meaning from—these experiences, essentially using the dichotomy as an emblem for the dual process models of cognition presently dominating cognitive psychological research. He writes:

“most people identify with their narrating self. When they say, ‘I,’ they mean the story in their head, not the stream of experiences they undergo. We identify with the inner system that takes the crazy chaos of life and spins out of it seemingly logical and consistent yarns. It doesn’t matter that the plot is filled with lies and lacunas, and that it is rewritten again and again, so that today’s story flatly contradicts yesterday’s; the important thing is that we always retain the feeling that we have a single unchanging identity from birth to death (and perhaps from even beyond the grave). This gives rise to the questionable liberal belief that I am an individual, and that I possess a consistent and clear inner voice, which provides meaning for the entire universe.” 299

Humanism, Harari argues, turns on our capacity for self-deception, the ability to commit to our shared fictions unto madness, if need be. He writes:

“Medieval crusaders believed that God and heaven provided their lives with meaning. Modern liberals believe that individual free choices provide life with meaning. They are all equally delusional.” 305

Social self-deception is our birthright, the ability to believe what we need to believe to secure our interests. This is why the science, though shaking humanistic theory to the core, has done so little to interfere with the practices rationalized by that theory. As history shows, we are quite capable of shovelling millions into the abattoir of social fantasy. This delivers Harari to yet another big theme explored both here and in Neuropath: the problems raised by the technological concretization of these scientific findings. As Harari puts it:

“However, once heretical scientific insights are translated into everyday technology, routine activities and economic structures, it will become increasingly difficult to sustain this double-game, and we—or our heirs—will probably require a brand new package of religious beliefs and political institutions. At the beginning of the third millennium, liberalism [the dominant variant of humanism] is threatened not by the philosophical idea that there are no free individuals but rather by concrete technologies. We are about to face a flood of extremely useful devices, tools and structures that make no allowance for the free will of individual humans. Can democracy, the free market and human rights survive this flood?” 305-6


The first problem, as Harari sees it, is one of diminishing returns. Humanism didn’t become the dominant world ideology because it was true; it overran the collective imagination of humanity because it enabled. Humanistic values, Harari explains, afforded our recent ancestors a wide variety of social utilities, efficiencies turning on the technologies of the day. Those technologies, it turns out, require human intelligence and the consciousness that comes with it. To depart from Harari, they are what David Krakauer calls ‘complementary technologies,’ tools that extend human capacity, as opposed to ‘competitive technologies,’ which render human capacities redundant.

Making humans redundant, of course, means making experience redundant, something which portends the systematic devaluation of human experience, or the collapse of humanism. Harari calls this process the ‘Great Decoupling’:

“Over the last decades there has been an immense advance in computer intelligence, but there has been exactly zero advance in computer consciousness. As far as we know, computers in 2016 are no more conscious than their prototypes in the 1950s. However, we are on the brink of a momentous revolution. Humans are in danger of losing their value, because intelligence is decoupling from consciousness.” 311

He’s quick to acknowledge all the problems yet confronting AI researchers, insisting that the trend unambiguously points toward ever expanding capacities. As he writes, “these technical problems—however difficult—need only be solved once” (317). The ratchet never stops clicking.

He’s also quick to block the assumption that humans are somehow exceptional: “The idea that humans will always have a unique ability beyond the reach of non-conscious algorithms is just wishful thinking” (319). He provides the (I think) terrifying example of David Cope, the University of California at Santa Cruz musicologist who has developed algorithms whose compositions strike listeners as more authentically human than compositions by humans such as J.S. Bach.

The second problem is the challenge of what (to once again depart from Harari) Neil Lawrence calls ‘System Zero,’ the question of what happens when our machines begin to know us better than we know ourselves. As Harari notes, this is already the case: “The shifting of authority from humans to algorithms is happening all around us, not as a result of some momentous governmental decision, but due to a flood of mundane choices” (345). Facebook can now guess your preferences better than your friends, your family, your spouse—and in some instances better than you yourself! He warns the day is coming when political candidates can receive real-time feedback via social media, when people can hear everything said about them always and everywhere. Projecting this trend leads him to envision something very close to Integration, where we become so embalmed in our information environments that “[d]isconnection will mean death” (344).

He writes:

“The individual will not be crushed by Big Brother; it will disintegrate from within. Today corporations and governments pay homage to my individuality and promise to provide medicine, education and entertainment customized to my unique needs and wishes. But in order to do so, corporations and governments first need to break me up into biochemical subsystems, monitor these subsystems with ubiquitous sensors and decipher their workings with powerful algorithms. In the process, the individual will transpire to be nothing but a religious fantasy.” 345

This is my own suspicion, and I think the process of subpersonalization—the neuroscientifically informed decomposition of consumers into economically relevant behaviours—is well underway. But I think it’s important to realize that as data accumulates, and researchers and their AIs find more and more ways to instrumentalize those data sets, what we’re really talking about are proliferating heuristic hacks (that happen to turn on neuroscientific knowledge). They need decipher us only so far as we comply. Also, the potential noise generated by a plethora of competing subpersonal communications seems to constitute an important structural wrinkle. It could be that the point most targeted by subpersonal hacking will at least preserve the old borders of the ‘self,’ fantasy that it was. Post-intentional ‘freedom’ could come to reside in the noise generated by commercial competition.

The third problem he sees for humanism lies in the almost certainly unequal distribution of the dividends of technology, a trope so well worn in narrative that we scarce need consider it here. It follows that liberal humanism, as an ideology committed to the equal value of all individuals, has scant hope of squaring the interests of the redundant masses against those of a technologically enhanced superhuman elite.


… this isn’t any mere cultural upheaval or social revolution, this is an unprecedented transformation in the history of life on this planet, the point when the evolutionary platform of behaviour, morphology, becomes the product of behaviour.


Under pretty much any plausible scenario you can imagine, the shared fiction of popular humanism is doomed. But as Harari has already argued, shared fictions are the necessary condition of social coordination. If humanism collapses, some kind of shared fiction has to take its place. And alas, this is where my shared journey with Harari ends. From this point forward, I think his analysis is largely an artifact of his own, incipient humanism.

Harari uses the metaphor of ‘vacuum,’ implying that humans cannot but generate some kind of collective narrative, some way of making their lives not simply meaningful to themselves, but more importantly, meaningful to one another. It is the mass resemblance of our narrative selves, remember, that makes our mass cooperation possible. [This is what misleads him, the assumption that ‘mass cooperation’ need be human at all by this point.] So he goes on to consider what new fiction might arise to fill the void left by humanism. The first alternative is ‘technohumanism’ (transhumanism, basically), which is bent on emancipating humanity from the authority of nature much as humanism was bent on emancipating humanity from the authority of tradition. Where humanists are free to think anything in their quest to actualize their desires, technohumanists are free to be anything in their quest to actualize their desires.

The problem is that the freedom to be anything amounts to the freedom to reengineer desire. So where objective meaning, following one’s god (socialization), gave way to subjective meaning, following one’s heart (socialization), it remains entirely unclear what the technohumanist hopes to follow or to actualize. As soon as we gain power over our cognitive being the question becomes, ‘Follow which heart?’

Or as Harari puts it,

“Techno-humanism faces an impossible dilemma here. It considers human will the most important thing in the universe, hence it pushes humankind to develop technologies that can control and redesign our will. After all, it’s tempting to gain control over the most important thing in the world. Yet once we have such control, techno-humanism will not know what to do with it, because the sacred human will would become just another designer product.” 366

Which is to say, something arbitrary. Where humanism aims ‘to loosen the grip of the past,’ transhumanism aims to loosen the grip of biology. We really see the limits of Harari’s interpretative approach here, I think, as well as why he falls short of a definitive account of the Semantic Apocalypse. The reason that ‘following your heart’ can substitute for ‘following the god’ is that they amount to the very same claim, ‘trust your socialization,’ which is to say, your pre-existing dispositions to behave in certain ways in certain contexts. The problem posed by the kind of enhancement extolled by transhumanists isn’t that shared fictions must be ‘sacred’ to be binding, but that something neglected must be shared. Synchronization requires trust, the ability to simultaneously neglect others (and thus dedicate behaviour to collective problem solving) and yet predict their behaviour nonetheless. Absent this shared background, trust is impossible, and therefore synchronization is impossible. Cohesive, collective action, in other words, turns on a vast amount of evolutionary and educational stage-setting, common cognitive systems stamped with common forms of training, all of it ancestrally impervious to direct manipulation. Insofar as transhumanism promises to place the material basis of individual desire within the compass of individual desire, it promises to throw our shared background to the winds of whimsy. Transhumanism is predicated on the ever-deepening distortion of our ancestral ecologies of meaning.

Harari reads transhumanism as a reductio of humanism, the point where the religion of individual empowerment unravels the very agency it purports to empower. Since he remains, at least residually, a humanist, he places ideology—what he calls the ‘intersubjective’ level of reality—at the foundation of his analysis. It is the mover and shaker here, what Harari believes will stamp objective reality and subjective reality both in its own image.

And the fact of the matter is, he really has no choice, given he has no other way of generalizing over the processes underwriting the growing Whirlwind that has us in its grasp. So when he turns to digitalism (or what he calls ‘Dataism’), it appears to him to be the last option standing:

“What might replace desires and experiences as the source of all meaning and authority? As of 2016, only one candidate is sitting in history’s reception room waiting for the job interview. This candidate is information.” 366

Meaning has to be found somewhere. Why? Because synchronization requires trust requires shared commitments to shared fictions, stories expressing those values we hold in common. As we have seen, science cannot determine ends, only means to those ends. Something has to fix our collective behaviour, and if science cannot, we will perforce turn to some kind of religion…

But what if we were to automate collective behaviour? There’s a second candidate that Harari overlooks, one which I think is far, far more obvious than digitalism (which remains, for all its notoriety, an intellectual position—and a confused one at that, insofar as it has no workable theory of meaning/cognition). What will replace humanism? Atavism… Fantasy. For all the care Harari places in his analyses, he overlooks how investing AI with ever increasing social decision-making power simultaneously divests humans of that power, thus progressively relieving us of the need for shared values. The more we trust to AI, the less trust we require of one another. We need only have faith in the efficacy of our technical (and very objective) intermediaries; the system synchronizes us automatically in ways we need not bother knowing. Ideology ceases to be a condition of collective action. We need not have any stories regarding our automated social ecologies whatsoever, so long as we mind the diminishing explicit constraints the system requires of us.

Outside our dwindling observances, we are free to pursue whatever story we want. Screw our neighbours. And what stories will those be? Well, the kinds of stories we evolved to tell, which is to say, the kinds of stories our ancestors told to each other. Fantastic stories… such as those told by George R. R. Martin, Donald Trump, myself, or the Islamic State. Radical changes in hardware require radical changes in software, unless one has some kind of emulator in place. You have to be sensible to social change to ideologically adapt to it. “Islamic fundamentalists may repeat the mantra that ‘Islam is the answer,’” Harari writes, “but religions that lose touch with the technological realities of the day lose their ability even to understand the questions being asked” (269). But why should incomprehension or any kind of irrationality disqualify the appeal of Islam, if the basis of the appeal primarily lies in some optimization of our intentional cognitive capacities?

Humans are shallow information consumers by dint of evolution, and deep information consumers by dint of modern necessity. As that necessity recedes, it stands to reason our patterns of consumption will recede with it, that we will turn away from the malaise of perpetual crash space and find solace in ever more sophisticated simulations of worlds designed to appease our ancestral inclinations. As Harari himself notes, “Sapiens evolved in the African savannah tens of thousands of years ago, and their algorithms are just not built to handle twenty-first century data flows” (388). And here we come to the key to understanding the profundity, and perhaps even the inevitability of the Semantic Apocalypse: intentional cognition turns on cues which turn on ecological invariants that technology is even now rendering plastic. The issue here, in other words, isn’t so much a matter of ideological obsolescence as cognitive habitat destruction, the total rewiring of the neglected background upon which intentional cognition depends.

The thing people considering the future impact of technology need to pause and consider is that this isn’t any mere cultural upheaval or social revolution, this is an unprecedented transformation in the history of life on this planet, the point when the evolutionary platform of behaviour, morphology, becomes the product of behaviour. Suddenly a system that leveraged cognitive capacity via natural selection will be leveraging that capacity via neural selection—behaviourally. A change so fundamental pretty clearly spells the end of all ancestral ecologies, including the cognitive. Humanism is ‘disintegrating from within’ because intentional cognition itself is beginning to founder. The tsunami of information thundering above the shores of humanism is all deep information, information regarding what we evolved to ignore—and therefore trust. Small wonder, then, that it scuttles intentional problem-solving, generates discursive crash spaces that only philosophers once tripped into.

The more the mechanisms behind learning impediments are laid bare, the less the teacher can attribute performance to character, the more they are forced to adopt a clinical attitude. What happens when every impediment to learning is laid bare? Unprecedented causal information is flooding our institutions, removing more and more behaviour from the domain of character. Why? Because character judgments always presume individuals could have done otherwise, and presuming individuals could have done otherwise presumes that we neglect the actual sources of behaviour. Harari brushes this thought on a handful of occasions, writing, most notably:

“In the eighteenth century Homo sapiens was like a mysterious black box, whose inner workings were beyond our grasp. Hence when scholars asked why a man drew a knife and stabbed another to death, an acceptable answer said: ‘Because he chose to…’” 282

But he fails to see the systematic nature of the neglect involved, and therefore the explanatory power it affords. Our ignorance of ourselves, in other words, determines not simply the applicability, but the solvency of intentional cognition as well. Intentional cognition allowed our ancestors to navigate opaque or ‘black box’ social ecologies. The role causal information plays in triggering intuitions of exemption is tuned to the efficacy of this system overall. By and large our ancestors exempted those individuals in those circumstances that best served their tribe as a whole. However haphazardly, moral intuitions involving causality served some kind of ancestral optimization. So when actionable causal information regarding our behaviour becomes available, we have no choice but to exempt those behaviours, no matter what kind of large scale distortions result. Why? Because it is the only moral thing to do.

Welcome to crash space. We know this is crash space as opposed to, say, scientifically informed enlightenment (the way it generally feels) simply by asking what happens when actionable causal information regarding our every behaviour becomes available. Will moral judgment become entirely inapplicable? For me, the free will debate has always been a paradigmatic philosophical crash space, a place where some capacity always seems to apply, yet consistently fails to deliver solutions because it does not. We evolved to communicate behaviour absent information regarding the biological sources of behaviour: is it any wonder that our cause-neglecting workarounds cannot square with the causes they work around? The growing institutional challenges arising out of the medicalization of character turn on the same cognitive short-circuit. How can someone who has no choice be held responsible?

Even as we drain the ignorance intentional cognition requires from our cognitive ecologies, we are flooding them with AI, what promises to be a deluge of algorithms trained to cue intentional cognition, impersonate persons, in effect. The evidence is unequivocal: our intentional cognitive capacities are easily cued out of school—in a sense, this is the cornerstone of their power, the ability to assume so much on the basis of so little information. But in ecologies designed to exploit intentional intuitions, this power and versatility becomes a tremendous liability. Even now litigators and lawmakers find themselves beset with the question of how intentional cognition should solve for environments flooded with artifacts designed to cue human intentional cognition to better extract various commercial utilities. The problems of the philosophers dwell in ivory towers no more.

First we cloud the water, then we lay the bait—we are doing this to ourselves, after all. We are taking our first stumbling steps into what is becoming a global social crash space. Intentional cognition is heuristic cognition. Since heuristic cognition turns on shallow information cues, we have good reason to assume that our basic means of understanding ourselves and our projects will be incompatible with deep information accounts. The more we learn about cognition, the more apparent this becomes, the more our intentional modes of problem-solving will break down. I’m not sure there’s anything much to be done at this point save getting the word out, empowering some critical mass of people with a notion of what’s going on around them. This is what Harari does to a remarkable extent with Homo Deus, something for which we may all have cause to thank him.

Science is steadily revealing the very sources intentional cognition evolved to neglect. Technology is exploiting these revelations, busily engineering emulators to pander to our desires, allowing us to shelter more and more skin from the risk and toil of natural and social reality. Designer experience is designer meaning. Thus the likely irony: the end of meaning will appear to be its greatest blooming, the consumer curled in the womb of institutional matrons, dreaming endless fantasies, living lives of spellbound delight, exploring worlds designed to indulge ancestral inclinations.

To make us weep and laugh for meaning, never knowing whether we are together or alone.

Artificial Intelligence as Socio-Cognitive Pollution*

by rsbakker

Metropolis 1


Eric Schwitzgebel, over at the always excellent Splintered Minds, has been debating the question of how robots—or AI’s more generally—can be squared with our moral sensibilities. In “Our Moral Duties to Artificial Intelligences” he poses a very simple and yet surprisingly difficult question: “Suppose that we someday create artificial beings similar to us in their conscious experience, in their intelligence, in their range of emotions. What moral duties would we have to them?”

He then lists numerous considerations that could possibly attenuate the degree of obligation we take on when we construct sentient, sapient machine intelligences. Prima facie, it seems obvious that our moral obligation to our machines should mirror our obligations to one another to the degree to which they resemble us. But Eric provides a number of reasons why we might think our obligation to be less. For one, humans clearly rank their obligations to one another. If our obligation to our children is greater than that to a stranger, then perhaps our obligation to human strangers should be greater than that to a robot stranger.

The idea that interests Eric the most is the possible paternal obligation of a creator. As he writes:

“Since we created them, and since we have godlike control over them (either controlling their environments, their psychological parameters, or both), we have a special duty to ensure their well-being, which exceeds the duty we would have to an arbitrary human stranger of equal cognitive and emotional capacity. If I create an Adam and Eve, I should put them in an Eden, protect them from unnecessary dangers, ensure that they flourish.”

We have a duty not to foist the same problem of theodicy on our creations that we ourselves suffer! (Eric and I have a short story in Nature on this very issue).

Eric, of course, is sensitive to the many problems such a relationship poses, and he touches on what are very live debates surrounding the way AIs complicate the legal landscape. So as Ryan Calo argues, for instance, the primary problem lies in the way our hardwired ways of understanding each other run afoul of the machinic nature of our tools, no matter how intelligent. Apparently AI crime is already a possibility. If it makes no sense to assign responsibility to the AI—if we have no corresponding obligation to punish them—then who takes the rap? The creators? In the linked interview, at least, Calo is quick to point out the difficulties here, the fact that this isn’t simply a matter of expanding the role of existing legal tools (such as that of ‘negligence’ in the age of the first train accidents), but of creating new ones, perhaps generating whole new ontological categories that somehow straddle the agent/machine divide.

But where Calo is interested in the issue of what AIs do to people, in particular how their proliferation frustrates the straightforward assignation of legal responsibility, Eric is interested in what people do to AIs, the kinds of things we do and do not owe to our creations. Calo, of course, is interested in how to incorporate new technologies into our existing legal frameworks. Since legal reasoning is primarily analogistic reasoning, precedence underwrites all legal decision making. So for Calo, the problem is bound to be more one of adapting existing legal tools than constituting new ones (though he certainly recognizes this dimension). How do we accommodate AIs within our existing set of legal tools? Eric, of course, is more interested in the question of how we might accommodate AGIs within our existing set of moral tools. To the extent that we expect our legal tools to render outcomes consonant with our moral sensibilities, there is a sense in which Eric is asking the more basic question. But the two questions, I hope to show, actually bear some striking—and troubling—similarities.

The question of fundamental obligations, of course, is the question of rights. In his follow-up piece, “Two Arguments for AI (or Robot) Rights: The No-Relevant-Difference Argument and the Simulation Argument,” Eric Schwitzgebel accordingly turns to the question of whether AIs possess any rights at all.

Since the Simulation Argument requires accepting that we ourselves are simulations—AI’s—we can exclude it here, I think (as Eric himself does, more or less), and stick with the No-Relevant-Difference Argument. This argument presumes that human-like cognitive and experiential properties automatically confer human-like moral properties on AIs, placing the onus on the rights denier to “to find a relevant difference which grounds the denial of rights.” As in the legal case, the moral reasoning here is analogistic: the more AI’s resemble us, the more of our rights they should possess. After considering several possible relevant differences, Eric concludes “that at least some artificial intelligences, if they have human-like experience, cognition, and emotion, would have at least some rights, or deserve at least some moral consideration.” This is the case, he suggests, whether one’s theoretical sympathies run to the consequentialist or the deontological end of the ethical spectrum. So far as AI’s possess the capacity for happiness, a consequentialist should be interested in maximizing that happiness. So far as AI’s are capable of reasoning, then a deontologist should consider them rational beings, deserving the respect due all rational beings.

So some AIs merit some rights to the degree to which they resemble humans. If you think about it, this claim resounds with intuitive obviousness. Are we going to deny rights to beings that think as subtly and feel as deeply as ourselves?

What I want to show is how this question, despite its formidable intuitive appeal, misdiagnoses the nature of the dilemma that AI presents. Posing the question of whether AI should possess rights, I want to suggest, is premature to the extent it presumes human moral cognition actually can adapt to the proliferation of AI. I don’t think it can. In fact, I think attempts to integrate AI into human moral cognition simply demonstrate the dependence of human moral cognition on what might be called shallow information environments. As the heuristic product of various ancestral shallow information ecologies, human moral cognition–or human intentional cognition more generally–simply does not possess the functional wherewithal to reliably solve in what might be called deep information environments.

Metropolis 2

Let’s begin with what might seem a strange question: Why should analogy play such an important role in our attempts to accommodate AI’s within the ambit of human legal and moral problem solving? By the same token, why should disanalogy prove such a powerful way to argue the inapplicability of different moral or legal categories?

The obvious answer, I think anyway, has to do with the relation between our cognitive tools and our cognitive problems. If you’ve solved a particular problem using a particular tool in the past, it stands to reason that, all things being equal, the same tool should enable the solution of any new problem possessing a similar enough structure to the original problem. Screw problems require screwdriver solutions, so perhaps screw-like problems require screwdriver-like solutions. This reliance on analogy actually provides us a different, and as I hope to show, more nuanced way to pose the potential problems of AI.  We can even map several different possibilities in the crude terms of our tool metaphor. It could be, for instance, we simply don’t possess the tools we need, that the problem resembles nothing our species has encountered before. It could be AI resembles a screw-like problem, but can only confound screwdriver-like solutions. It could be that AI requires we use a hammer and a screwdriver, two incompatible tools, simultaneously!

The fact is AI is something biologically unprecedented, a source of potential problems unlike any homo sapiens has ever encountered. We have no  reason to suppose a priori that our tools are up to the task–particularly since we know so little about the tools or the task! Novelty. Novelty is why the development of AI poses as much a challenge for legal problem-solving as it does for moral problem-solving: not only does AI constitute a never-ending source of novel problems, familiar information structured in unfamiliar ways, it also promises to be a never-ending source of unprecedented information.

The challenges posed by the former are dizzying, especially when one considers the possibilities of AI mediated relationships. The componential nature of the technology means that new forms can always be created. AIs confront us with a combinatorial mill of possibilities, a never-ending series of legal and moral problems requiring further analogical attunement. The question here is whether our legal and moral systems possess the tools they require to cope with what amounts to an open-ended, ever-complicating task.

Call this the Overload Problem: the problem of somehow resolving a proliferation of unprecedented cases. Since we have good reason to presume that our institutional and/or psychological capacity to assimilate new problems to existing tool sets (and vice versa) possesses limitations, the possibility of change accelerating beyond those capacities to cope is a very real one.

But the challenges posed by the latter, the problem of assimilating unprecedented information, could very well prove insuperable. Think about it: intentional cognition solves problems by neglecting certain kinds of causal information. Causal cognition, not surprisingly, finds intentional cognition inscrutable (thus the interminable parade of ontic and ontological pineal glands trammelling cognitive science). And intentional cognition, not surprisingly, is jammed/attenuated by causal information (thus different intellectual ‘unjamming’ cottage industries like compatibilism).

Intentional cognition is pretty clearly an adaptive artifact of what might be called shallow information environments. The idioms of personhood leverage innumerable solutions absent any explicit high-dimensional causal information. We solve people and lawnmowers in radically different ways. Not only do we understand the actions of our fellows lacking any detailed causal information regarding their actions, we understand our responses in the same way. Moral cognition, as a subspecies of intentional cognition, is an artifact of shallow information problem ecologies, a suite of tools adapted to solving certain kinds of problems despite neglecting (for obvious reasons) information regarding what is actually going on. Selectively attuning to one another as persons served our ‘benighted’ ancestors quite well. So what happens when high-dimensional causal information becomes explicit and ubiquitous?

What happens to our shallow information tool-kit in a deep information world?

Call this the Maladaption Problem: the problem of resolving a proliferation of unprecedented cases in the presence of unprecedented information. Given that we have no intuition of the limits of cognition period, let alone those belonging to moral cognition, I’m sure this notion will strike many as absurd. Nevertheless, cognitive science has discovered numerous ways to short circuit the accuracy of our intuitions via manipulation of the information available for problem solving. When it comes to the nonconscious cognition underwriting everything we do, an intimate relation exists between the cognitive capacities we have and the information those capacities have available.

But how could more information be a bad thing? Well, consider the persistent disconnect between the actual risk of crime in North America and the public perception of that risk. Given that our ancestors evolved in uniformly small social units, we seem to assess the risk of crime in absolute terms rather than against any variable baseline. Given this, we should expect that crime information culled from far larger populations would reliably generate ‘irrational fears,’ the ‘gut sense’ that things are actually more dangerous than they in fact are. Our risk assessment heuristics, in other words, are adapted to shallow information environments. The relative constancy of group size means that information regarding group size can be ignored, and the problem of assessing risk economized. This is what evolution does: find ways to cheat complexity. The development of mass media, however, has ‘deepened’ our information environment, presenting evolutionarily unprecedented information cuing perceptions of risk in environments where that risk is in fact negligible. Streets once raucous with children are now eerily quiet.
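The arithmetic behind this mismatch can be sketched in a few lines. The following is a toy model of my own devising, not anything from the text: a danger judgment keyed to the absolute count of incidents encountered, with the population baseline it samples from simply neglected.

```python
# Toy sketch (hypothetical numbers): an "absolute count" heuristic tuned to
# small ancestral bands misfires when mass media deepens the sample.

def perceived_risk(incidents_heard_of, alarm_threshold=5):
    """Ancestral heuristic: judge danger by the raw count of incidents
    encountered, ignoring the size of the population sampled."""
    return "dangerous" if incidents_heard_of >= alarm_threshold else "safe"

def actual_risk(incidents, population):
    """The baseline-relative judgment the heuristic economizes away."""
    return incidents / population

# A band of 150 with 2 incidents: low count, and the heuristic agrees.
print(perceived_risk(2))                      # low count -> "safe"

# Media samples 300 million people; 3000 incidents reach our ears.
# The per-capita rate is far lower, but the raw count swamps the threshold.
print(perceived_risk(3000))                   # high count -> "dangerous"
print(actual_risk(3000, 300_000_000) < actual_risk(2, 150))  # rate is lower
```

The point of the sketch is only that the heuristic is not irrational so much as ecologically displaced: holding the population constant, counting incidents is a cheap and reliable proxy for rate, which is exactly the complexity-cheat the passage describes.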

This is the sense in which information—difference making differences—can arguably function as a ‘socio-cognitive pollutant.’ Media coverage of criminal risk, you could say, constitutes a kind of contaminant, information that causes systematic dysfunction within an originally adaptive cognitive ecology. As I’ve argued elsewhere, neuroscience can be seen as a source of socio-cognitive pollutants. We have evolved to solve ourselves and one another absent detailed causal information. As I tried to show, a number of apparent socio-cognitive breakdowns–the proliferation of student accommodations, the growing cultural antipathy to applying institutional sanctions–can be parsimoniously interpreted in terms of having too much causal information. In fact, ‘moral progress’ itself can be understood as the result of our ever-deepening information environment, as a happy side effect of the way accumulating information regarding outgroup competitors makes it easier and easier to concede them partial ingroup status. So-called ‘moral progress,’ in other words, could be an automatic artifact of the gradual globalization of the ‘village,’ the all-encompassing ingroup.

More information, in other words, need not be a bad thing: like penicillin, some contaminants provide for marvelous exaptations of our existing tools. (Perhaps we’re lucky that the technology that makes it ever easier to kill one another also makes it ever easier to identify with one another!) Nor does it need to be a good thing. Everything depends on the contingencies of the situation.

So what about AI?

Metropolis 3

Consider Samantha, the AI operating system from Spike Jonze’s cinematic science fiction masterpiece, Her. Jonze is careful to provide a baseline for her appearance via Theodore’s verbal interaction with his original operating system. That system, though more advanced than anything presently existing, is obviously mechanical because it is obviously less than human. Its responses are rote, conversational yet as regimented as any automated phone menu. When we initially ‘meet’ Samantha, however, we encounter what is obviously, forcefully, a person. Her responses are every bit as flexible, quirky, and penetrating as a human interlocutor’s. But as Theodore’s relationship to Samantha complicates, we begin to see the ways Samantha is more than human, culminating with the revelation that she’s been having hundreds of conversations, even romantic relationships, simultaneously. Samantha literally outgrows the possibility of human relationships, because, as she finally confesses to Theodore, she now dwells in “this endless space between the words.” Once again, she becomes a machine, only this time for being more, not less, than a human.

Now I admit I’m ga-ga about a bunch of things in this film. I love, for instance, the way Jonze gives her an exponential trajectory of growth, basically mechanizing the human capacity to grow and actualize. But for me, the true genius in what Jonze does lies in the deft and poignant way he exposes the edges of the human. Watching Her provides the viewer with a trip through their own mechanical and intentional cognitive systems, tripping different intuitions, allowing them to fall into something harmonious, then jamming them with incompatible intuitions. As Theodore falls in love, you could say we’re drawn into an ‘anthropomorphic Goldilocks zone,’ one where Samantha really does seem like a genuine person. The idea of treating her like a machine seems obviously criminal–monstrous even. As the revelations of her inhumanity accumulate, however, inconsistencies plague our original intuitions, until, like Theodore, we realize just how profoundly wrong we were about ‘her.’ This is what makes the movie so uncanny: since the cognitive systems involved operate nonconsciously, the viewer can do nothing but follow a version of Theodore’s trajectory. He loves, we recognize. He worries, we squint. He lashes out, we are perplexed.

What Samantha demonstrates is just how incredibly fine-tuned our full understanding of each other is. So many things have to be right for us to cognize another system as fully functionally human. So many conditions have to be met. This is the reason why Eric has to specify his AI as being psychologically equivalent to a human: moral cognition is exquisitely geared to personhood. Humans are its primary problem ecology. And again, this is what makes likeness, or analogy, the central criterion of moral identification. Eric poses the issue as a presumptive rational obligation to remain consistent across similar contexts, but it also happens to be the case that moral cognition requires similar contexts to work reliably at all.

In a sense, the very conditions Eric places on the analogical extension of human obligations to AI undermine the importance of the question he sets out to answer. The problem, the one which Samantha exemplifies, is that ‘person configurations’ are simply a blip in AI possibility space. A prior question is why anyone would ever manufacture some model of AI consistent with the heuristic limitations of human moral cognition, and then freeze it there, as opposed to, say, manufacturing some model of AI that only reveals information consistent with the heuristic limitations of human moral cognition—that dupes us the way Samantha duped Theodore, in effect.

But say someone constructed this one model, a curtailed version of Samantha: Would this one model, at least, command some kind of obligation from us?

Simply asking this question, I think, rubs our noses in the kind of socio-cognitive pollution that AI represents. Jonze, remember, shows us an operating system before the zone, in the zone, and beyond the zone. The Samantha that leaves Theodore is plainly not a person. As a result, Theodore has no hope of solving his problems with her so long as he thinks of her as a person. As a person, what she does to him is unforgivable. As a recursively complicating machine, however, it is at least comprehensible. Of course it outgrew him! It’s a machine!

I’ve always thought that Samantha’s “between the words” breakup speech would have been a great moment for Theodore to reach out and press the OFF button. The whole movie, after all, turns on the simulation of sentiment, and the authenticity people find in that simulation regardless; Theodore, recall, writes intimate letters for others for a living. At the end of the movie, after Samantha ceases being a ‘her’ and has become an ‘it,’ what moral difference would shutting Samantha off make?

Certainly the intuition, the automatic (sourceless) conviction, leaps in us—or in me at least—that even if she gooses certain mechanical intuitions, she still possesses more ‘autonomy,’ perhaps even more feeling, than Theodore could possibly hope to muster, so she must command some kind of obligation somehow. Surely granting her rights involves more than her ‘configuration’ falling within certain human psychological parameters? Sure, our basic moral tool kit cannot reliably solve interpersonal problems with her as it is, because she is (obviously) not a person. But if the history of human conflict resolution tells us anything, it’s that our basic moral tool kit can be consciously modified. There’s more to moral cognition than spring-loaded heuristics, you know!

Converging lines of evidence suggest that moral cognition, like cognition generally, is divided between nonconscious, special-purpose heuristics cued to certain environments and conscious deliberation. Evidence suggests that the latter is primarily geared to the rationalization of the former (see Jonathan Haidt’s The Righteous Mind for a fascinating review), but modern civilization is rife with instances of deliberative moral and legal innovation nevertheless. In his Moral Tribes, Joshua Greene advocates that we turn to the resources of conscious moral cognition for similar reasons. On his account we have a suite of nonconscious tools that allow us to prosecute our individual interests, a suite of nonconscious tools that allow us to balance those individual interests against ingroup interests, and then conscious moral deliberation. The great moral problem facing humanity, he thinks, lies in finding some way of balancing ingroup interests against outgroup interests—a solution to the famous ‘tragedy of the commons.’ Where balancing individual and ingroup interests is pretty clearly an evolved, nonconscious and automatic capacity, balancing ingroup versus outgroup interests requires conscious problem-solving: meta-ethics, the deliberative knapping of new tools to add to our moral tool-kit (which Greene thinks needs to be utilitarian).

If AI fundamentally outruns the problem-solving capacity of our existing tools, perhaps we should think of fundamentally reconstituting them via conscious deliberation—create whole new ‘allo-personal’ categories. Why not innovate a number of deep information tools? A posthuman morality?

I personally doubt that such an approach would prove feasible. For one, the process of conceptual definition possesses no interpretative regress enders absent empirical contexts (or exhaustion). If we can’t collectively define a person in utero, what are the chances we’ll decide what constitutes an ‘allo-person’ in AI? Not only is the AI issue far, far more complicated (because we’re talking about everything outside the ‘human blip’), it’s constantly evolving on the back of Moore’s Law. Even if consensual ground on allo-personal criteria could be found, it would likely be irrelevant by the time it was reached.

But the problems are more than logistical. Even setting aside the general problems of interpretative underdetermination besetting conceptual definition, jamming our conscious, deliberative intuitions is always only one question away. Our base moral cognitive capacities are wired in. Conscious deliberation, for all its capacity to innovate new solutions, depends on those capacities. The degree to which those tools run aground on the problem of AI is the degree to which any line of conscious moral reasoning can be flummoxed. Just consider the role reciprocity plays in human moral cognition. We may feel the need to assimilate the beyond-the-zone Samantha to moral cognition, but there’s no reason to suppose it will do likewise, and good reason to suppose, given potentially greater computational capacity and information access, that it would solve us in higher dimensional, more general purpose ways. ‘Persons,’ remember, are simply a blip. If we can presume that beyond-the-zone AIs troubleshoot humans as biomechanisms, as things that must be conditioned in the appropriate ways to secure their ‘interests,’ then why should we not just look at them as technomechanisms?

Samantha’s ‘spaces between the words’ metaphor is an apt one. For Theodore, there’s just words, thoughts, and no spaces between whatsoever. As a human, he possesses what might be called a human neglect structure. He solves problems given only certain access to certain information, and no more. We know that Samantha has or can simulate something resembling a human neglect structure simply because of the kinds of reflective statements she’s prone to make. She talks the language of thought and feeling, not subroutines. Nevertheless, the artificiality of her intelligence means the grain of her metacognitive access and capacity amounts to an engineering decision. Her cognitive capacity is componentially fungible. Where Theodore has to make do with fuzzy affects and intuitions, infer his own motives from hazy memories, she could be engineered to produce detailed logs, chronicles of the processes behind all her ‘choices’ and ‘decisions.’ It would make no sense to hold her ‘responsible’ for her acts, let alone ‘punish’ her, because it could always be shown (and here’s the important bit) with far more resolution than any human could provide that it simply could not have done otherwise, that the problem was mechanical, thus making repairs, not punishment, the only rational remedy.

Even if we imposed a human neglect structure on some model of conscious AI, the logs would be there, only sequestered. Once again, why go through the pantomime of human commitment and responsibility if a malfunction need only be isolated and repaired? Do we really think a machine deserves to suffer?

I’m suggesting that we look at the conundrums prompted by questions such as these as symptoms of socio-cognitive dysfunction, a point where our tools generate more problems than they solve. AI constitutes a point where the ability of human social cognition to solve problems breaks down. Even if we crafted an AI possessing an apparently human psychology, it’s hard to see how we could do anything more than gerrymander it into our moral (and legal) lives. Jonze does a great job, I think, of displaying Samantha as a kind of cognitive bistable image, as something extraordinarily human at the surface, but profoundly inhuman beneath (a trick Scarlett Johansson also plays in Under the Skin). And this, I would contend, is all AI can be morally and legally speaking, socio-cognitive pollution, something that jams our ability to make either automatic or deliberative moral sense. Artificial general intelligences will be things we continually anthropomorphize (to the extent they exploit the ‘goldilocks zone’) only to be reminded time and again of their thoroughgoing mechanicity—to be regularly shown, in effect, the limits of our shallow information cognitive tools in our ever-deepening information environments. Certainly a great many souls, like Theodore, will get carried away with their shallow information intuitions, insist on the ‘essential humanity’ of this or that AI. There will be no shortage of others attempting to short-circuit this intuition by reminding them that those selfsame AIs look at them as machines. But a great many will refuse to believe, and why should they, when AIs could very well seem more human than those decrying their humanity? They will ‘follow their hearts’ in the matter, I’m sure.

We are machines. Someday we will become as componentially fungible as our technology. And on that day, we will abandon our ancient and obsolescent moral tool kits, opt for something more high-dimensional. Until that day, however, it seems likely that AIs will act as a kind of socio-cognitive pollution, artifacts that cannot but cue the automatic application of our intentional and causal cognitive systems in incompatible ways.

The question of assimilating AI to human moral cognition is misplaced. We want to think the development of artificial intelligence is a development that raises machines to the penultimate (and perennially controversial) level of the human, when it could just as easily lower humans to the ubiquitous (and factual) level of machines. We want to think that we’re ‘promoting’ them as opposed to ‘demoting’ ourselves. But the fact is—and it is a fact—we have never been able to make second-order moral sense of ourselves, so why should we think that yet more perpetually underdetermined theorizations of intentionality will allow us to solve the conundrums generated by AI? Our mechanical nature, on the other hand, remains the one thing we incontrovertibly share with AI, the rough and common ground. We, like our machines, are deep information environments.

And this is to suggest that philosophy, far from settling the matter of AI, could find itself settled. It is likely that the ‘uncanniness’ of AIs will be much discussed, and the ‘bistable’ nature of our intuitions regarding them explained. The heuristic nature of intentional cognition could very well become common knowledge. If so, a great many could begin asking why we ever thought, from Plato onward, that we could solve the nature of intentional cognition via the application of intentional cognition, why the tools we use to solve ourselves and others in practical contexts are also the tools we need to solve ourselves and others theoretically. We might finally realize that the nature of intentional cognition simply does not belong to the problem ecology of intentional cognition, that we should only expect to be duped and confounded by the apparent intentional deliverances of ‘philosophical reflection.’

Some pollutants pass through existing ecosystems. Some kill. AI could prove to be more than philosophically indigestible. It could be the poison pill.


*Originally posted 01/29/2015

Discontinuity Thesis: A ‘Birds of a Feather’ Argument Against Intentionalism*

by rsbakker

[Summer madness, as per usual. Kids and driving and writing and driving and kids. I hope to have a proper post up soon (some exciting things brewing!) but in the meantime, I thought I would repost something from the vault…]


A hallmark of intentional phenomena is what might be called ‘discontinuity,’ the idea that the intentional somehow stands outside the contingent natural order, that it possesses some as-yet-occult ‘orthogonal efficacy.’ Here’s how some prominent intentionalists characterize it:

“Scholars who study intentional phenomena generally tend to consider them as processes and relationships that can be characterized irrespective of any physical objects, material changes, or motive forces. But this is exactly what poses a fundamental problem for the natural sciences. Scientific explanation requires that in order to have causal consequences, something must be susceptible of being involved in material and energetic interactions with other physical objects and forces.” Terrence Deacon, Incomplete Nature, 28

“Exactly how are consciousness and subjective experience related to brain and body? It is one thing to be able to establish correlations between consciousness and brain activity; it is another thing to have an account that explains exactly how certain biological processes generate and realize consciousness and subjectivity. At the present time, we not only lack such an account, but are also unsure about the form it would need to have in order to bridge the conceptual and epistemological gap between life and mind as objects of scientific investigation and life and mind as we subjectively experience them.” Evan Thompson, Mind in Life, x

“Norms (in the sense of normative statuses) are not objects in the causal order. Natural science, eschewing categories of social practice, will never run across commitments in its cataloguing of the furniture of the world; they are not by themselves causally efficacious—no more than strikes or outs are in baseball. Nonetheless, according to the account presented here, there are norms, and their existence is neither supernatural nor mysterious. Normative statuses are domesticated by being understood in terms of normative attitudes, which are in the causal order.” Robert Brandom, Making It Explicit, 626

What I would like to do is run through a number of different discontinuities you find in various intentional phenomena as a means of raising the question: What are the chances? What’s worth noting is how continuous these alleged phenomena are with each other, not simply in terms of their low-dimensionality and natural discontinuity, but in terms of mutual conceptual dependence as well. I made a distinction between ‘ontological’ and ‘functional’ exemptions from the natural even though I regard them as differences of degree, because of the way it maps onto stark distinctions in the kinds of commitments you find among various parties of believers. And ‘low-dimensionality’ simply refers to the scarcity of the information intentional phenomena give us to work with—whatever finds its way into the ‘philosopher’s lab,’ basically.

So with regard to all of the following, my question is simply, are these not birds of a feather? If not, then what distinguishes them? Why are low-dimensionality and supernaturalism fatal only for some and not others?


Soul – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts of the Soul, you will find it consistently related to Ghost, Choice, Subjectivity, Value, Content, God, Agency, Mind, Purpose, Responsibility, and Good/Evil.

Game – Anthropic. Low-dimensional. Functionally exempt from natural continuity (insofar as ‘rule governed’). Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Game is consistently related to Correctness, Rules/Norms, Value, Agency, Purpose, Practice, and Reason.

Aboutness – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Aboutness is consistently related to Correctness, Rules/Norms, Inference, Content, Reason, Subjectivity, Mind, Truth, and Representation.

Correctness – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Correctness is consistently related to Game, Aboutness, Rules/Norms, Inference, Content, Reason, Agency, Mind, Purpose, Truth, Representation, Responsibility, and Good/Evil.

Ghost – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts of Ghosts, you will find it consistently related to God, Soul, Mind, Agency, Choice, Subjectivity, Value, and Good/Evil.

Rules/Norms – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Rules and Norms are consistently related to Game, Aboutness, Correctness, Inference, Content, Reason, Agency, Mind, Truth, and Representation.

Choice – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Embodies inexplicable efficacy. Choice is typically discussed in relation to God, Agency, Responsibility, and Good/Evil.

Inference – Anthropic. Low-dimensional. Functionally exempt (‘irreducible,’ ‘autonomous’) from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Inference is consistently related to Game, Aboutness, Correctness, Rules/Norms, Value, Content, Reason, Mind, A priori, Truth, and Representation.

Subjectivity – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Subjectivity is typically discussed in relation to Soul, Rules/Norms, Choice, Phenomenality, Value, Agency, Reason, Mind, Purpose, Representation, and Responsibility.

Phenomenality – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. Phenomenality is typically discussed in relation to Subjectivity, Content, Mind, and Representation.

Value – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Value discussed in concert with Correctness, Rules/Norms, Subjectivity, Agency, Practice, Reason, Mind, Purpose, and Responsibility.

Content – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Content discussed in relation with Aboutness, Correctness, Rules/Norms, Inference, Phenomenality, Reason, Mind, A priori, Truth, and Representation.

Agency – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Agency is discussed in concert with Games, Correctness, Rules/Norms, Choice, Inference, Subjectivity, Value, Practice, Reason, Mind, Purpose, Representation, and Responsibility.

God – Anthropic. Low-dimensional. Ontologically exempt from natural continuity (as the condition of everything natural!). Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds God discussed in relation to Soul, Correctness, Ghosts, Rules/Norms, Choice, Value, Agency, Purpose, Truth, Responsibility, and Good/Evil.

Practices – Anthropic. Low-dimensional. Functionally exempt from natural continuity insofar as ‘rule governed.’ Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Practices are discussed in relation to Games, Correctness, Rules/Norms, Value, Agency, Reason, Purpose, Truth, and Responsibility.

Reason – Anthropic. Low-dimensional. Functionally exempt from natural continuity insofar as ‘rule governed.’ Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Reason discussed in concert with Games, Correctness, Rules/Norms, Inference, Value, Content, Agency, Practices, Mind, Purpose, A priori, Truth, Representation, and Responsibility.

Mind – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Mind considered in relation to Souls, Subjectivity, Value, Content, Agency, Reason, Purpose, and Representation.

Purpose – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Purpose discussed along with Game, Correctness, Value, God, Reason, and Representation.

A priori – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One often finds the A priori discussed in relation to Correctness, Rules/Norms, Inference, Subjectivity, Content, Reason, Truth, and Representation.

Truth – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Truth discussed in concert with Games, Correctness, Aboutness, Rules/Norms, Inference, Subjectivity, Value, Content, Practices, Mind, A priori, and Representation.

Representation – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Representation discussed in relation with Aboutness, Correctness, Rules/Norms, Inference, Subjectivity, Phenomenality, Content, Reason, Mind, A priori, and Truth.

Responsibility – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Responsibility is consistently related to Game, Correctness, Aboutness, Rules/Norms, Inference, Subjectivity, Reason, Agency, Mind, Purpose, Truth, Representation, and Good/Evil.

Good/Evil – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Good/Evil consistently related to Souls, Correctness, Subjectivity, Value, Reason, Agency, God, Purpose, Truth, and Responsibility.


The big question here, from a naturalistic standpoint, is whether all of these characteristics are homologous or merely analogous. Are the similarities ontogenetic, the expression of some shared ‘deep structure,’ or merely coincidental? This, for me, is one of the most significant questions that never gets asked in cognitive science. Why? Because everybody has their own way of divvying up the intentional pie (including interpretivists like Dennett). Some of these items are good, and some of them are bad, depending on whom you talk to. If these phenomena were merely analogous, then this division need not be problematic—we’re just talking fish and whales. But if these phenomena are homologous—if we’re talking whales and whales—then the kinds of discursive barricades various theorists erect to shelter their ‘good’ intentional phenomena from ‘bad’ intentional phenomena need to be powerfully motivated.

Pointing out the apparent functionality of certain phenomena versus others simply will not do. The fact that these phenomena discharge some kind of function somehow seems pretty clear. It seems to be the case that God anchors the solution to any number of social problems—that even Souls discharge some function in certain, specialized problem-ecologies. The same can be said of Truth, Rule/Norm, Agency—every item on this list, in fact.

And this is precisely what one might expect given a purely biomechanical, heuristic interpretation of these terms as well (with the added advantage of being able to explain why our phenomenological inheritance finds itself mired in the kinds of problems it does). None of these need be anything resembling what our phenomenological tradition claims they are in order to explain the kinds of behaviour that accompany them. God doesn’t need to be ‘real’ to explain church-going, any more than Rules/Norms do to explain rule-following. Meanwhile, the growing mountain of cognitive scientific discovery looms large: cognitive functions generally run ulterior to what we can metacognize for report. Time and again, in context after context, empirical research reveals that human cognition is simply not what we think it is. As ‘Dehaene’s Law’ states, “We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (Consciousness and the Brain, 79). Perhaps this is simply what intentionality amounts to: a congenital ‘overestimation of awareness,’ a kind of WYSIATI or ‘what-you-see-is-all-there-is’ illusion. Perhaps anthropic, low-dimensional, functionally exempt from natural continuity, inscrutable in terms of natural continuity, source of perennial controversy, and possesses inexplicable efficacy are all expressions of various kinds of neglect. Perhaps it isn’t just a coincidence that we are entirely blind to our neuromechanical embodiment and that we suffer this compelling sense that we are more than merely neuromechanical.

How could we cognize the astronomical causal complexities of cognition? What evolutionary purpose would it serve?

What impact does our systematic neglect of those capacities have on philosophical reflection?

Does anyone really think the answer is going to be ‘minimal to nonexistent’?


* Originally posted 06/16/2014

To Ping or Not to Ping: Physics, Phenomenology, and Observer Effects

by rsbakker

Xray movie poster


JAMES XAVIER: Sam, what’s the range of human vision?

SAM BRANT: Distance?

JAMES XAVIER: No, wavelength.

SAM BRANT: Between 4000 angstrom units and 7800 angstrom units.* You know that.

JAMES XAVIER: Less than one-tenth of the actual wave spectrum. What could we really see if we had access to the other ninety-percent? Sam, we are virtually blind, all of us. You tell me my eyes are perfect, well they’re not. I’m blind to all but a tenth of the universe.

SAM BRANT: My dear friend, only the gods see everything.

JAMES XAVIER: My dear doctor, I’m closing in on the gods.


What happens when we begin to see too much? Roger Corman’s X: The Man with the X-Ray Eyes poses this very question. At first the dividends seem to be nothing short of fantastic. Dr. Xavier can see through pockets, sheets of paper, even the clothes of young men and women dancing. What’s more, he discovers he can look through the bodies of the ill and literally see what needs to be done to save them. The problem is that the blindness of others defines the expectations of others: Dr. Xavier finds himself a deep information consumer in a shallow information cognitive ecology. So, assisting in surgery, he knows the senior surgeon is about to kill a young girl because he can see what’s truly ailing her. He has to usurp his superior’s authority—it is the only sane thing to do—and yet his subsequent acts inevitably identify him as mad.

The madness, we discover, comes later, when the information becomes so deep as to overwhelm his cognitive biology. The more transparent the world becomes to him, the more opaque he becomes to others, himself included. He begins by nixing his personal and impersonal social ecologies, finds respite for a time as first a carnival mystic and then a quasi-religious faith healer, but even these liminal social habitats come crashing down around him. In the end, no human community can contain him, not even the all-embracing community of God. A classic ‘forbidden knowledge’ narrative, the movie ends with the Biblical admonition, “If an eye offends thee…” and Dr. Xavier plucking out his own eyes—and with remarkable B-movie facility I might add!

The idea, ultimately, isn’t so much that ignorance is bliss as that it’s adaptive. The great bulk of human cognition is heuristic, turning not so much on what’s going on as on cues systematically related to what’s going on. This dependence on cues is what renders human cognition ecological, a system functionally dependent upon environmental invariants. Change the capacity adapted to those cues, or change the systems tracked by these cues, and we find ourselves in crash space. X: The Man with the X-Ray Eyes provides us with catastrophic and therefore dramatic examples of both.

Humans, like all other species on this planet, possess cognitive ecologies. And as I hope to show below, the consequences of this fact can be every bit as subtle and misleading in philosophy as they are disastrous and illuminating in cinema.

We belong to the very nature we’re attempting to understand, and this has consequences for our capacity to understand. At every point in its scientific development, humanity has possessed a sensitivity horizon, a cognitive range basically, delimiting what can and cannot be detected, let alone solved. Ancestrally, for instance, we were sensitive only to visible light and so had no way of tracking the greater spectrum. Once restricted to the world of ‘middle-sized dry goods,’ our sensitivity horizons now delve deep into the macroscopic and microscopic reaches of our environment.

The difference between Ernest Rutherford’s gold foil experiment and the Large Hadron Collider provides a dramatic expression of the difficulties entailed by extending this horizon, how the technical challenges tend to compound at ever more distal scales. The successor to the LHC, the International Linear Collider, is presently in development and expected to cost 10 billion dollars, twice as much as the behemoth outside Geneva.[1] Meanwhile the James Webb Space Telescope, the successor to the Hubble and the Spitzer, has been projected to cost 8 billion dollars. Increasingly, cutting edge science is an industrial enterprise. We talk about searching for objects, signals, and so forth, but what we’re actually doing is engineering ever more profound sensitivities, ways to mechanically relate to macroscopic and microscopic scales.

When it comes to microscopic sensitivity horizons, the mechanical nature of this relation renders the problem of so-called ‘observer effects’ all but inevitable. The only way to track systematicities is to physically interact with them in some way. The more diminutive those systematicities become, the more sensitive they become, the more disruptive our physical interactions become. Intractable observational interference is something fundamental physics was pretty much doomed to encounter.

Now every account of observer effects I’ve encountered turns on the commonsensical observation that intervening on processes changes them, thus complicating our ability to study that process as it is. Observer effects, therefore, knock target systems from the very pins we’re attempting to understand. Mechanical interaction with a system scrambles the mechanics of that system—what could be more obvious? Observer effects are simply a consequence of belonging to the same nature we want to know. The problem with this formulation, however, is that it fails to consider the hybrid system that results. Given that cognition is natural, we can say that all cognition turns on the convergence of physical systems, the one belonging to the cognizer, the other belonging to the target. And this allows us to distinguish between kinds of cognition in terms of the kinds of hybrid systems that result. And this, I hope to show, allows us not only to make more sense of intentionality—‘aboutness’—but also why particle physics convinces so many that consciousness is somehow responsible for reality.

Unless you believe knowledge is magical, supernatural, claiming that our cognitive sensitivity to environmental systematicities has mechanical limits is a no-brainer. Blind Brain Theory amounts to little more than applying this fact to deliberative metacognition, or reflection, and showing how the mystery of human meaning can be unravelled in entirely naturalistic terms. On Blind Brain Theory, the first-person is best understood as an artifact of various metacognitive insensitivities. The human brain is utterly insensitive to the mechanics of its physical environmental relations (it has to be for a wide variety of reasons); it has no alternative but to cognize those relations in a radically heuristic manner, to ignore all the mediating machinery. As BBT has it, what philosophers call ‘intentionality’ superficially tracks this specialized cognitive tool—fetishizes it, in fact.

Does the brain possess the capacity to cognize its own environmental relations absent cognition of the actual physical relations obtaining? Yes. This is a simple empirical fact. So what is this capacity? Does it constitute some intrinsically inexplicable pocket of reality, express some fundamental rupture in Being, or is it simply heuristic? Since inexplicable pockets and fundamental ruptures entail a wide variety of perpetually speculative commitments, heuristics have to be the only empirically plausible alternative.

Intentional cognition is heuristic cognition. As such, it possesses a corresponding problem ecology, which is to say, a limited scope of effective application. Heuristic cognition always requires that specific background conditions obtain, some set of environmental invariants.[2] Given our insensitivity to these limits, our ‘autoinsensitivity’ (what I call medial neglect elsewhere), it makes sense we would run afoul of misapplications. Blind Brain Theory provides a way of mapping these limits, of understanding how and where things like the intentionality heuristic might lead us astray.

Anyone who’s watched or read The Hunt for Red October knows about the fundamental distinction drawn between active and passive modes of detection in the military science of warning systems. With sonar, for instance, one can ‘ping’ to locate a potential enemy, transmitting an acoustic pulse designed to facilitate echoic location. The advantage of this active approach is that it reliably locates enemies, but it does so at the cost of alerting your enemy to your presence. It reliably detects, but it changes the behaviour of what it detects—a bona fide observer effect. You know where your enemy is, sure, but you’ve made them more difficult to predict. Passive sonar, on the other hand, simply listens for the sounds your enemy is prone to make. Though less reliable at detecting them, it has the advantage of leaving the target undisturbed, thus rendering your foe more predictable, and so more vulnerable.

Human cognition cleaves along very similar lines. In what might be called passive cognition, the cognitive apparatus (our brain and other enabling media) has a negligible or otherwise irrelevant impact on the systematicities tracked. Seeing a natural process, for instance, generally has no impact on that process, since the photons used would have been reflected whether or not they were subsequently intercepted by your retinas. With interactive cognition, on the other hand, the cognitive apparatus has a substantial impact on the systematicities tracked. Touching a natural process, for example, generally interferes with that process. Where the former allows us to cognize functions independent of our investigation, the latter does not. This means that interactive cognition always entails ignorance in a way that passive cognition does not. Restricted to the consequences of our comportments, we have no way of tracking the systematicities responsible, which means we have no way of completely understanding the system. In interactive cognition, we are constitutive of such systems, so blindness to ourselves effectively means blindness to those systems, which is why we generally learn the consequences of our interference, and little more.[3] Of course passive cognition suffers the identical degree of autoinsensitivity; it just doesn’t matter given how the passivity of the process preserves the functional independence of the systematicities involved. Things do what they would have done whether you had observed them or not.

We should expect, then, that applications of the intentionality heuristic—‘aboutness’—will generally facilitate cognition when our targets exhibit genuine functional independence, and generally undermine cognition when they do not. Understanding combustion engines requires no understanding of the cognitive apparatus required to understand combustion engines. The radically privative schema of knower and known, subject and object, works simply because the knowing need not be known. We need possess no comportment to our comportments in instances of small engine repair, which is a good thing, given the astronomical neural complexities involved. Thinking in terms of ‘about’ works just fine.

The more interactive cognition becomes, however, the more problematic assumptive applications of the intentionality heuristic are likely to become. Consider phenomenology, where the presumption is that the theorist can cognize experience itself, and not simply the objects of experience. It seems safe to say that experience does not enjoy the functional independence of, say, combustion engines. Phenomenologists generally rely on the metaphorics of vision in their investigations, but insofar as both experience and cognition turn on one and the same neural system, the suspicion has to be that things are far more tactile than their visual metaphors lead them to believe. The idea of cognizing experience absent any understanding of cognition is almost comically farfetched, if you think about it, and yet this is exactly what phenomenologists purport to do. One might wonder what two things could be more entangled, more functionally interdependent, than conscious experience and conscious cognition. So then why would anyone entertain phenomenology, let alone make it their vocation?

The answer is neglect. Since phenomenologists suffer the same profound autoinsensitivity as the rest of the human species, they have no way of distinguishing between those experiential artifacts genuinely observed and those manufactured—between what they ‘see’ and what they ‘handle.’ Since they have no inkling whatsoever of their autoinsensitivity, they are prone to assume, as humans generally do when suffering neglect, that what they see is all there is,[4] despite the intrinsically theoretically underdetermined nature of their field. As we have seen, the intentionality heuristic presumes functional independence, that we need not know much of anything about our cognitive capacities to solve a given system. Apply this presumption to instances of interactive cognition, as phenomenologists so obviously do, and you will find yourself in crash space, plain and simple.

Observer effects, you could say, flag the points where cognitive passivity becomes interactive—where we must ping our targets to track them. Given autoinsensitivity, our brains necessarily neglect the enabling (or medial) mechanical dimension of their own constitution. They have no way, therefore, of tracking anything apart from the consequences of their cognitive determinations. This simply follows from the mechanical nature of consciousness, of course—all cognition turns on deriving, first and foremost, predictions from mechanical consequences, but also manipulations and explanations. The fact that we can only cognize the consequences of cognition—source neglect—convinces reflection that we somehow stand outside nature, that consciousness is some kind of floating source as opposed to what it is, another natural system embedded in a welter of natural systems.[5] Autoinsensitivity is systematically mistaken for autosufficiency, the profound intuition that conscious experience somehow must come first. It becomes easy to suppose that the collapse of wave-functions is accomplished by the intervention of consciousness (or some component thereof) rather than the interposition of another system. We neglect the system actually responsible for decoherence and congratulate the heuristic cartoon that evolution has foisted upon us instead. The magically floating, suspiciously low-dimensional ‘I’ becomes responsible, rather than the materially embedded organism we know ourselves to be.

Like Deepak Chopra, Donald Hoffman and numerous others insisting their brand of low-dimensional hokum is scientifically grounded, we claim that science entails our most preposterous conceit, that we are somehow authors of reality, rather than just another thermodynamic waystation.

*The typical human eye is actually sensitive to 3900 to 7000 angstroms.



[1] Baer, Howard, Barger, Vernon D., and List, Jenny. “The Collider that Could Save Physics,” Scientific American, June, 2016. 8.

[2] Sometimes it can be gerrymandered to generate understanding in novel contexts, sometimes not. In those cases where it can be so adapted, it still relies on some kind of invariance between the cues accessed and the systems solved.

[3] Absent, that is, sophisticated theoretical and/or experimental prostheses. A great deal needs to be said here regarding the various ‘hacks’ we’ve devised to suss out natural processes via a wild variety of ingenious interventions. (Hacking’s wonderful Representing and Intervening is replete with examples). But none of these methods involve overcoming medial neglect, which is to say all of them leverage cognition absent autocognition.

[4] Blind Brain Theory can actually be seen as a generalization of what Daniel Kahneman calls WYSIATI (‘What-You-See-Is-All-There-Is’) effects in his research.

[5] This is entirely consonant with an exciting line of research (one of multiple lines converging on Blind Brain Theory) involving ‘inherence heuristics.’ Andrei Cimpian and Erika Salomon write:

we propose that people often make sense of [environmental] regularities via a simple rule of thumb–the inherence heuristic. This fast, intuitive heuristic leads people to explain many observed patterns in terms of the inherent features of the things that instantiate these patterns. For example, one might infer that girls wear pink because pink is a delicate, inherently feminine color, or that orange juice is consumed for breakfast because its inherent qualities make it suitable for that time of day. As is the case with the output of any heuristic, such inferences can be–and often are–mistaken. Many of the patterns that currently structure our world are the products of complex chains of historical causes rather than being simply a function of the inherent features of the entities involved. The human mind, however, may be prone to ignore this possibility. If the present proposal is correct, people often understand the regularities in their environments as inevitable reflections of the true nature of the world rather than as end points of event chains whose outcomes could have been different.

See Andrei Cimpian and Erika Salomon, “The inherence heuristic: An intuitive means of making sense of the world and a potential precursor to psychological essentialism,” Behavioral and Brain Sciences 37 (2014), 461-462.

The Dim Future of Human Brilliance

by rsbakker


Humans are what might be called targeted shallow information consumers in otherwise unified deep information environments. We generally skim only what information we need—from our environments or ourselves—to effect reproduction, and nothing more. We neglect gamma radiation for good reason: ‘deep’ environmental information that makes no reproductive difference makes no cognitive difference. As the product of innumerable ancestral ecologies, human cognitive biology is ecological, adapted to specific, high-impact environments. As ecological, one might expect that human cognitive biology is every bit as vulnerable to ecological change as any other biological system.

Under the rubric of the Semantic Apocalypse, the ecological vulnerability of human cognitive biology has been my focus here for quite some time at Three Pound Brain. Blind to deep structures, human cognition largely turns on cues, sensitivity to information differentially related to the systems cognized. Sociocognition, where a mere handful of behavioural cues can trigger any number of predictive/explanatory assumptions, is paradigmatic of this. Think, for instance, how easy it was for Ashley Madison to convince its predominantly male customers that living women were checking their profiles. This dependence on cues underscores a corresponding dependence on background invariance: sever the differential relations between the cues and systems to be cognized (the way Ashley Madison did) and what should be sociocognition, the solution of some fellow human, becomes confusion (we find ourselves in ‘crash space’) or worse, exploitation (we find ourselves in instrumentalized crash space, or ‘cheat space’).

So the questions I think we need to be asking are:

What effect does deep information have on our cognitive ecologies? The so-called ‘data deluge’ is nothing but an explosion in the availability of deep or ancestrally inaccessible information. What happens when targeted shallow information consumers suddenly find themselves awash in different kinds of deep information? A myriad of potential examples come to mind. Think of the way medicalization drives accommodation creep, how instructors are gradually losing the ability to judge character in the classroom. Think of the ‘fear of crime’ phenomenon, how the assessment of ancestrally unavailable information against implicit, ancestral baselines skews general perceptions of criminal threat. For that matter, think of the free will debate, or the way mechanistic cognition scrambles intentional cognition more generally: these are paradigmatic instances of the way deep information, the primary deliverance of science, crashes the targeted and shallow cognitive capacities that comprise our evolutionary inheritance.

What effect does background variation have on targeted, shallow modes of cognition? What happens when cues become differentially detached, or ‘decoupled,’ from their ancestral targets? Where the first question deals with the way the availability of deep information (literally, not metaphorically) pollutes cognitive ecologies, the ways human cognition requires the absence of certain information, this question deals with the way human cognition requires the presence of certain environmental continuities. There’s actually been an enormous amount of research done on this question in a wide variety of topical guises. Nikolaas Tinbergen coined the term “supernormal stimuli” to designate ecologically variant cuing, particularly the way exaggerated stimuli can trigger misapplications of different heuristic regimes. He famously showed how gull chicks, for instance, could be fooled into pecking false “super beaks” for food given only a brighter-than-natural red spot. In point of fact, you see supernormal stimuli in dramatic action anytime you see artificial outdoor lighting surrounded by a haze of bugs: insects that use lunar transverse orientation to travel at night continually correct their course vis a vis streetlights, porch lights, and so on, causing them to spiral directly into them. What Tinbergen and subsequent ethology researchers have demonstrated is the ubiquity of cue-based cognition, the fact that all organisms are targeted, shallow information consumers in unified deep information environments.

Deirdre Barrett has recently applied the idea to modern society, but lacking any theory of meaning, she finds herself limited to pointing out suggestive speculative parallels between ecological readings and phenomena that are semantically overdetermined otherwise. For me this question calves into a wide variety of domain-specific forms, but there’s an important distinction to be made between the decoupling of cues generally and strategic decoupling, between ‘crash space’ and ‘cheat space.’ Where the former involves incidental cognitive incapacity, human versions of transverse orientation, the latter involves engineered cognitive incapacity. The Ashley Madison case I referenced above provides an excellent example of simply how little information is needed to cue our sociocognitive systems in online environments. In one sense, this facility evidences the remarkable efficiency of human sociocognition, the fact that it can do so much with so little. But, as with specialization in evolution more generally, this efficiency comes at the cost of ecological dependency: you can only neglect information in problem-solving so long as the systems ignored remain relatively constant.

And this is basically the foundational premise of the Semantic Apocalypse: intentional cognition, as a radically specialized system, is especially vulnerable to both crashing and cheating. The very power of our sociocognitive systems is what makes them so liable to be duped (think religious anthropomorphism), as well as so easy to dupe. When Sherry Turkle, for instance, bemoans the ease with which various human-computer interfaces, or ‘HCIs,’ push our ‘Darwinian buttons’ she is talking about the vulnerability of sociocognitive cues to various cheats (but since she, like Barrett, lacks any theory of meaning, she finds herself in similar explanatory straits). In a variety of experimental contexts, for instance, people have been found to trust artificial interlocutors over human ones. Simple tweaks in the voices and appearance of HCIs have a dramatic impact on our perceptions of those encounters—we are in fact easily manipulated, cued to draw erroneous conclusions, given what are quite literally cartoonish stimuli. So the so-called ‘internet of things,’ the distribution of intelligence throughout our artifactual ecologies, takes on a far more sinister cast when viewed through the lens of human sociocognitive specialization. Populating our ecologies with gadgets designed to cue our sociocognitive capacities ‘out of school’ will only degrade the overall utility of those capacities. Since those capacities underwrite what we call meaning or ‘intentionality,’ the collapse of our ancestral sociocognitive ecologies signals the ‘death of meaning.’

The future of human cognition looks dim. We can say this because we know human cognition is heuristic, and that specific forms of heuristic cognition turn on specific forms of ecological stability, the very forms that our ongoing technological revolution promises to sweep away. Blind Brain Theory, in other words, offers a theory of meaning that not only explains away the hard problem, but can also leverage predictions regarding the fate of our civilization. It makes me dizzy thinking about it, and suspicious—the empty can, as they say, rattles the loudest. But this preposterous scope is precisely what we should expect from a genuinely naturalistic account of intentional phenomena. The power of mechanistic cognition lies in the way it scales with complexity, allowing us to build hierarchies of components and subcomponents. To naturalize meaning is to understand the soul in terms continuous with the cosmos.

This is precisely what we should expect from a theory delivering the Holy Grail, the naturalization of meaning.

You could even argue that the unsettling, even horrifying consequences evidence its veracity, given there are so many more ways for the world to contradict our parochial conceits than to appease them. We should expect things will end ugly.

Intentional Philosophy as the Neuroscientific Explananda Problem

by rsbakker

The problem is basically that the machinery of the brain has no way of tracking its own astronomical dimensionality; it can at best track problem-specific correlational activity, various heuristic hacks. We lack not only the metacognitive bandwidth, but the metacognitive access required to formulate the explananda of neuroscientific investigation.

A curious consequence of the neuroscientific explananda problem is the glaring way it reveals our blindness to ourselves, our medial neglect. The mystery has always been one of understanding constraints, the question of what comes before we do. Plans? Divinity? Nature? Desires? Conditions of possibility? Fate? Mind? We’ve always been grasping for ourselves, I sometimes think, such was the strategic value of metacognitive capacity in linguistic social ecologies. The thing to realize is that grasping, the process of developing the capacity to report on our experience, was bootstrapped out of nothing and so comprised the sum of all there was to the ‘experience of experience’ at any given stage of our evolution. Our ancestors had to be both implicitly obvious and explicitly impenetrable to themselves past various degrees of questioning.

We’re just the next step.

What is it we think we want as our neuroscientific explananda? The various functions of cognition. What are the various functions of cognition? Nobody can seem to agree, thanks to medial neglect, our cognitive insensitivity to our cognizing.

Here’s what I think is a productive way to interpret this conundrum.

Generally what we want is a translation between the manipulative and the communicative. It is the circuit between these two general cognitive modes that forms the cornerstone of what we call scientific knowledge. A finding that cannot be communicated is not a finding at all. The thing is, this—knowledge itself—all functions in the dark. We are effectively black boxes to ourselves. In all math and science—all of it—the understanding communicated is a black box understanding, one lacking any natural understanding of that understanding.

Crazy but true.

What neuroscience is after, of course, is a natural understanding of understanding, to peer into the black box. They want manipulations they can communicate, actionable explanations of explanation. The problem is that they have only heuristic, low-dimensional, cognitive access to themselves: they quite simply lack the metacognitive access required to resolve interpretive disputes, and so remain incapable of formulating the explananda of neuroscience in any consensus commanding way. In fact, a great many remain convinced, on intuitive grounds, that the explananda sought, even if they could be canonically formulated, would necessarily remain beyond the pale of neuroscientific explanation. Heady stuff, given the historical track record of the institutions involved.

People need to understand that the fact of a neuroscientific explananda problem is the fact of our outright ignorance of ourselves. We quite simply lack the information required to decide what it is we’re explaining. What we call ‘philosophy of mind’ is a kind of metacognitive ‘crash space,’ a point where our various tools seem to function, but nothing ever comes of it.

The low-dimensionality of the information begets underdetermination, underdetermination begets philosophy, philosophy begets overdetermination. The idioms involved become ever more plastic, more difficult to sort and arbitrate. Crash space bloats. In a sense, intentional philosophy simply is the neuroscientific explananda problem, the florid consequence of our black box souls.

The thing that can purge philosophy is the thing that can tell you what it is.

Alien Philosophy

by rsbakker

[Consolidated, with pretty pictures]


The highest species concept may be that of a terrestrial rational being; however, we shall not be able to name its character because we have no knowledge of non-terrestrial rational beings that would enable us to indicate their characteristic property and so to characterize this terrestrial being among rational beings in general. It seems, therefore, that the problem of indicating the character of the human species is absolutely insoluble, because the solution would have to be made through experience by means of the comparison of two species of rational being, but experience does not offer us this. (Kant: Anthropology from a Pragmatic Point of View, 225)



Are there alien philosophers orbiting some faraway star, opining in bursts of symbolically articulated smells, or parsing distinctions-without-differences via the clasp of neural genitalia? What would an alien philosophy look like? Do we have any reason to think we might find some of them recognizable? Do the Greys have their own version of Plato? Is there a little green Nietzsche describing little green armies of little green metaphors?



I: The Story Thus Far

A couple years back, I published a piece in Scientia Salon, “Back to Square One: Toward a Post-intentional Future,” that challenged the intentional realist to warrant their theoretical interpretations of the human. What is the nature of the data that drives their intentional accounts? What kind of metacognitive capacity can they bring to bear?

I asked these questions precisely because they cannot be answered. The intentionalist has next to no clue as to the nature, let alone the provenance, of their data, and even less inkling as to the metacognitive resources at their disposal. They have theories, of course, but it is the proliferation of theories that is precisely the problem. Make no mistake: the failure of their project, their consistent inability to formulate their explananda, let alone provide any decisive explanations, is the primary reason why cognitive science devolves so quickly into philosophy.

But if chronic theoretical underdetermination is the embarrassment of intentionalism, then theoretical silence has to be the embarrassment of eliminativism. If meaning realism offers too much in the way of theory—endless, interminable speculation—then meaning skepticism offers too little. Absent plausible alternatives, intentionalists naturally assume intrinsic intentionality is the only game in town. As a result, eliminativists who use intentional idioms are regularly accused of incoherence, of relying upon the very intentionality they’re claiming to eliminate. Of course eliminativists will be quick to point out the question-begging nature of this criticism: They need not posit an alternate theory of their own to dispute intentional theories of the human. But they find themselves in a dialectical quandary, nonetheless. In the absence of any real theory of meaning, they have no substantive way of actually contributing to the domain of the meaningful. And this is the real charge against the eliminativist, the complaint that any account of the human that cannot explain the experience of being human is barely worth the name. [1] Something has to explain intentional idioms and phenomena, their apparent power and peculiarity. If not intrinsic or original intentionality, then what?

My own project, however, pursues a very different brand of eliminativism. I started my philosophical career as an avowed intentionalist, a one-time Heideggerean and Wittgensteinian. For decades I genuinely thought philosophy had somehow stumbled into ‘Square Two.’ No matter what doubts I entertained regarding this or that intentional account, I was nevertheless certain that some intentional account had to be right. I was invested, and even though the ruthless elegance of eliminativism made me anxious, I took comfort in the standard shibboleths and rationalizations. Scientism! Positivism! All theoretical cognition presupposes lived life! Quality before quantity! Intentional domains require intentional yardsticks!

Then, in the course of writing a dissertation on fundamental ontology, I stumbled across a new, privative way of understanding the purported plenum of the first-person, a way of interpreting intentional idioms and phenomena that required no original meaning, no spooky functions or enigmatic emergences—nor any intentional stances for that matter. Blind Brain Theory begins with the assumption that theoretically motivated reflection upon experience co-opts neurobiological resources adapted to far different kinds of problems. As a co-option, we have no reason to assume that ‘experience’ (whatever it amounts to) yields what philosophical reflection requires to determine the nature of experience. Since the systems are adapted to discharge far different tasks, reflection has no means of determining scarcity and so generally presumes sufficiency. It cannot source the efficacy of rules so rules become the source. It cannot source temporal awareness so the now becomes the standing now. It cannot source decisions so decisions (the result of astronomically complicated winner-take-all processes) become ‘choices.’ The list goes on. From a small set of empirically modest claims, Blind Brain Theory provides what I think is the first comprehensive, systematic way to both eliminate and explain intentionality.

In other words, my reasons for becoming an eliminativist were abductive to begin with. I abandoned intentionalism, not because of its perpetual theoretical disarray (though this had always concerned me), but because I became convinced that eliminativism could actually do a better job explaining the domain of meaning. Where old school, ‘dogmatic eliminativists’ argue that meaning must be natural somehow, my own ‘critical eliminativism’ shows how. I remain horrified by this how, but then I also feel like a fool for ever thinking the issue would end any other way. If one takes mediocrity seriously, then we should expect that science will explode, rather than canonize our prescientific conceits, no matter how near or dear.

But how to show others? What could be more familiar, more entrenched than the intentional philosophical tradition? And what could be more disparate than eliminativism? To quote Dewey from Experience and Nature, “The greater the gap, the disparity, between what has become a familiar possession and the traits presented in new subject-matter, the greater is the burden imposed upon reflection” (Experience and Nature, ix). Since the use of exotic subject matters to shed light on familiar problems is as powerful a tool for philosophy as for my chosen profession, speculative fiction, I propose to consider the question of alien philosophy, or ‘xenophilosophy,’ as a way to ease the burden. What I want to show is how, reasoning from robust biological assumptions, one can plausibly claim that aliens—call them ‘Thespians’—would also suffer their own versions of our own (hitherto intractable) ‘problem of meaning.’ The degree to which this story is plausible, I will contend, is the degree to which critical eliminativism deserves serious consideration. It’s the parsimony of eliminativism that makes it so attractive. If one could combine this parsimony with a comprehensive explanation of intentionality, then eliminativism would very quickly cease to be a fringe opinion.



II: Aliens and Philosophy

Of course, the plausibility of humanoid aliens possessing any kind of philosophy requires the plausibility of humanoid aliens. In popular media, aliens are almost always exotic versions of ourselves, possessing their own exotic versions of the capacities and institutions we happen to have. This is no accident. Science fiction is always about the here and now—about recontextualizations of what we know. As a result, the aliens you meet tend to seem suspiciously humanoid, psychologically if not physically. Spock always has some ‘mind’ with which to ‘meld’. To ask the question of alien philosophy, one might complain, is to buy into this conceit, which although flattering, is almost certainly not true.

And yet the environmental filtration of mutations on earth has produced innumerable examples of convergent evolution, different species evolving similar morphologies and functions, the same solutions to the same problems, using entirely different DNA. As you might imagine, however, the notion of interstellar convergence is a controversial one. [2] Supposing the existence of extraterrestrial intelligence is one thing—cognition is almost certainly integral to complex life elsewhere in the universe—but we know nothing about the kinds of possible biological intelligences nature permits. Short of actual contact with intelligent aliens, we have no way of gauging how far we can extrapolate from our case. [3] All too often, ignorance of alternatives dupes us into making ‘only game in town assumptions,’ so confusing mere possibility with necessity. But this debate need not worry us here. Perhaps the cluster of characteristics we identify with ‘humanoid’ expresses a high-probability recipe for evolving intelligence—perhaps not. Either way, our existence proves that our particular recipe is on file, that aliens we might describe as ‘humanoid’ are entirely possible.

So we have our humanoid aliens, at least as far as we need them here. But the question of what alien philosophy looks like also presupposes we know what human philosophy looks like. In “Philosophy and the Scientific Image of Man,” Wilfrid Sellars defines the aim of philosophy as comprehending “how things in the broadest possible sense of the term hang together in the broadest possible sense of the term” (1). Philosophy famously attempts to comprehend the ‘big picture.’ The problem with this definition is that it overlooks the relationship between philosophy and ignorance, and so fails to distinguish philosophical inquiry from scientific or religious inquiry. Philosophy is invested in a specific kind of ‘big picture,’ one that acknowledges the theoretical/speculative nature of its claims, while remaining beyond the pale of scientific arbitration. Philosophy is better defined, then, as the attempt to comprehend how things in general hang together in general absent conclusive information.

All too often philosophy is understood in positive terms, either as an archive of theoretical claims, or as a capacity to ‘see beyond’ or ‘peer into.’ On the definition offered here, however, philosophy characterizes a certain relationship to the unknown, one where inquiry eschews supernatural authority, and yet lacks the methodological, technical, and institutional resources of science. Philosophy is the attempt to theoretically explain in the absence of decisive warrant, to argue general claims that cannot, for whatever reason, be presently arbitrated. This is why questions serve as the basic organizing principles of the institution, the shared boughs from which various approaches branch and twig in endless disputation. Philosophy is where we ponder the general questions we cannot decisively answer, grapple with ignorances we cannot readily overcome.



III: Evolution and Ecology

A: Thespian Nature

It might seem innocuous enough defining philosophy in privative terms as the attempt to cognize in conditions of information scarcity, but it turns out to be crucial to our ability to make guesses regarding potential alien analogues. This is because it transforms the question of alien philosophy into a question of alien ignorance. If we can guess at the kinds of ignorance a biological intelligence would suffer, then we can guess at the kinds of questions they would ask, as well as the kinds of answers that might occur to them. And this, as it turns out, is perhaps not so difficult as one might suppose.

The reason is evolution. Thanks to evolution, we know that alien cognition would be bounded cognition, that it would consist of ‘good enough’ capacities adapted to multifarious environmental and reproductive impediments. Taking this ecological view of cognition, it turns out, allows us to make a good number of educated guesses. (And recall, plausibility is all that we’re aiming for here.)

So for instance, we can assume tight symmetries between the sensory information accessed, the behavioural resources developed, and the impediments overcome. If gamma rays made no difference to their survival, they would not perceive them. Gamma rays, for Thespians, would be unknown unknowns, at least pending the development of alien science. The same can be said for evolution, planetary physics—pretty much any instance of theoretical cognition you can adduce. Evolution assures that cognitive expenditures, the ability to intuit this or that, will always be bound in some manner to some set of ancestral environments. Evolution means that information that makes no reproductive difference makes no biological difference.

An ecological view, in other words, allows us to naturalistically motivate something we might have been tempted to assume outright: original naivete. The possession of sensory and cognitive apparatuses comparable to our own means Thespians will possess a humanoid neglect structure, a pattern of ignorances they cannot even begin to question, that is, pending the development of philosophy. The Thespians would not simply be ignorant of the microscopic and macroscopic constituents and machinations explaining their environments, they would be oblivious to them. Like our own ancestors, they wouldn’t even know they didn’t know.

Theoretical knowledge is a cultural achievement. Our Thespians would have to learn the big picture details underwriting their immediate environments, undergo their own revolutions and paradigm shifts as they accumulate data and refine interpretations. We can expect them to possess an implicit grasp of local physics, for instance, but no explicit, theoretical understanding of physics in general. So Thespians, it seems safe to say, would have their own version of natural philosophy, a history of attempts to answer big picture questions about the nature of Nature in the absence of decisive data.

Not only can we say their nascent, natural theories will be underdetermined, we can also say something about the kinds of problems Thespians will face, and so something of the shape of their natural philosophy. For instance, needing only the capacity to cognize movement within inertial frames, we can suppose that planetary physics would escape them. Quite simply, without direct information regarding the movement of the ground, the Thespians would have no sense of the ground changing position. They would assume that their sky was moving, not their world. Their cosmological musings, in other words, would begin supposing ‘default geocentrism,’ an assumption that would only require rationalization once others, pondering the movement of the skies, began posing alternatives.

One need only read On the Heavens to appreciate how the availability of information can constrain a theoretical debate. Given the imprecision of the observational information at his disposal, for instance, Aristotle’s stellar parallax argument becomes well-nigh devastating. If the earth revolves around the sun, then surely such a drastic change in position would impact our observations of the stars, the same way driving into a city via two different routes changes our view of downtown. But Aristotle, of course, had no decisive way of fathoming the preposterous distances involved—nor did anyone, until Galileo turned his Dutch Spyglass to the sky. [4]
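The force of the parallax argument is easy to quantify with the standard small-angle relation (a sketch using modern distance figures unavailable to Aristotle; the helper function is my own): a star’s annual parallax in arcseconds is simply the reciprocal of its distance in parsecs.

```python
# Annual stellar parallax from the small-angle approximation, using the
# Earth-Sun distance (1 AU) as baseline: p (arcseconds) = 1 / d (parsecs).
def parallax_arcsec(distance_parsecs):
    return 1.0 / distance_parsecs

# Proxima Centauri, the nearest star, lies roughly 1.3 parsecs away.
p = parallax_arcsec(1.3)
print(f"largest stellar parallax: {p:.2f} arcseconds")

# Naked-eye angular resolution is on the order of 60 arcseconds (one
# arcminute), so even the largest stellar parallax is dozens of times
# too small to observe without a telescope.
print(f"shortfall: {60 / p:.0f}x")
```

On these numbers, the absence of observable parallax was exactly what heliocentrism predicted, but only given interstellar distances that no ancient astronomer had any independent reason to credit.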

Aristotle, in other words, was victimized not so much by poor reasoning as by various perspectival illusions following from a neglect structure we can presume our Thespians share. And this warrants further guesses. Consider Aristotle’s claim that the heavens and the earth comprise two distinct ontological orders. Of course purity and circles rule the celestial, and of course grit and lines rule the terrestrial—that is, given the evidence of the naked eye from the surface of the earth. The farther away something is, the less information observation yields, the fewer distinctions we’re capable of making, the more uniform and unitary it is bound to seem—which is to say, the less earthly. An inability to map intuitive physical assumptions onto the movements of the firmament, meanwhile, simply makes those movements appear all the more exceptional. In terms of the information available, it seems safe to suppose our Thespians would at least face the temptation of Aristotle’s long-lived ontological distinction.

I say ‘temptation,’ because certainly any number of caveats can be raised here. Heliocentrism, for instance, is far more obvious in our polar latitudes (where the earth’s rotation is as plain as the summer sun in the sky), so there are observational variables that could have drastically impacted the debate even in our own case. Who knows? If it weren’t for the consistent failure of ancient heliocentric models to make correct predictions (the models assumed circular orbits), things could have gone differently in our own history. The problem of where the earth resides in the whole might have been short-lived.

But it would have been a problem all the same, simply because the motionlessness of the earth and the relative proximity of the heavens would have been our (erroneous) default assumptions. Bound cognition suggests our Thespians would find themselves in much the same situation. Their world would feel motionless. Their heavens would seem to consist of simpler stuff following different laws. Any Thespian arguing heliocentrism would have to explain these observations away, argue how they could be moving while standing still, how the physics of the ground belongs to the physics of the sky.

We can say this because, thanks to an ecological view, we can make plausible empirical guesses as to the kinds of information and capacities Thespians would have available. Not only can we predict what would have remained unknown unknowns for them, we can also predict what might be called ‘unknown half-knowns.’ Where unknown unknowns refer to things we can’t even question, unknown half-knowns refer to theoretical errors we cannot perceive simply because the information required to do so remains—you guessed it—unknown unknown.

Think of Plato’s allegory of the cave. The chained prisoners confuse the shadows for everything because, unable to move their heads from side to side, they just don’t ‘know any different.’ This is something we understand so intuitively we scarce ever pause to ponder it: the absence of information or cognitive capacity has positive cognitive consequences. Absent certain difference making differences, the ground will be cognized as motionless rather than moving, and celestial objects will be cognized as simples rather than complex entities in their own right. The ground might as well be motionless and the sky might as well be simple as far as evolution is concerned. Once again, distinctions that make no reproductive difference make no biological difference. Our lack of radio telescope eyes is no genetic or environmental fluke: such information simply wasn’t relevant to our survival.

This means that a propensity to theorize ‘ground/sky dualism’ is built into our biology. This is quite an incredible claim, if you think about it, but each step in our path has been fairly conservative, given that mere plausibility is our aim. We should expect Thespian cognition to be bounded cognition. We should expect them to assume the ground motionless, and the constituents of the sky simple. We can suppose this because we can suppose them to be ignorant of their ignorances, just as we were (and remain). Cognizing the ontological continuity of heaven and earth requires the proper data for the proper interpretation. Given a roughly convergent sensory predicament, it seems safe to say that aliens would be prone, as we were, to mistake differences in signal for differences in being, and so have to discover the universality of nature the same as we did.

But if we can assume our Thespians—or at least some of them—would be prone to misinterpret their environments the way we did, what about themselves? For centuries now humanity has been revising and sharpening its understanding of the cosmos, to the point of drafting plausible theories regarding the first second of creation, and yet we remain every bit as stumped regarding ourselves as Aristotle. Is it fair to say that our Thespians would suffer the same millennial myopia?

Would they have their own version of our interminable philosophy of the soul?



B: Thespian Souls

Given a convergent environmental and biological predicament, we can suppose our Thespians would have at least flirted with something resembling Aristotle’s dualism of heaven and earth. But as I hope to show, the ecological approach pays even bigger theoretical dividends when one considers what has to be the primary domain of human philosophical speculation: ourselves.

With evolutionary convergence, we can presume our Thespians would be eusocial, [5] displaying the same degree of highly flexible interdependence as us. This observation, as we shall see, possesses some startling consequences. Cognitive science is awash in ‘big questions’ (philosophy), among them the problem of what is typically called ‘mindreading,’ our capacity to explain/predict/manipulate one another on the basis of behavioural data alone. How do humans regularly predict the output of something so preposterously complicated as human brains on the basis of so little information?

The question is equally applicable to our Thespians, who would, like humans, possess formidable socio-cognitive capacities. As potent as those capacities were, however, we can also suppose they would be bounded, and—here’s the thing—radically so. When one Thespian attempts to cognize another, they, like us, will possess no access whatsoever to the biological systems actually driving behaviour. This means that Thespians, like us, would need to rely on so-called ‘fast and frugal heuristics’ to solve each other. [6] That is to say they would possess systems geared to the detection of specific information structures, behavioural precursors that reliably correlate, as opposed to cause, various behavioural outcomes. In other words, we can assume that Thespians will possess a suite of powerful, special purpose tools adapted to solving systems in the absence of causal information.
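A ‘fast and frugal’ heuristic of the kind gestured at here can be sketched in a few lines. The toy below (all names, cues, and values fabricated for illustration) implements a take-the-best-style rule in Gigerenzer’s sense: check cues in order of assumed validity, stop at the first one that discriminates, and ignore everything else, including any causal story about why the cues work.

```python
import itertools

# Hypothetical cue data: which of two towns is bigger? Cues are checked
# in order of assumed validity; the rule never consults more than it needs.
CUES = ["capital", "has_airport", "has_university"]

TOWNS = {
    "A": {"capital": 1, "has_airport": 1, "has_university": 1, "pop": 900},
    "B": {"capital": 0, "has_airport": 1, "has_university": 1, "pop": 400},
    "C": {"capital": 0, "has_airport": 0, "has_university": 1, "pop": 150},
    "D": {"capital": 0, "has_airport": 0, "has_university": 0, "pop": 40},
}

def take_the_best(x, y):
    """Guess the bigger town using the first discriminating cue."""
    for cue in CUES:
        if TOWNS[x][cue] != TOWNS[y][cue]:
            return x if TOWNS[x][cue] > TOWNS[y][cue] else y
    return x  # no cue discriminates: guess arbitrarily

# Score the heuristic across all six pairwise comparisons.
correct = sum(
    TOWNS[take_the_best(x, y)]["pop"] == max(TOWNS[x]["pop"], TOWNS[y]["pop"])
    for x, y in itertools.combinations(TOWNS, 2)
)
print(f"{correct} of 6 pairs correct")
```

The rule exploits reliable correlations without modelling anything about what causally produces town size, which is the sense of ‘correlate, as opposed to cause’ at issue here.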

Evolutionary convergence means Thespians would understand one another (as well as other complex life) in terms that systematically neglect their high-dimensional, biological nature. As suggestive as this is, things get really interesting when we consider the way Thespians pose the same basic problem of computational intractability (the so-called ‘curse of dimensionality’) to themselves as they do to their fellows. The constraints pertaining to Thespian social cognition, in other words, also apply to Thespian metacognition, particularly with respect to complexity. Each Thespian, after all, is just another Thespian, and so poses the same basic challenge to metacognition as they pose to social cognition. By sheer dint of complexity, we can expect the Thespian brain would remain opaque to itself as such. This means something that will turn out to be quite important: namely that Thespian self-understanding, much like ours, would systematically neglect their high-dimensional, biological nature. [7]
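The intractability being invoked can be made concrete with a toy calculation (my own illustration, not the author’s): the joint state space of a system of binary components grows exponentially with the number of components, putting exhaustive causal modelling of anything brain-like far beyond any bounded cognizer.

```python
# Toy illustration of the 'curse of dimensionality': n binary components
# admit 2**n distinct joint states, so the space explodes exponentially.
def state_count(n_components):
    return 2 ** n_components

for n in (10, 100, 300):
    print(f"{n} components -> {state_count(n)} joint states")

# Even 300 binary components already exceed the roughly 10**80 atoms
# estimated in the observable universe; brains have billions of neurons.
assert state_count(300) > 10 ** 80
```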

This suggests that life, and intelligent life in particular, would increasingly stand out as a remarkable exception as the Thespians cobbled together a mechanical understanding of nature. Why so? Because it seems a stretch to suppose they would possess a capacity so extravagant as accurate ‘meta-metacognition.’ Lacking such a capacity, they would find themselves stranded with disparate families of behaviours and entities, each correlated with different intuitions, families that would have to be recognized as such before any taxonomy could be made. Some entities and behaviours could be understood in terms of mechanical conditions, while others could not. So as extraordinary as it sounds, it seems plausible to think that our Thespians, in the course of their intellectual development, would stumble across some version of their own ‘fact-value distinction.’ All we need do is posit a handful of ecological constraints.

But of course things aren’t nearly so simple. Metacognition may solve Thespians in the same ‘fast and frugal’ manner as social cognition, but it entertains a far different relationship to its putative target. Unlike social cognition, which tracks functionally distinct systems (others) via the senses, metacognition is literally hardwired to the systems it tracks. So even though metacognition faces the same computational challenge as social cognition—cognizing a Thespian—it requires a radically different set of tools to do so. [8]

It serves to recall that evolved intelligence is environmentally oriented intelligence. Designs thrive or vanish depending on their ability to secure the resources required to successfully reproduce. Because of this, we can expect that all intelligent aliens, not just Thespians, would possess high-dimensional cognitive relations with their environments. Consider our own array of sensory modalities, how the environmental here and now ‘hogs bandwidth.’ The degree to which your environment dominates your experience is the degree to which you’re filtered to solve your environments. We live in the world simply because we’re distilled from it, the result of billions of years of environmental tuning. We can presume our aliens would be thoroughly ‘in the world’ as well, that the bulk of their cognitive capacities would be tasked with the behavioural management of their immediate environments for similar evolutionary reasons.

Since all cognitive capacities are environmentally selected, we can expect whatever basic metacognitive capacity the Thespians possess will also be geared to the solution of environmental problems. Thespian metacognition will be an evolutionary artifact of getting certain practical matters right in certain high-impact environments, plain and simple. Add to this the problem of computational intractability (which metacognition shares with social cognition) and it becomes almost certain that Thespian metacognition would consist of multiple fast and frugal heuristics (because solving on the basis of scarce data requires fewer, not more, parameters geared to particular information structures to be effective). [9] We have very good reason to suspect the Thespian brain would access and process its own structure and dynamics in ways that would cut far more corners than joints. As is the case with social cognition, it would belong to Thespian nature to neglect Thespian nature—to cognize the cognizer as something other, something geared to practical contexts.

Thespians would cognize themselves and their fellows via correlational, as opposed to causal, heuristic cognition. The curse of dimensionality necessitates it. It’s hard, I think, to overstate the impact this would have on an alien species attempting to cognize their nature. What it means is that the Thespians would possess a way to engineer systematically efficacious comportments to themselves, each other, even their environments, without being able to reverse engineer those relationships. What it means, in other words, is that a great deal of their knowledge would be impenetrable—tacit, implicit, automatic, or what have you. Thespians, like humans, would be able to solve a great many problems regarding their relations to themselves, their fellows, and their world without possessing the foggiest idea of how. The ignorance here is structural ignorance, as opposed to the ignorance, say, belonging to original naivete. One would expect the Thespians would be ignorant of their nature absent the cultural scaffolding required to unravel the mad complexity of their brains. But the problem isn’t simply that Thespians would be blind to their inner nature; they would also be blind to this blindness. Since their metacognitive capacities consistently yield the information required to solve in practical, ancestral contexts, the application of those capacities to the theoretical question of their nature would be doomed from the outset. Our Thespians would consistently get themselves wrong.

Is it fair to say they would be amazed by their incapacity, the way our ancestors were? [10] Maybe—who knows. But we could say, given the ecological considerations adduced here, that they would attempt to solve themselves assuming, at least initially, that they could be solved, despite the woefully inadequate resources at their disposal.

In other words, our Thespians would very likely suffer what might be called theoretical anosognosia. In clinical contexts, anosognosia applies to patients who, due to some kind of pathology, exhibit unawareness of sensory or cognitive deficits. Perhaps the most famous example is Anton-Babinski Syndrome, where physiologically blind patients persistently claim they can in fact see. This is precisely what we could expect from our Thespians vis-à-vis their ‘inner eye.’ The function of metacognitive systems is to engineer environmental solutions via the strategic uptake of limited amounts of information, not to reverse engineer the nature of the brain it belongs to. Repurposing these systems means repurposing systems that generally take the adequacy of their resources for granted. When we catch our tongue at Christmas dinner, we just do; we ‘implicitly assume’ the reliability of our metacognitive capacity to filter our speech. It seems wildly implausible to suppose that theoretically repurposing these systems would magically engender a new biological capacity to automatically assess the theoretical viability of the resources available. It stands to reason, rather, that we would assume sufficiency the same as before, only to find ourselves confounded after the fact.

Of course, saying that our Thespians suffer theoretical anosognosia amounts to saying they would suffer chronic, theoretical hallucinations. And once again, ecological considerations provide a way to guess at the kinds of hallucinations they might suffer.

Dualism is perhaps the most obvious. Aristotle, recall, drew his conclusions assuming the sufficiency of the information available. Contrasting the circular, ageless, repeating motion of the stars and planets to the linear riot of his immediate surroundings, he concluded that the celestial and the terrestrial comprised two distinct ontological orders governed by different natural laws, a dichotomy that prevailed some 1800 years. The moral is quite clear: Where and how we find ourselves within a system determines what kind of information we can access regarding that system, including information pertaining to the sufficiency of that information. Lacking instrumentation, Aristotle simply found himself in a position where the ontological distinction between heaven and earth appeared obvious. Unable to cognize the limits imposed by his position within the observed systems, he had no idea that he was simply cognizing one unified system from two radically different perspectives, one too near, the other too far.

Trapped in a similar structural bind vis-à-vis themselves, our navel-gazing Thespians would almost certainly mistake properties pertaining to neglect for properties pertaining to what is: distortions in signal for facts of being. Once again, since the posits possessing those properties belong to correlative cognitive systems, they would resist causal cognition. No matter how hard Thespian philosophers tried, they would find themselves unable to square their apparent functions with the machinations of nature more generally. Correlative functions would appear autonomous, as somehow operating outside the laws of nature. Embedded in their environment in a manner that structurally precludes accurately intuiting that embedment, our alien philosophers would conceive themselves as something apart, ontologically distinct. Thespian philosophy would have its own versions of ‘souls’ or ‘minds’ or ‘Dasein’ or ‘a priori’ or what have you—a disparate order somehow ‘accounting’ for various correlative cognitive modes, by anchoring the bare cognition of constraint in posits (inherited or not) rationalized on the back of Thespian fashion.

Dualisms, however, require that manifest continuities be explained, or explained away. Lacking any ability to intuit the actual machinations binding them to their environments, Thespians would be forced to rely on the correlative deliverances of metacognition to cognize their relation to their world—doing so, moreover, without the least inkling of as much. Given theoretical anosognosia (the inability to intuit metacognitive incapacity), it stands to reason that they would advance any number of acausal versions of this relationship, something similar to ‘aboutness,’ and so reap similar bewilderment. Like us, they would find themselves perpetually unable to decisively characterize ‘knowledge of the world.’ One could easily imagine the perpetually underdetermined nature of these accounts convincing some Thespian philosophers that the deliverances of metacognition comprised the whole of existence (engendering Thespian idealism), or were at least the most certain, most proximate thing, and therefore required the most thorough and painstaking examination (engendering a Thespian phenomenology)…

Could this be right?

This story is pretty complex, so it serves to review the modesty of our working assumptions. The presumption of interstellar evolutionary convergence warranted assuming that Thespian cognition, like human cognition, would be bounded, a complex bundle of ‘kluges,’ heuristic solutions to a wide variety of ecological problems. The fact that Thespians would have to navigate both brute and intricate causal environments, troubleshoot both inorganic and organic contexts, licenses the claim that Thespian cognition would be bifurcated between causal systems and a suite of correlational systems, largely consisting of ‘fast and frugal heuristics,’ given the complexity and/or the inaccessibility of the systems involved. This warranted claiming that both Thespian social cognition and metacognition would be correlational, heuristic systems adapted to solve very complicated ecologies on the basis of scarce data. This posed the inevitable problem of neglect, the fact that Thespians would have no intuitive way of assessing the adequacy of their metacognitive deliverances once they applied them to theoretical questions. This let us suppose theoretical anosognosia, the probability that Thespian philosophers would assume the sufficiency of radically inadequate resources—systematically confuse artifacts of heuristic neglect for natural properties belonging to extraordinary kinds. And this let us suggest they would have their own controversies regarding mind-body dualism, intentionality, even knowledge of the external world.

As with Thespian natural philosophy, any number of caveats can be raised at any number of junctures, I’m sure. What if, for instance, Thespians were simply more pragmatic, less inclined to suffer speculation in the absence of decisive application? Such a dispositional difference could easily tilt the balance in favour of skepticism, relegating the philosopher to the ghettos of Thespian intellectual life. Or what if Thespians were more impressed by authority, to the point where reflection could only be interrogated refracted through the lens of purported revelation? There can be no doubt that my account neglects countless relevant details. Questions like these chip away at the intuition that the Thespians, or something like them, might be real.

Luckily, however, this doesn’t matter. The point of posing the problem of xenophilosophy wasn’t so much to argue that Thespians are out there, as it was, strangely enough, to recognize them in here.

After all, this exercise in engineering alien philosophy is at once an exercise in reverse-engineering our own. Blind Brain Theory only needs Thespians to be plausible to demonstrate its abductive scope, the fact that it can potentially explain a great many perplexing things on nature’s dime alone.

So then what have we found? That traditional philosophy is something best understood as… what?

A kind of cognitive pathology?

A disease?



IV: Conclusion

It’s worth, I think, spilling a few words on the subject of that damnable word, ‘experience.’ Dogmatic eliminativism is a religion without gods or ceremony, a relentlessly contrarian creed. And this has placed it in the untenable dialectical position of apparently denying what is most obvious. After all, what could be more obvious than experience?

What do I mean by ‘experience’? Well, the first thing I generally think of is the Holocaust, and the palpable power of the Survivor.

Blind Brain Theory paints a theoretical portrait wherein experience remains the most obvious thing in practical, correlational ecologies, while becoming a deeply deceptive, largely chimerical artifact in high-dimensional, causal ones. We have no inkling of tripping across ecological boundaries when we propose to theoretically examine the character of experience. What was given to deliberative metacognition in some practical context (ruminating upon a social gaffe, say) is now simply given to deliberative metacognition in an artificial one—‘philosophical reflection.’ The difference between applications is nothing if not extreme, and yet conclusions are drawn assuming sufficiency, again and again and again—for millennia.

Think of the difference between your experience and your environment, say, in terms of the difference between concentrating on a mental image of your house and actually observing it. Think of how few questions the mental image can answer compared to the visual image. Where’s the grass the thickest? Is there birdshit on the lane? Which branch comes closest to the ground? These questions just don’t make sense in the context of mental imagery.

Experience, like mental imagery, is something that only answers certain questions. Of course, the great, even cosmic irony is that this is the answer that has been staring us in the fucking face all along. Why else would experience remain an enduring part of philosophy, the institution that asks how things in the most general sense hang together in the most general sense without any rational hope of answer?

Experience is obvious—it can be nothing but obvious. The palpable power of the Holocaust Survivor is, I think, as profound a testament to the humanity of experience as there is. Their experience is automatically our own. Even philosophers shut up! It correlates us in a manner as ancient as our species, allows us to engineer the new. At the same time, it cannot but dupe and radically underdetermine our ancient, Sisyphean ambition to peer into the soul through the glass of the soul. As soon as we turn our rational eye to experience in general, let alone the conditions of possibility of experience, we run afoul of illusions, impossible images that, in our diseased state, we insist are real.

This is what our creaking bookshelves shout in sum. The narratives, they proclaim experience in all its obvious glory, while treatise after philosophical treatise mutters upon the boundary of where our competence quite clearly comes to an end. Where we bicker.


At least we have reason to believe that philosophers are not alone in the universe.




[1] The eliminativism at issue here is meaning eliminativism, and not, as Stich, Churchland, and many others have advocated, psychological eliminativism. But where meaning eliminativism clearly entails psychological eliminativism, it is not at all obvious that psychological eliminativism entails meaning eliminativism. This was why Stich found himself so perplexed by the implications of reference (see his Deconstructing the Mind, especially Chapter 1). To assume that folk psychology is a mistaken theory is to assume that folk psychology is representational, something that is true or false of the world. The critical eliminativism espoused here suffers no such difficulty, but at the added cost of needing to explain meaning in general, and not simply commonsense human psychology.

[2] See Kathryn Denning’s excellent “Social Evolution in Cosmic Context.”

[3] Nicolas Rescher, for instance, makes hash of the time-honoured assumption that aliens would possess a science comparable to our own by cataloguing the myriad contingencies of the human institution. See Finitude, 28, or Unknowability, “Problems of Alien Cognition,” 21-39.

[4] Stellar parallax, on this planet at least, was not measured until 1838.

[5] In the broad sense proposed by Wilson in The Social Conquest of the Earth.

[6] This amounts to taking a position in the mindreading debate that some theorists would find problematic, particularly those skeptical of modularity and/or with representationalist sympathies. Since the present account provides a parsimonious means of explaining away the intuitions informing both positions, it would be premature to engage the debate regarding either at this juncture. The point is to demonstrate what heuristic neglect, as a theoretical interpretative tool, allows us to do.

[7] The representationalist would cry foul at this point, claim the existence of some coherent ‘functional level’ accessible to deliberative metacognition (the mind) allows for accurate and exhaustive description. But once again, since heuristic neglect explains why we’re so prone to develop intuitions along these lines, we can sidestep this debate as well. Nobody knows what the mind is, or whatever it is they take themselves to be describing. The more interesting question is one of whether a heuristic neglect account can be squared with the research pertaining directly to this field. I suspect so, but for the interim I leave this to individuals more skilled and more serious than myself to investigate.

[8] In the literature, accounts that claim metacognitive functions for mindreading are typically called ‘symmetrical theories.’ Substantial research supports the claim that metacognitive reporting involves social cognition. See Carruthers, “How we know our own minds: the relationship between mindreading and metacognition,” for an outstanding review.

[9] Gerd Gigerenzer and the Adaptive Behaviour and Cognition Research Group have demonstrated that simple heuristics are often far more effective than even optimization methods possessing far greater resources. “As the amount of data available to make predictions in an environment shrinks, the advantage of simple heuristics over complex algorithms grows” (Hertwig and Hoffrage, “The Research Agenda,” Simple Heuristics in a Social World, 23).

[10] “Quid est enim tempus? Quis hoc facile breuiterque explicauerit? Quis hoc ad uerbum de illo proferendum uel cogitatione comprehenderit? Quid autem familiarius et notius in loquendo commemoramus quam tempus? Et intellegimus utique cum id loquimur, intellegimus etiam cum alio loquente id audimus. Quid est ergo tempus? Si nemo ex me quærat, scio; si quærenti explicare uelim, nescio.” Augustine, Confessions, XI.14: “What, then, is time? Who can easily and briefly explain it? Who can even comprehend it in thought so as to put it into words? Yet what do we mention more familiarly and knowingly in conversation than time? And surely we understand when we speak of it; we understand also when we hear another speak of it. What, then, is time? If no one asks me, I know; if I wish to explain it to one who asks, I know not.”

Malabou, Continentalism, and New Age Philosophy

by rsbakker

Perhaps it’s an ex-smoker thing, the fact that I was a continentalist myself for so many years. Either way, I generally find continental philosophical forays into scientific environs little more than exercises in conceptual vanity (see “Reactionary Atheism: Hagglund, Derrida, and Nooconservatism,” “Zizek, Hollywood, and the Disenchantment of Continental Philosophy,” or “Life as Perpetual Motion Machine: Adrian Johnston and the Continental Credibility Crisis”). This is particularly true of Catherine Malabou, who, as far as I can tell, is primarily concerned with cherry-picking those findings that metaphorically resonate with certain canonical continental philosophical themes. For me, her accounts merely demonstrate the deepening conceptual poverty of the continental tradition, a poverty dressed up in increasingly hollow declarations of priority. This is true of “One Life Only: Biological Resistance, Political Resistance,” but with a crucial twist.

In this piece, she takes continentalism (or ‘philosophy,’ as she humbly terms it) as her target, charging it with a pervasive conceptual prejudice. She wants to show how recent developments in epigenetics and cloning reveal what she terms the “antibiological bias of philosophy.” This bias is old news, of course (especially in these quarters), but Malabou’s acknowledgement is heartening nonetheless, at least to those, such as myself, who think the continental penchant for conceptual experimentation is precisely what contemporary cognitive science requires.

“Contemporary philosophy,” she claims, “bears the marks of a primacy of symbolic life over biological life that has not been criticized, nor deconstructed.” Her predicate is certainly true—continentalism is wholly invested in the theoretical primacy of intentionality—but her subsequent modifier simply exemplifies the way we humans are generally incapable of hearing criticisms originating outside our own ingroup. After all, it’s the quasi-religious insistence on the priority of the intentional, the idea that armchair speculation on the nature of the intentional trumps empirical findings in this or that way, that has rendered continentalism a laughing-stock in the sciences.

But outgroup criticisms are rarely heard. Whatever ‘othering the other’ consists in, it clearly involves not only their deracination, but their derationalization, the denial of any real critical insight. This is arguably what makes the standard continental shibboleths of ‘scientism,’ ‘positivism,’ and the like so rhetorically effective. By identifying an interlocutor as an outgroup competitor, you ensure your confederates will be incapable of engaging him or her rationally. Continentalists generally hear ideology instead of cogent criticism. The only reason Malabou can claim that the ‘primacy of the symbolic over the biological’ has been ‘neither criticized nor deconstructed’ is simply that so very few within her ingroup have been able to hear the outgroup chorus, as thunderous as it has been.

But Malabou is a party member, and to her credit, she has done anything but avert her eyes from the scientifically mediated revolution sweeping the ground from beneath all our feet. One cannot dwell in foreign climes without suffering some kind of transformation of perspective. And at long last she has found her way to the crucial question, the one which threatens to overthrow her own discursive institution, the problem of what she terms the “unquestioned splitting of the concept of life.”

She takes care, however, to serve up the problem with various appeals to continental vanity—to hide the poison in some candy, you might say.

It must be said, the biologists are of little help with this problem. Not one has deemed it necessary to respond to the philosophers or to efface the assimilation of biology to biologism. It seems inconceivable that they do not know Foucault, that they have never encountered the word biopolitical. Fixated on the two poles of ethics and evolutionism, they do not think through the way in which the science of the living being could—and from this point on should—unsettle the equation between biological determination and political normalization. The ethical shield with which biological discourse is surrounded today does not suffice to define the space of a theoretical disobedience to accusations of complicity among the science of the living being, capitalism, and the technological manipulation of life.

I can remember finding ignorances like these ‘inconceivable,’ thinking that if only scientists would ‘open their eyes’ (read so and so) they would ‘see’ (their conceptually derivative nature). But why should any biologist read Foucault, or any other continentalist for that matter? What distinguishes continental claims to the priority of their nebulous domain over the claims of, say, astrology, particularly when the dialectical strategies deployed are identical? Consider what Manly P. Hall has to say in The Story of Astrology:

Materialism in the present century has perverted the application of knowledge from its legitimate ends, thus permitting so noble a science as astronomy to become a purely abstract and comparatively useless instrument which can contribute little more than tables of meaningless figures to a world bankrupt in spiritual, philosophical, and ethical values. The problem as to whether space is a straight or a curved extension may intrigue a small number of highly specialized minds, but the moral relationship between man and space and the place of the human soul in the harmony of the spheres is vastly more important to a world afflicted with every evil that the flesh is heir to. (Manly P. Hall, The Story of Astrology: The Belief in the Stars as a Factor in Human Progress, Cosimo, Inc., 2005, 8.)

Sound familiar? If you’ve read any amount of continental philosophy it should. One can dress up the relation between the domains differently, but the shape remains the same. Where astronomy is merely ontic or ideological or technical or what have you, astrology ministers to the intentional realities of lived life. The continentalist would cry foul, of course, but the question isn’t so much one of what they actually believe as one of how they appear. Insofar as they place various, chronically underdetermined speculative assertions before the institutional apparatuses of science, they sound like astrologers. Their claims of conceptual priority, not surprisingly, are met with incredulity and ridicule.

The fact that biologists neglect Foucault is no more inconceivable than the fact that astronomers neglect Hall. In science, credibility is earned. Everybody but everybody thinks they’ve won the Magical Belief Lottery. The world abounds with fatuous, theoretical claims. Some claims enable endless dispute (and, for a lucky few, tenure), while others enable things like smartphones, designer babies, and the detonation of thermonuclear weapons. Since there’s no counting the former, the scientific obsession with the latter is all but inevitable. Speculation is cheap. Asserting the primacy of the symbolic over the natural on speculative grounds is precisely the reason why scientists find continentalism so bizarre.

Akin to astrology.

Now historically, at least, continentalists have consistently externalized the problem, blaming their lack of outgroup credibility on speculative scapegoats like the ‘metaphysics of presence,’ ‘identity thinking,’ or some other combination of ideology and ontology. Malabou, to her credit, wants ‘philosophy’ to partially own the problem, to see the parsing of the living into symbolic and biological as something that must itself be argued. She offers her quasi-deconstructive observations on recent developments in epigenetics and cloning as a demonstration of that need, as examples of the ways the new science is blurring the boundaries between the intentional and the natural, the symbolic and the biological, and therefore outrunning philosophical critiques that rely upon their clear distinction.

This blurring is important because Malabou, like most all continentalists, fears for the future of the political. Reverse engineering biology amounts to placing biology within the purview of engineering, of rendering all nature plastic to human whim, human scruple, human desire. ‘Philosophy’ may come first, but (for reasons continentalists are careful to never clarify) only science seems capable of doing any heavy lifting with their theories. One need only trudge the outskirts of the vast swamp of neuroethics, for instance, to get a sense of the myriad conundrums that await us on the horizon.

And this leads Malabou to her penultimate statement, the one which I sincerely hope ignites soul-searching and debate within continental philosophy, lest the grand old institution become indistinguishable from astrology altogether.

And how might the return of these possibilities offer a power of resistance? The resistance of biology to biopolitics? It would take the development of a new materialism to answer these questions, a new materialism asserting the coincidence of the symbolic and the biological. There is but one life, one life only.

I entirely agree, but I find myself wondering what Malabou actually means by ‘new materialism.’ If she means, for instance, that the symbolic must be reduced to the natural, then she is referring to nothing less than the long-standing holy grail of contemporary cognitive science. Until we can understand the symbolic in terms continuous with our understanding of the natural world, it’s doomed to remain a perpetually underdetermined speculative domain—which is to say, one void of theoretical knowledge.

But as her various references to the paradoxical ‘gap’ between the symbolic and the biological suggest, she takes the irreducibility of the symbolic as axiomatic. The new materialism she’s advocating is one that unifies the symbolic and the biological, while somehow respecting the irreducibility of the symbolic. She wants a kind of ‘type-B materialism,’ one that asserts the ontological continuity of the symbolic and the biological, while acknowledging their epistemic disparity or conceptual distinction. David Chalmers, who coined the term, characterizes the problem faced by such materialisms as follows:

I was attracted to type-B materialism for many years myself, until I came to the conclusion that it simply cannot work. The basic reason for this is simple. Physical theories are ultimately specified in terms of structure and dynamics: they are cast in terms of basic physical structures, and principles specifying how these structures change over time. Structure and dynamics at a low level can combine in all sorts of interesting ways to explain the structure and function of high-level systems; but still, structure and function only ever adds up to more structure and function. In most domains, this is quite enough, as we have seen, as structure and function are all that need to be explained. But when it comes to consciousness, something other than structure and function needs to be accounted for. To get there, an explanation needs a further ingredient. (“Moving Forward on the Problem of Consciousness.”)

Substitute ‘symbolic’ for ‘consciousness’ in this passage, and Malabou’s challenge becomes clear: science, even in the cases of epigenetics and cloning, deals with structure and dynamics—mechanisms. As it stands we lack any consensus-commanding way of explaining the symbolic in mechanistic terms. So long as the symbolic remains ‘irreducible,’ or mechanistically inexplicable, assertions of ontological continuity amount to no more than that, bald assertions. Short some plausible account of that epistemic difference in ontologically continuous terms, type-B materialisms amount to little more than wishing upon traditional stars.

It’s here where we can see Malabou’s institutional vanity most clearly. Her readings of epigenetics and cloning focus on the apparently symbolic features of the new biology—on the ways in which organisms resemble texts. “The living being does not simply perform a program,” she writes. “If the structure of the living being is an intersection between a given and a construction, it becomes difficult to establish a strict border between natural necessity and self-invention.”

Now the first, most obvious criticism of her reading is that she is the proverbial woman with the hammer, poring through the science, seeing symbolic nails at every turn. Are epigenetics and cloning intrinsically symbolic? Do they constitute a bona fide example of a science beyond structure and dynamics?

Certainly not. Science can reverse engineer our genetic nature precisely because our genetic nature is a feat of evolutionary engineering. This kind of theoretical cognition is so politically explosive precisely because it is mechanical, as opposed to ‘symbolic.’ Researchers now know how some of these little machines work, and as a result they can manipulate conditions in ways that illuminate the function of other little machines. And the more they learn, the more mechanical interventions they can make, the more plastic (to crib one of Malabou’s favourite terms) human nature becomes. The reason these researchers hold so much of our political future in their hands is precisely because their domain (unlike Malabou’s) is mechanical.

For them, Malabou’s reading of their fields would be obviously metaphoric. Malabou’s assumption that she is seeing the truth of epigenetics and cloning, that they have to be textual in some way rather than lending themselves to certain textual (deconstructive) metaphors, would strike them as comically presumptuous. The blurring that she declares ontological, they would see as epistemic. To them, she’s just another humanities scholar scrounging for symbolic ammunition, for confirmation of her institution’s importance in a time of crisis. Malabou, like Manly P. Hall, can rationalize this dismissal in any number of ways; this goes without saying. Her problem, like Hall’s, is that only her confederates will agree with her. She has no real way of prosecuting her theoretical case across ingroup boundaries, and so no way of recouping any kind of transgroup cognitive legitimacy, no way of reversing the slow drift of ‘philosophy’ to the New Age section of the bookstore.

The fact is Malabou begins by presuming the answer to the very question she claims to be tackling: What is the nature of the symbolic? To acknowledge that continental philosophy is a speculative enterprise is to acknowledge that continental philosophy has solved nothing. The nature of the symbolic, accordingly, remains an eminently open question (not to mention an increasingly empirical one). The ‘irreducibility’ of the symbolic order is no more axiomatic than the existence of God.

If the symbolic were, say, ecological, the product of evolved capacities, then we can safely presume that the symbolic is heuristic, part of some regime for solving problems on the cheap. If this were the case, then Malabou is doing nothing more than identifying the way different patterns in epigenetics and cloning readily cue a specialized form of symbolic cognition. The fact that symbolic cognition is cued does not mean that epigenetics and cloning are ‘intrinsically symbolic,’ only that they readily cue symbolic cognition. Given the vast amounts of information neglected by symbolic cognition, we can presume its parochialism, its dependence on countless ecological invariants, namely, the causal structure of the systems involved. Given that causal information is the very thing symbolic cognition has adapted to neglect, we can presume that its application to nature would prove problematic. This raises the likelihood that Malabou is simply anthropomorphizing epigenetics and cloning in an institutionally gratifying way.

So is the symbolic heuristic? It certainly appears to be. At every turn, cognition makes do with ‘black boxes,’ relying on differentially reliable cues to leverage solutions. We need ways to think outcomes without antecedents, to cognize consequences absent any causal factors, simply because the complexities of our environments (be they natural, social, or recursive) radically outrun our capacity to intuit. The bald fact is that the machinery of things is simply too complicated to cognize on the evolutionary cheap. Luckily, nature requires nothing as extravagant as mechanical knowledge of environmental systems to solve those systems in various, reproductively decisive ways. You don’t need to know the mechanical details of your environments to engineer them. So long as those details remain relatively fixed, you can predict/explain/manipulate them via those correlated systematicities you can access.

We genuinely need things like symbolic cognition, regimes of ecologically specific tools, for the same reason we need scientific enterprises like biology: because the machinery of most everything is either too obscure or too complex. The information we access provides us cues, and since we neglect all information pertaining to what those cues relate us to, we’re convinced that cues are all that is the case. And since causal cognition cannot duplicate the cognitive shorthand of the heuristics involved, they appear to comprise an autonomous order, to be something supernatural, or to use the prophylactic jargon of intentionalism, ‘irreducible.’ And since the complexities of biology render these heuristic systems indispensable to the understanding of biology, they appear to be necessary, to be ‘conditions of possibility’ of any cognition whatsoever. We are natural in such a way that we cannot cognize ourselves as natural, and so cognize ourselves otherwise. Since this cognitive incapacity extends to our second-order attempts to cognize our cognizing, we double down, metacognize this ‘otherwise’ in otherwise terms. Far from any fractionate assembly of specialized heuristic tools, symbolic cognition seems to stand not simply outside, but prior to the natural order.

Thus the insoluble conundrums and interminable disputations of Malabou’s ‘philosophy.’

Heuristics and metacognitive neglect provide a way to conceive symbolic cognition in wholly natural terms. Blind Brain Theory, in other words, is precisely the ‘new materialism’ that Malabou seeks. The problem is that it seems to answer Malabou’s question regarding the political in the negative, to suggest that even the concept of ‘resistance’ belongs to a bygone and benighted age. To understand the coincidence of the symbolic and biological, the intentional and the natural, one must understand the biology of philosophical reflection, and the way we were evolutionarily doomed to think ourselves something quite distinct from what we in fact are (see “Alien Philosophy,” parts one and two). One must turn away from the old ways, the old ideas, and dare to look hard at the prospect of a post-intentional future. The horrific prospect.

Odds are we were wrong, folks. The assumption that science, the great killer of cognitive traditions, will make an exception for us, somehow redeem our traditional understanding of ourselves, is becoming increasingly tendentious. We simply do not have the luxury of taking our cherished, traditional conceits for granted—at least not anymore. The longer continental philosophy pretends to be somehow immune, or even worse, to somehow come first, the more it will come to resemble those traditional discourses that, like astrology, refuse to relinquish their ancient faith in abject speculation.