Three Pound Brain

No bells, just whistling in the dark…

Month: October, 2013

Just Plain Crazy Enactive Cognition: A Review and Critical Discussion of Radicalizing Enactivism: Basic Minds without Content, by Dan Hutto and Erik Myin

by rsbakker

Mechanically, the picture of how we are related to our environment is ontologically straightforward and astronomically complicated. Intentionally, the picture of how we are related to our environment is ontologically occult and surprisingly simple. Since the former is simply an extension of the scientific project into what was historically the black-box domain of the human, it is the latter that has been thrown into question. Pretty much all philosophical theories of consciousness and cognition break over how to conceive the relation between these two pictures. Very few embrace all apparent intentional phenomena,[1] but the vast majority of theorists embrace at least some—typically those they believe most indispensable for cognition. Given the incompatibility of these with the mechanical picture, they need some way to motivate their application.

But why bother? If the intentional resists explanation in natural terms, and if the natural explanation of cognition is our primary desideratum, then why not simply abandon the intentional? The answer to this question is complex, but the fact remains that any explanation of knowing, whether it involves ‘knowing how’ or ‘knowing that,’ has to explain the manifest intentionality of knowledge. No matter what one thinks of intentionality, any scientific account of cognition is going to have to explain it—at least to be convincing.

Why? Because explanation requires an explanandum, and the explanandum in this instance is, intuitively at least, intentional through and through. To naturally explain cognition, one must naturally explain correct versus incorrect cognition, because, for better or worse, this is how cognition is implicitly conceived. The capacity to be right or wrong, true or false, is a glaring feature of all cognition, so much so that any explanation that fails to explain it pretty clearly fails to explain cognition.[2]

So despite the naturalistic inscrutability of intentionality, it nonetheless remains an ineliminable feature of cognition. We find ourselves in the apparent bind of having to naturalistically explain something that cannot be naturalistically explained in order to explain cognition. Thus what might be called the great Scandal of Cognitive Science: the lack of any consensus-commanding definition, let alone explanation, of what cognition is. The naturalistic inscrutability versus the explanatory ineliminability of intentionality is the perennial impasse, the ‘Master Hard Problem,’ one might say, underwriting the aforementioned Scandal.

Radicalizing Enactivism: Basic Minds without Content, by Dan Hutto and Erik Myin, constitutes another attempt to finesse this decidedly uncomfortable situation. Both Hutto and Myin are proponents of the ‘enactive,’ or ‘embodied,’ cognitive research programme, an outlook that emphasizes understanding cognition, and even phenomenal consciousness, in environmentally holistic terms—as ‘wide’ or ‘extended.’ The philosophical roots of enactivism are various and deep,[3] but they all share a common antagonism to the representationalism that characterizes mainstream cognitive science. Once one defines cognition in terms of computations performed on representations, one has effectively sealed cognition inside the head. Where enactivists are prone to explicitly emphasize the continuity of cognition and behaviour, representationalists are prone to implicitly assume their discontinuity. Even though animal life so obviously depends on solving environments via behaviour, both in its evolutionary genesis and in its daily maintenance, representationalists generally think this behavioural solving of the world is the product of a prior cognitive solving of representations of the world. The wide cognition championed by the enactivist, therefore, requires the critique of representationalism.

This is the task that Hutto and Myin set themselves. As they write, “We will have succeeded if, having reached the end of the book, the reader is convinced that the idea of basic contentless minds cannot be cursorily dismissed; that it is a live option that deserves to be taken much more seriously than it is currently” (xi).

As much as I enjoyed the book, I’m not so sure they succeed. But I’ve been meaning to discuss the relation between embodied cognitive accounts and the Blind Brain Theory for quite some time and Radicalizing Enactivism presents the perfect opportunity to finally do so. I know of a few souls following Three Pound Brain who maintain enactivist sympathies. If you happen to be one of them, I heartily encourage you to chip in your two cents.

Without any doubt, the strength of Radicalizing Enactivism, and the reason it seems to have garnered so many positive reviews, lies in the lucid way Hutto and Myin organize their critique around what they call the ‘Hard Problem of Content’:

“Defenders of CIC [Cognition necessarily Involves Content] must face up to the Hard Problem of Content: that positing informational content is incompatible with explanatory naturalism. The root trouble is that Covariance doesn’t Constitute Content. If covariance is the only scientifically respectable notion of information that can do the work required by explanatory naturalists, it follows that informational content does not exist in nature—or at least it doesn’t exist independently from and prior to the existence of certain social practices. If informational content doesn’t exist in nature, then cognitive systems don’t literally traffic in informational content…” xv

The information they are referring to here is semantic information, or as Floridi puts it in his seminal The Philosophy of Information, “the kind of information that we normally take to be essential for epistemic purposes” (82). To say that cognition necessarily involves content is to say that cognition amounts to the manipulation of information about. The idea is as intuitive as can be: the senses soak up information about the world, which the brain first cognizes then practically utilizes. For most theorists, the truth of this goes without saying: the primary issue is one of the role truth plays in semantic information. For these theorists, the problem that Hutto and Myin allude to, the Hard Problem of Content, is more of a ‘going concern’ than a genuine controversy. But if anything this speaks to its intractability as opposed to its relevance. For Floridi, who calls it the Symbol Grounding Problem (following Harnad (1990)), it remains “one of the most important open questions in the philosophy of information” (134). As it should, given that it is the question upon which the very possibility of semantic information depends.

The problem is one of explaining how information understood as covariance, which can be quantified and so rigorously operationalized, comes to possess the naturalistically mysterious property of ‘aboutness,’ and thus the equally mysterious property of ‘evaluability.’ As with the Hard Problem of Consciousness, many theoretical solutions have been proposed and all have been found wanting in some obvious respect.

Calling the issue ‘the Hard Problem of Content’ is both justified and rhetorically inspired, given the way it imports the obvious miasma of Consciousness Research into the very heart of Cognitive Science. Hutto and Myin wield it the way the hero wields a wooden stake in a vampire movie. They patiently map out the implicatures of various content-dependent approaches, show how each copes with various challenges, then finally hammer the Hard Problem of Content through their conceptual heart.

And yet, since this problem has always been a problem, there’s a sense in which Hutto and Myin are demanding that intentionalists bite a bullet (or stake) they have bitten long ago. This has the effect of rendering much of their argument rhetorical—at least it did for me. The problem isn’t that the intentionalists haven’t been able to naturalize intentionality in any remotely convincing way, the problem is that no one has—including Hutto and Myin!

And this, despite all the virtues of this impeccably written and fascinating book, has to be its signature weakness: the fact that Hutto and Myin never manage to engage, let alone surmount, the apparent ineliminability of the intentional. All they really do is exorcise content from what they call ‘basic’ cognition and perception, all the while conceding the ineliminability of content to language and ‘social scaffolding.’ The more general concession they make to explanatory ineliminability is actually explicit in their thesis “that there can be intentionally directed cognition and, even, perceptual experience without content” (x).

So if you read this book hoping to be illuminated as to the nature of the intentional, you will be disappointed. As much as Hutto and Myin would like to offer illumination regarding intentionality, all they really have is another strategic alternative in the end, a way to be less worried about the naturalistic inscrutability of content in particular rather than intentionality more generally. At turns, they come just short of characterizing Radical Enactive Cognition the way Churchill famously characterized democracy: as the least worst way to conceptualize cognition.

So in terms of the Master Hard Problem of naturalistic inscrutability versus explanatory ineliminability, they also find it necessary to bite the inscrutability bullet, only as softly as possible lest anyone hear. They are not interested in any thoroughgoing content skepticism, or what they call ‘Really Radical Enactive or Embodied Cognition’: “Some cognitive activity—plausibly, that associated with and dependent upon the mastery of language—surely involves content” (xviii). Given that their Hard Problem of Content partitions the Master Problem along such narrow, and ultimately arbitrary, lines, it becomes difficult to understand why anyone should think their position ‘radical’ in any sense.

If they’re not interested in any thoroughgoing content skepticism, they’re even less interested in any thoroughgoing meaning skepticism. Thus the sense of conceptual opportunism that haunted my reading of the book: the failure to tackle the problem of intentionality as a whole lets them play fast and loose with the reader’s intuitions of explanatory ineliminability. Representational content, after all, is the traditional and still (despite the restlessness of graduate students around the world) canonical way of understanding ‘intentional directedness.’ Claiming that representational content runs afoul of inscrutability amounts to pointing out the obvious. This means the problem lies in its apparent ineliminability. Pointing out that the representational mountain cannot be climbed simply begs the question of how one gets around it. Systematically avoiding this question lets Hutto and Myin have it both ways, to raise the problem of inscrutability where it serves their theoretical interests, all the while implicitly assuming the very ineliminability that justifies it.

One need only compare the way they hold Tyler Burge (2010) accountable to the Hard Problem of Content in Chapter 6 with their attempt to circumvent the Hard Problem of Consciousness in Chapter 8. Burge accepts both inscrutability, the apparent inability to naturalize intentionality, and ineliminability, the apparent inability to explain cognition without intentionality. Like Bechtel, he thinks representational inscrutability is irrelevant insofar as cognitive science has successfully operationalized representations. Rather than offer a ‘straight solution’ to the Hard Problem of Content, Burge argues that we should set it aside, and allow science—and the philosophy concerned with it—to continue pursuing achievable goals.

Hutto and Myin complain:

“Without further argumentation, Burge’s proposal is profoundly philosophically unsatisfying. Even if we assume that contentful states of mind must exist because they are required by perceptual science, this does nothing to address deeply puzzling questions about how this could be so. It is, in effect, to argue from the authority of science. We are asked to believe in representational content even though none of the mysteries surrounding it are dealt with—and perhaps none of them may ever be dealt with. For example, how do the special kinds of natural norms of which Burge speaks come into being? What is their source, and what is their basis? How can representational contents qua representational contents cause, or bring about, other mental or physical events?” 116-117

But when it comes to the Hard Problem of Consciousness, Hutto and Myin find themselves whistling an argumentative tune that sounds eerily similar to Burge’s. Like Burge, they refuse to offer any ‘straight solutions,’ arguing that “[r]ather than presenting science and philosophy with an agenda of solving impossible problems, [their] approach liberates both science and philosophy to pursue goals they are able to achieve” (178). And since this is the last page of the book, no corresponding problem of ‘profound philosophical dissatisfaction’ ever arises.

The problem of Radicalizing Enactivism—and the reason why I think it will ultimately harden opinions against the enactivist programme—lies in its failure to assay the shape of what I’ve been calling the Master Problem of naturalistic inscrutability and explanatory ineliminability. The inscrutability of content is simply a small part of this larger problem, which involves, not only the inscrutability of intentionality more generally, but the all-important issue of ineliminability as well, the fact that various ‘intentional properties’ such as evaluability so clearly seem to belong to cognition. By focussing on the inscrutability of content to the exclusion of the Master Problem, they are able to play on specific anxieties due to inscrutability without running afoul of more general scruples regarding ineliminability. They can eat their intentional cake and have it too.[4]

Personally, I’m inclined to agree with the more acerbic critics of so-called ‘radical,’ or anti-representationalist, enactivism: it simply is not a workable position.[5] But I think I do understand its appeal, why, despite forcing its advocates to fudge and dodge the way they seem to do on what otherwise seem to be relatively straightforward issues, it nevertheless continues to grow in popularity. First and foremost, the problem of inscrutability has grown quite long in the tooth: after decades of pondering this problem, our greatest philosophical minds have only managed to deepen the mire. Add to this the successes of DST and situated AI, plus the simple observation that we humans are causally embedded in—‘coupled to’—our causal environments, and it becomes easy to see how mere paradigm fatigue can lapse into outright paradigm skepticism.

I think Hutto and Myin are right in insisting that representationalism has been played out, that it’s time to move on. The question is really only one of how far we have to move. I actually think this, the presentiment of needing to get away, to start anew, is why ‘radical’ has become such a popular modifier in embodied cognition circles. But I’m not sure it’s a modifier that any of these positions necessarily deserve. I say this because I’m convinced that answering the Master Problem of inscrutability versus ineliminability forces us to move far, far further than any philosopher (that I know of at least) has hitherto dared to go. The fact is Hutto and Myin remain intentionalists, plain and simple. To put it bluntly: if they count as ‘radical,’ then they better lock me up, because I’m just plain crazy![6]

If I’m right, the only way to drain the inscrutability swamp is to tackle the problem of inscrutability whole, which is to say, to tackle the Master Problem. So long as inscrutability remains a problem, the strategy of partitioning intentionality into ‘good’ and ‘bad,’ eliminable and ineliminable—the strategy that Hutto and Myin share with representationalists more generally—can only lead to a reorganization of the controversy. Perhaps one of these reorganizations will turn out to be the lucky winner—who can say?—but it’s important to see that Radical Enactive Cognition, despite its claims to the contrary, amounts to ‘more of the same’ in this crucial respect. All things being equal, it’s doomed to complicate as opposed to solve, insofar as it merely resituates (in this case, literally!) the problem of inscrutability.

Now I’m an institutional outsider, which is rarely a good thing if you have a dramatic reconceptualization to sell. When matters become this complicated, professionalization allows us to sort the wheat from the chaff before investing time and effort in either. The problem, however, is that chaff seems to be all anyone has. What I’m calling the Scandal of Cognitive Science represents as clear an example of institutional failure as you will find in the sciences. Given that the problem of inscrutability turns on explicit judgments and implicit assumptions that have been institutionalized, there’s a sense in which hobbyists such as myself, individuals who haven’t been stamped by the conceptual prejudices of their supervisors, or shamed out of pursuing an unconventional line of reasoning by the embarrassed smiles of their peers, may actually have a kind of advantage.

Regardless, there are novel ways to genuinely radicalize this problem, and if they initially strike you as ‘crazy,’ it might just be because they are sane. The Scandal of Cognitive Science, after all, is the fact that its members have no decisive means to judge one way or another! So, with this in mind, I want to introduce what might be called ‘Just Plain Crazy Enactive Cognition’ (JPCEC), an attempt to apply Hutto and Myin’s ultimately tendentious dialectical use of inscrutability across the board—to solve the Master Problem of naturalistic inscrutability and explanatory ineliminability, in effect. It can be done—I actually think cognitive scientists of the future will smirk and shake their heads, reviewing the twist we presently find ourselves in, but only because they will have internalized something similar to the decidedly alien view I’m about to introduce here.

For reasons that should become apparent, the best way to introduce Just Plain Crazy Enactive Cognition is to pick up where Hutto and Myin end their argument for Radical Enactive Cognition: the proposed solution to the Hard Problem of Consciousness they offer in Chapter 8. The Hard Problem of Consciousness, of course, is the problem of explaining phenomenal properties in naturalistic terms of physical structures and dynamics. In accordance with their enactivism, Hutto and Myin hold that phenomenality is environmentally determined in certain important respects. Since ‘wide phenomenality’ is incompatible with qualia as normally understood, this entails qualia eliminativism, which warrants rejecting the explanatory gap—the Hard Problem of Consciousness. They adopt the Dennettian argument that the Hard Problem is impossible to solve given the definition of qualia as “intrinsically qualitative, logically private, introspectable, incomparable, ineffable, incorrigible entities of our mental acquaintance” (156). And since impossible questions warrant no answers, they refuse to listen:

“What course do we recommend? Stick with [Radical Enactive Cognition] and take phenomenality to be nothing but forms of activities—perhaps only neural—that are associated with environment-involving interactions. If that is so, there are not two distinct relata—the phenomenal and the physical—standing in a relation other than identity. Lastly, come to see that such identities cannot, and need not be explained. If so, the Hard Problem totally disappears.” 169

When I first read this, I wrote ‘Wish It Away Strategy?’ in the margin. On my second reading, I wrote, ‘Whew! I’m glad consciousness isn’t a baffling mystery anymore!’

The first note was a product of ignorance; I simply didn’t know what was coming next. Hutto and Myin adopt a variant of the Type B Materialist response to the Hard Problem, admitting that there is an explanatory gap, while denying any ontological gap. Conscious experiences and brain-states are considered identical, though the phenomenal and physical concepts we use to communicate them are systematically incompatible. It is the difference between the latter that fools us into imputing some kind of ontological difference between the former, giving license to innumerable, ultimately unanswerable questions. Ontological identity means there is no Hard Problem to be solved. Conceptual difference means that phenomenal vocabularies cannot be translated into physical vocabularies, that the phenomenal is ‘irreducible.’ As a result, the phenomenal character of experience cannot be physically explained—it is entirely natural, but utterly inexplicable in natural terms.

But Hutto and Myin share the standard objection against Type B Materialisms: the inability of such positions to justify their foundational identity claim.

“Standard Type B offerings therefore fail to face up to the root challenge of the Hard Problem—they fail to address worries about the intelligibility of making certain identity claims head on. They do nothing to make the making of such claims plausible. The punch line is that to make a credible case for phenomeno-physical identity claims it is necessary to deal with—to explain away—appearances of difference in a more satisfactory way than by offering mere stipulations.” 174

Short of some explanation of the apparent difference between conscious experiences and brain states, in other words, Type B approaches can only be ‘wish it away strategies.’ The question accordingly becomes one of motivating the identity of the phenomenal and the physical. Since Hutto and Myin think the naturalistic inscrutability of phenomenality renders standard scientific identification impossible, they argue that the practical, everyday identity between the phenomenal and the physical we implicitly assume amply warrants the required identification. And as it turns out, this implicit everyday identity is extensive or wide:

“Enactivists foreground the ways in which environment-involving activities are required for understanding and conceiving of phenomenality. They abandon attempts to explain phenomeno-physical identities in deductive terms for attempts to motivate belief in such identities by reminding us of our common ways of thinking and talking about phenomenal experience. Continued hesitance to believe in such identities stems largely from the fact that experiences—even if understood as activities—are differently encountered by us: sometimes we live them through embodied activity and sometimes we get at them only descriptively.” 177

Thus the second comment I wrote reading the above passage!

What ‘motivates’ the enactive Type B materialist’s identity claim, in other words, is simply the identity we implicitly assume in our worldly engagements, an identity that dissolves because of differences intrinsic to the activity of theoretically engaging phenomenality.

I’m assuming that Hutto and Myin use ‘motivate,’ rather than ‘justify,’ simply because it remains entirely unclear why the purported assumption of identity implicit in embodied activity should trump the distinctions made by philosophical reflection. As a result, the force of this characterization is not so much inferential as it is redemptive. It provides an elegant enough way to rationalize giving up on the Hard Problem via assumptive identity, but little more. Otherwise it redeems the priority of lived life, and, one must assume, all the now irreducible intentional phenomena that go with it.

The picture they paint has curb appeal, no doubt about that. In terms of our Master Hard Problem, you could say that Radical Enactivism uses ‘narrow inscrutability’ to ultimately counsel (as opposed to argue) wide ineliminability. All we have to be is eliminativists about qualia and non-linguistic content, and the rest of the many-coloured first-person comes for free.

The problem—and it is a decisive one—is that redemption just ain’t a goal of naturalistic inquiry, no matter how speculative. Since our cherished, prescientific assumptions are overthrown more often than not, a theory’s ability to conserve those assumptions (as opposed to explain them) should warn us away, if anything. The rational warrant of Hutto and Myin’s recommendation lies entirely in assuming the epistemic priority of our implicit assumptions, and this, unfortunately, is slender warrant indeed, presuming, as it does, that when it comes to this one particular yet monumental issue—the identity of the physical and the phenomenal—we’re better philosophers when we don’t philosophize than when we do!

Not surprisingly, questions abound:

1) What, specifically, is the difference between ‘embodied encounters’ and ‘descriptive’ ones?

2) Why are the latter so prone to distort?

3) And if the latter are so prone to distort, to what extent is this description of ‘embodied activity’ potentially distorted?

4) What is the nature of the confounds involved?

5) Is there any way to puzzle through parts of this problem given what the sciences of the brain already know?

6) Is it possible to hypothesize what might be going on in the brain, such that we find ourselves in such straits?

As it turns out, these questions are not only where Radical Enactive Cognition ends, but also where Just Plain Crazy Enactive Cognition begins. Hutto and Myin can’t pose these questions because their ‘motivation’ consists in assuming we already implicitly know all that we need to know to skirt (rather than shirk) the Hard Problem of Consciousness. Besides, their recommendation is to abandon the attempt to naturalistically answer the question of the phenomeno-physical relation. Any naturalistic inquiry into the question of how theoretical reflection distorts the presumed ‘whole’ (‘integral,’ or ‘authentic’) nature of our implicit assumption would seem to require some advance naturalistic understanding of just what is being distorted—and we have been told that no such understanding is possible.

This is where JPCEC begins, on the other hand, because it assumes that the question of inscrutability and ineliminability is itself an empirical one. Speculative recommendations such as Hutto and Myin’s only possess the intuitive force they do because we find it impossible to imagine how the intentional and the phenomenal could be rendered compatible with the natural. Given the conservative role that failures of imagination have played in science historically, JPCEC assumes the solution lies in the same kind of dogged reimagination that has proven so successful in the past. Given that the intentional and the phenomenal are simply ‘more nature,’ then the claim that they represent something so extraordinary, either ontologically or epistemologically, as to be somehow exempt from naturalistic cognition has to be thought extravagant in the extreme. Certainly it would be far more circumspect to presume that we simply don’t know.

And here is where Just Plain Crazy Enactive Cognition sets its first, big conceptual wedge: not only does it assume that we don’t know—that the hitherto baffling question of the first person is an open question—it asks the crucial question of why we don’t know. How is it that the very thing we once implicitly and explicitly assumed was the most certain, conscious experience, has become such a dialectical swamp?

The JPCEC approach is simple: Noting the role the scarcity of information plays in the underdetermination of scientific theory more generally, it approaches this question in these very terms. It asks, 1) What kind of information is available for deliberative, theoretical metacognition? 2) What kind of cognitive resources can be brought to bear on this information? And 3) Are either of these adequate to the kinds of questions theoreticians have been asking?

And this has the remarkable effect of turning contemporary Philosophy of Mind on its head. Historically, the problem has been one of explaining how physical structure and dynamics could engender the first-person in either its phenomenal or intentional guises. The problem, in other words, is traditionally cast in terms of accomplishment. How could neural structure and dynamics generate ‘what is it likeness’? How could causal systems generate normativity? The problem of inscrutability is simply a product of our perennial inability to answer these questions in any systematically plausible fashion.[7]

Just Plain Crazy Enactive Cognition inverts this approach. Rather than asking how the brain could possibly generate this or that apparent feature of the first-person, it asks how the brain could possibly cognize any such features in the first place. After all, it takes a tremendous amount of machinery to accurately, noninferentially cognize our environments in the brute terms we do: How much machinery would be required to accurately, noninferentially cognize the most complicated mechanism in the known universe?[8]

JPCEC, in other words, begins by asking what the brain likely can and cannot metacognize. And as it turns out, we can make a number of safe bets given what we already know. Taken together, these bets constitute what I call the Blind Brain Theory, or BBT, the systematic explanation of phenomenality and intentionality via human cognitive and metacognitive—this is the important part—incapacity.

Or in other words, neglect. The best way to explain the peculiarity of our phenomenal and intentional inklings is via a systematic account of the information (construed as systematic differences making systematic differences) that our brain cannot access or process.

So consider the unity of consciousness, the feature that most convinced Descartes to adopt dualism. Where the tradition wonders how the brain could accomplish such a thing, BBT asks how the brain could accomplish anything else. Distinctions require information. Flickering lights fuse in experience once their frequency surpasses our detection threshold. What looks like paint spilled on the sidewalk from a distance turns out to be streaming ants. Given that the astronomical complexity of the brain far and away outruns its ability to cognize complexity, the miracle, from the metacognitive standpoint, would be the high-dimensional intuition of the brain as an externally related multiplicity.

As it turns out, many of the perplexing features of the first-person can be understood in terms of information privation. Neglect provides a way to causally characterize the narrative granularity of the ‘mind,’ to naturalize intentionality and phenomenality, in effect. And in doing so it provides a parsimonious and comprehensive way to understand both naturalistic inscrutability and explanatory ineliminability. What I’ve been calling JPCEC, in other words, allows us to solve the Master Hard Problem.[9]

It turns on two core claims. First, it agrees with the consensus opinion that cognition and perception are heuristic, and second, it asserts that social cognition and metacognition in particular are radically heuristic.

To say that cognition and perception are heuristic is to say they exploit the structure of a given problem ecology to effect solutions in the absence of other relevant information. This much is widely accepted, though few have considered its consequences in any detail. If all cognition is heuristic, then all cognition possesses 1) a ‘problem ecology,’ as Todd and Gigerenzer term it (2012), some specific domain of reliability, and 2) a blind spot, an insensitivity, structural or otherwise, to information pertinent to the problem.

To understand the second core claim—the idea that social cognition and metacognition are radically heuristic—one has to appreciate that wider heuristic blind spots generally mean narrower problem ecologies (though this need not always be the case). Given the astronomical complexity of the human brain—or any brain, for that matter—we must presume that our heuristic repertoire for solving brains, whether belonging to others or to ourselves, involves extremely wide neglect, which in turn implies very narrow problem ecologies. So if it turns out that metacognition is primarily adapted to things like refining practical skills, consuming the activities of the default mode, and regulating social performance, then it becomes a real question whether it possesses the cognitive and/or informational resources required to solve the kinds of problems philosophers are prone to ponder. Philosophical reflection on the ‘nature of knowledge’ could be akin to using a screwdriver to tighten bolts! The fact that we generally have no metacognitive inkling whatsoever of swapping between different cognitive tools pretty clearly suggests it very well might be—at least when it comes to theorizing things such as ‘knowledge’![10]

At this point it’s worth noting how this way of conceiving cognition and perception amounts to a kind of ‘subpersonal enactivism.’ To say cognition is heuristic and fractionate is to say that cognition cannot be understood independent of environments, any more than a screwdriver can be understood independent of screws. It’s also worth noting how this simply follows from the mechanistic paradigm of the natural sciences. Humans are just another organic component of their natural environments: emphasizing the heuristic, fractionate nature of cognition and perception allows us to investigate our ‘dynamic componency’ in a more detailed way, in terms of specific environments cuing specific heuristic systems cuing specific behaviours and so on.[11]

But if this subpersonal enactivism is so obvious—if ‘cognitive componency’ simply follows from the explanatory paradigm of the natural sciences—then why all the controversy? Why should ‘enactive’ or ‘embodied’ cognition even be a matter of debate? What motivates the opportunistic eliminativism of Radical Enactive Cognition, remember, is the way content has the tendency to ‘internalize’ cognition, to narrow it to the head. Once the environment is rolled up into the representational brain, trouble-shooting the environment becomes intracranial. So, if one can find some way around the apparent explanatory ineliminability of content, one can simply assert the cognitive componency implied by the mechanistic paradigm of natural science. And this, remember, was what made Hutto and Myin’s argument more deceptive than illuminating. Rather than focus on ineliminability, they turned to inscrutability, the bullet everyone—including themselves!—has already implicitly or explicitly bitten.

Just Plain Crazy Enactive Cognition, however, diagnoses the problem in terms of metacognitive neglect. Content, as it turns out, isn’t the only way to short-circuit the apparent obviousness of cognitive componency. One might ask, for instance, why it took us so damn long to realize the fractionate, heuristic nature of our own cognitive capacities. Metacognitive neglect provides an obvious answer: Absent any way of making the requisite distinctions, we simply assumed cognition was monolithic and universal. Absent the ability to discriminate environmentally dependent cognitive functions, it was difficult to see cognition as a biological component of a far larger, ‘extensive’ mechanism. A gear that can turn every wheel is no gear at all.

‘Simples’ are cheaper to manage than ‘complexes’ and evolution is a miser. We cognize/metacognize persons rather than subpersonal assemblages because this was all the information our ancestors required. Not only is metacognition blind to the subpersonal, it is blind to the fact that it is blind: as far as it’s concerned, the ‘person’ is all there is. Evolution had no clue we would begin reverse-engineering her creation, begin unearthing the very causal information that our social and metacognitive heuristic systems are adapted to neglect. Small wonder we find ourselves so perplexed! Every time we ask how this machinery could generate ‘persons’—rational, rule-following, and autonomous ‘agents’—we’re attempting to understand the cognitive artifact of a heuristic system designed to problem solve in the absence of causal information in terms of causal information. Not surprisingly, we find ourselves grinding our heuristic gears.

The person, naturalistically understood, can be seen as a kind of strategic simplification. Given the abject impossibility of accurately intuiting itself, the brain only cognizes itself so far as it once paid evolutionary dividends and no further. The person, which remains naturalistically inscrutable as an accomplishment (How could physical structure and dynamics generate ‘rational agency’?) becomes naturalistically obvious, even inevitable, when viewed as an artifact of neglect.[12] Since intuiting the radically procrustean nature of the person requires more information, more metabolic expense, evolution left us blessedly ignorant of the possibility. What little we can theoretically metacognize becomes an astounding ‘plenum,’ the sum of everything to be metacognized—a discrete and naturalistically inexplicable entity, rather than a shadowy glimpse serving obscure ancestral needs. We seem to be a ‘rational agent’ before all else…

Until, that is, disease or brain injury astounds us.[13]

This explanatory pattern holds for all intentional phenomena. Intentionality isn’t so much a ‘stance’ we take to systems, as Dennett argues, as it is a particular family of heuristic mechanisms adapted to solve certain problem ecologies. Intentionality, in other words, is mechanical—which is to say, not intentional. Resorting to these radically heuristic mechanisms may be the only way to solve a great number of problems, but it doesn’t change the fact that what we are actually doing, what is actually going on in our brain, is natural like anything else, mechanical. The fact that you, me, or anyone exploits the heuristic efficiency of terms like ‘exploit’ no more presupposes any implicit commitment to the priority, let alone the ineliminability, of intentionality than reliance on naive physics implies the falsehood of quantum mechanics.

This has to be far and away the most difficult confound to surmount: the compulsion to impute efficacy to our metacognitive inklings. So it seems that what we call ‘rationality,’ even though it so obviously bears all the hallmarks of informatic underdetermination, must in some way drive ‘action.’ As the sum of what our brain can cognize of its activity, our brain assumes that it exhausts that activity. It mistakes what little it cognizes for the breath-taking complexity of what it actually is. The granular shadows—‘reasons,’ ‘rules,’ ‘goals,’ and so on—seem to cast the physical structure and dynamics of the brain, rather than vice versa. The hard won biological efficacy of the brain is attributed to some mysterious, reason-imbibing, judgment-making ‘mind.’

Metacognitive incapacity simply is not on the metacognitive menu. Thus the reflexive, question-begging assumption that any use of normative terms presupposes normativity rather than the spare mechanistic sketch provided above.

Here we can clearly see both the form of the Master Hard Problem and the way to circumvent it. Intentionality seems inscrutable to naturalistic explanation because intentional heuristics are adapted to solve problems in the absence of pertinent causal information—the very information naturalistic explanation requires. Metacognitive blindness to the fractionate, heuristic nature of cognition also means metacognitive blindness to the various problem ecologies those heuristics are adapted to solve. In the absence of information (difference making differences), we historically assumed simplicity, a single problem ecology with a single problem solving capacity. Only the repeated misapplication of various heuristics over time provided the information needed to distinguish brute subcapacities and subecologies. Eventually we came to distinguish causal and intentional problem-solving, and to recognize their peculiar, mutual antipathy as well. But so long as metacognition remained blind to metacognitive blindness, we persisted in committing the Accomplishment Fallacy, cognizing intentional phenomena as they appeared to metacognition as accomplishments, rather than side-effects of our brain’s murky sense of itself.

So instead of seeing cognition wholly in enactive terms of componency—which is to say, in terms of mechanistic covariance—we found ourselves confronted by what seemed to be obvious, existent ‘intentional properties.’ Thus explanatory ineliminability, the conviction that any adequate naturalistic account of cognition would have to naturalistically account for intentional phenomena such as evaluability—the very properties, it so happens, that underwrite the attribution of representational content to the brain.

So, where Radical Enactive Cognition is forced to ignore the Master Hard Problem in order to opportunistically game the problem of naturalistic inscrutability (in its restricted representationalist form) to its own advantage, Just Plain Crazy Enactive Cognition is able to tackle the problem whole by simply turning the traditional accomplishment paradigm upside down. The theoretical disarray of cognitive science, it claims, is an obvious artifact of informatic underdetermination. What distinguishes this instance of underdetermination is the degree it turns on the invisibility of metacognitive incapacity, the way cognizing the insufficiency of the information and resources available to metacognition requires more information and resources. This generates the illusion of metacognitive sufficiency, the implicit conviction that what we intuit is what there is…

That we actually possess something called a ‘mind.’

Thus the ‘Just Plain Crazy’—the Blind Brain Theory offers nothing by way of redemption, only what could be the first naturalistically plausible way out of the traditional maze. On BBT, ‘consciousness’ or ‘mind’ is just the brain seen darkly.

In Hutto and Myin’s account of Radical Enactive Cognition, considerations of the kinds of conceptual resources various positions possess to tackle various problems loom large. The more problem solving resources a position possesses the better. In this respect, the superiority of JPCEC to REC should be clear already: insofar as REC, espousing both inscrutability and ineliminability, actually turns on the Master Hard Problem, it clearly lacks the conceptual resources to solve it.

But surely more is required. Any position that throws out the baby of explanatory ineliminability with the bathwater of naturalistic inscrutability has a tremendous amount of ‘splainin’ to do. In his Radical Embodied Cognitive Science, Anthony Chemero does an excellent job illustrating the ‘guide to discovery’ objection to antirepresentationalist approaches to cognition such as his own. He relates the famous debate between Ernst Mach and Ludwig Boltzmann regarding the role of ‘atoms’ in physics. For Mach, atoms amounted to an unnecessary fairy-tale posit, something that serious physicists did not need to carry out their experimental work. In his 1900 “The Recent Development of Method in Theoretical Physics,” however, Boltzmann turned the tide of the debate by showing how positing atoms had played an instrumental role in generating a number of further discoveries.

The power of this argumentative tactic was brought home to me in a recent talk by Bill Bechtel,[14] who presented his own guide to discovery argument for representationalism by showing the way representational thinking facilitated the discovery of place and grid cells and the role they play in spatial memory and navigation. Chemero, given his pluralism, is more interested in showing that radical embodied approaches possess their own pedigree of discoveries. In Radicalizing Enactivism, Hutto and Myin seem more interested in simply blunting the edge of these arguments and moving on. In their version, they stress the fact that scientists actually don’t talk about content and representation all that much. Bechtel, however, was at pains to show that they do! And why shouldn’t they, he would ask, given that we find ‘maps’ scattered throughout the brain?

The big thing to note here is the inevitability of argumentative stalemate. Neither side possesses the ‘conceptual resources’ to do much more than argue about what actual researchers actually mean or think and how this bears on their subsequent discoveries. Insofar as it possesses the ‘he-said-she-said’ form of a domestic spat, you could say this debate is tailor-made to be intractable. Who the hell knows what anyone is ‘really thinking’? And it seems we make discoveries both positing representations and positing their absence!

Just Plain Crazy Enactive Cognition, however, possesses the resources to provide a far more comprehensive, albeit entirely nonredemptive, view. It begins by reminding us that any attempt to understand the brain necessarily involves the brain. It reminds us, in other words, of the subpersonally enactive nature of all research, that it involves physical systems engaging other physical systems. Insofar as researchers have brains, this has to be the case. The question then becomes one of how representational cognition could possibly fit into this thoroughly mechanical picture.

Pointing out our subpersonal relation to our subject matter is well and fine. The problem is one of connecting this picture to our intuitive, intentional understanding of our relation. Given the appropriate resources, we could specify all the mechanical details of the former relation—we could cobble together an exhaustive account of all the systematic covariances involved—and still find ourselves unable to account for out and out crucial intentional properties such as ‘evaluability.’ Call this the ‘cognitive zombie hunch.’

Now the fact that ‘hard problems’ and ‘zombie hunches’ seem to plague all the varying forms of intentionality and phenomenality is certainly no coincidence. But if other approaches touch on this striking parallelism at all, they typically advert—the way Hutto and Myin do—to some vague notion of ‘conceptual incompatibility,’ one definitive enough to rationalize some kind of redemptive form of ‘irreducibility,’ and nothing more. On Just Plain Crazy Enactive Cognition, however, these are precisely the kinds of problems we should expect given the heuristic character of the cognitive systems involved.

To say that cognition is heuristic, recall, is to say, 1) that it possesses a given problem-ecology, and 2) that it neglects otherwise relevant information. As we have seen, (1) warrants what I’ve been calling ‘subpersonal enactivism.’ The key to unravelling the knot of representationalism, of finding some way to square the purely mechanical nature of cognition with apparently self-evident intentional properties such as evaluability, lies in (2). The problem, remember, is that any exhaustive mechanical account of cognition leaves us unable to account for the intentional properties of cognition. One might ask, ‘Where do these properties come from? What makes ‘evaluability,’ say, tick?’ But the problem, of course, is that we don’t know. What is more, we can’t even fathom what it would take to find out. Thus all the second-order attempts to reinterpret obvious ignorance into arcane forms of ‘irreducibility.’ But if we can’t naturalistically explain where these extraordinary properties come from, perhaps we can naturalistically explain where our idea of these extraordinary properties comes from…

Where else, if not metacognition?

And as we saw above, metacognition involves neglect at every turn. Any human brain attempting to cognize its own cognitive capacities simply cannot—for reasons of structural complicity (the fact that it is the very thing it is attempting to cognize) and target complexity (the fact that its complexity vastly outruns its ability to cognize complexity)—cognize those capacities the same way it cognizes its natural environments, which is to say, causally. The human brain necessarily suffers what might be called proximal or medial neglect. It constitutes its own blind spot, insofar as it cannot cognize its own functions in the same manner that it cognizes environmental functions.

One minimal phenomenological claim one could make is that the neurofunctionality that enables conscious cognition and experience is in no way evident in conscious cognition and experience. On BBT, this is a clear cut artifact of medial neglect, the fact that the brain simply cannot engage the proximate mechanical complexities it requires to engage its distal environments. Solving itself, therefore, requires a special kind of heuristic, one cued to providing solutions in the abject absence of causal information pertaining to its actual neurofunctionality.

Think about it. You see trees, not trees causing you to see trees. Even though you are an environmentally engaged ‘tree cognizing’ system, phenomenologically you simply see… trees. All the mechanical details of your engagement, the empirical facts of your coupled systematicity, are walled off by neglect—occluded. Because they are occluded, ‘seeing trees’ not only becomes all that you can intuit, it becomes all that you need to intuit, apparently.

Thus ‘aboutness,’ or intentionality in Brentano’s restricted sense: given the structural occlusion of our componency, the fact that we’re simply another biomechanically embedded biomechanical system, problems involving our cognitive relation to our environments have to be solved in some other way, in terms not requiring this vast pool of otherwise relevant information. Aboutness is this alternative, the primary way our brains troubleshoot their cognitive engagements.

It’s important to note here that the ‘aboutness heuristic’ lies outside the brain’s executive purview, that its deployment is mandatory. No matter how profoundly we internalize our intellectual understanding of our componency, we see trees nevertheless. This is what makes aboutness so compelling: it constitutes our intuitive baseline.

So, when our brains are cued to troubleshoot their cognitive engagements they’re attempting to finesse an astronomically complex causal symphony via a heuristic that is insensitive to causality. This means that aboutness, even though it captures the brute cognitive relation involved, has no means of solving the constraints involved. Thus normativity, the hanging constraints (or ‘skyhooks’ as Dennett so vividly analogizes them) we somehow intuit when troubleshooting the accuracy of various aboutnesses. As a result, we cognize cognition as a veridical aboutness—in terms commensurate with subjectivity rather than componency.

Nor do we seem to have much choice. Our intuitive understanding of understanding as evaluable, intentional directedness seems to be reflexive, a kind of metacognitive version of a visual illusion. This is why thought experiments like Leibniz’s Mill or arguments like Searle’s Chinese Room rattle our intuitions so: because, for one, veridical aboutness heuristics have adapted to solve problems without causal information, and because deliberative metacognition, at least, cannot identify the heuristics as such and so assumes the universality of their application. Our intuitive understanding of understanding intuitively strikes us as the only game in town.

This is why the frame of veridical aboutness anchors countless philosophical chassis, why you find it alternately encrusted in the human condition, boiled down to its formal bones, pitched as the ground of mere experience, or painted as the whole of reality. For millennia, human philosophical thought has buzzed within it like a fly in an invisible Klein bottle, finding itself caught in the self-same dichotomies of subject and object, ideal and real.

Philosophy’s inability to clarify any of its particularities attests to its metacognitive informatic penury. Intentionality is a haiku—we simply lack the information and resources to pin any one interpretation to its back. And yet, as obviously scant as this picture is, we’ve presumed the diametric opposite historically, endlessly insisting, as if afflicted with a kind of theoretical anosognosia, that it provides the very frame of intelligibility rather than a radically heuristic way to solve for cognition.

Thus the theoretical compulsion that is representationalism. Given the occlusion of componency, or medial neglect, any instance of mistaken cognition necessarily becomes binary, a relation between. To hallucinate is to be directed at something not of the world, which is to say, at something other than the world. The intuitions underwriting veridical directedness, in other words, lend themselves to further intuitions regarding the binary structure of mistaken cognition. Because veridical aboutness constitutes our mandatory default problem solving mode, any account of mistaken cognition in terms of componency—in terms of mere covariance—seems not only counter-intuitive, but hopelessly procrustean as well, to be missing something impossible to explain and yet ‘obviously essential.’ Since the mechanical functions of cognition are themselves mandatory to scientific understanding, theorists feel compelled to map veridical aboutness onto those functions.

Thus the occult notion of mental and perceptual content, the ontological attribution of veridical aboutness to various components in the brain (typically via some semantic account of information).

Given that the function of veridical aboutness is to solve in the absence of mechanical information, it is perhaps surprising that it is relatively easy to attribute to various mechanisms. Mechanistic inscrutability, it turns out, is apparently no barrier to mechanistic applicability. But this actually makes a good deal of sense. Given that any component of a mechanism is a component by virtue of its dynamic, systematic interrelations with the rest of the mechanism, it can always be argued that any downstream component possesses implicit ‘information about’ other parts of the mechanism. When that component is dedicated, however, when it simply discharges the same function come what may, the ‘veridical’ aspect becomes hard to understand, and the attribution seems arbitrary. Like our intuitive sense of agency, veridicality requires ‘wiggle room.’ This is why the attribution possesses real teeth only when the component at issue plays a variable, regulatory function like, say, a Watt governor on a steam engine. As mechanically brute as a Watt governor is, it somehow still makes ‘sense’ to say that it is ‘right or wrong,’ performing as it ‘should.’ (Make no mistake: veridical aboutness heuristics do real cognitive work, just in a way that resists mechanical analysis—short of Just Plain Crazy Enactive Cognition, that is).

The debate thus devolves into the blind (because we have no metacognitive inkling that heuristics are involved) application of competing heuristics. The representationalist generally emphasizes the component at issue, drawing attention away from the systematic nature of the whole to better leverage the sense of variability or ‘wiggle room’ required to cue our veridical intuitions. The anti-representationalist, on the other hand, will emphasize the mechanism as a whole, drawing attention to the temporally deterministic nature of the processes at work to block any intuition of variability, to deny the representationalist their wiggle room.

This was why Bechtel, in his presentation on the role representations played in the discovery of place and grid cells, remained fixated on the notion of ‘neural maps’: these are the components that, when conceived apart from the monstrously complicated neural mechanisms they function within, are most likely to trigger the intuition of veridical aboutness, and so seem like bits of nature possessing the extraordinary property of being true or false of the world—obvious representations.

Those bits, of course, possessed no such extraordinary properties. Certainly they recapitulate environmental information, but any aboutness they seem to possess is simply an artifact of our hardwired penchant to problem solve (or communicate our solutions) around our own pesky mechanical details.

But if anything speaks to the difficulty we have overcoming our intuitions of veridical aboutness, it is the degree to which so-called anti-representationalists like Hutto and Myin so readily concede it elsewhere. Apparently, even radicals have a hard time denying its reality. Even Dennett, whose position often verges on Just Plain Crazy Enactive Cognition, insists that intentionality can be considered ‘real’ to the extent that intentional attributions pick out real patterns.[15] But do they? For instance, how could positing a fictive relationship, veridical aboutness, solve anything, let alone the cognitive operations of the most complicated machine known? There’s no doubt that solutions follow upon such posits regularly enough. But the posit only needs to be systematically related to the actual mechanical work of problem-solving for that to be the case. Perhaps the posit solves an altogether different problem, such as the need to communicate cognitive issues.

The problem, in other words, lies with metacognition. In addition to asking what informs our intentional attributions, we need to ask what informs our attributions of ‘intentional attribution.’ Does adopting the ‘intentional stance’ serve to efficiently solve certain problems, or does it serve to efficiently communicate certain problems solved by other means—even if only to ourselves? Could it be a kind of orthogonal ‘meta-heuristic,’ a way to solve the problem of communicating solutions? Dennett’s ‘intentional stance’ possesses nowhere near the conceptual resources required to probe the problem of intentionality from angles such as these. In fact, it lacks the resources to tackle the problem in anything but the most superficial naturalistic terms. As often as Dennett claims that the intentional arises from the natural, he never actually provides any account of how.[16]

As intuitively appealing as the narrative granularity of Dennett’s ‘intentional stance’ might be, it leaves the problem of intentionality stranded at all the old philosophical border stations.[17] The approach advocated here, however, where we speak of the deployment of various subpersonal heuristics, is less intuitive, hewing to componency as it does, but to the extent that it poses the problem of intentionality in mechanical as opposed to intentional terms, it stamps the passport, and finally welcomes intentionality to the realm of natural science. The mechanical idiom, which allows us to scale up and down various ‘levels of description,’ to speak of proteins and organelles and cells and organisms and ecologies in ontologically continuous terms, is tailor made for dealing with the complexities raised above.

Just Plain Crazy Enactive Cognition follows through on the problem of the intentional in a ruthlessly consistent manner. The story is mechanical all the way down—as we should expect, given the successes of the natural sciences. The ‘craziness,’ by its lights, is the assumption that one can pick and choose between intentional phenomena, eliminate this, yet pin the very possibility of intelligibility on that.

Consider Andy Clark’s now famous attempt (1994, 1997) to split the difference between embodied and intellectual approaches to cognition: the notion that some systems are, as he terms it, ‘representation hungry.’[18] One of the glaring difficulties faced by ‘radical enactive’ approaches turns on the commitment to direct realism. The representationalist has no problem explaining the constructed nature of perception, the fact that we regularly ‘see more than there is’: once the brain has accumulated enough onboard environmental ‘information about,’ direct sensory information is relegated to a ‘supervisory’ role. Since this also allows them to intuitively solve the ‘hard’ problem of illusion, biting the Hard Problem of Content seems more than a fair trade.

Those enactivists who eschew perceptual content reject not only ‘information about’ but all the explanatory work it seems to do. This puts them in the unenviable theoretical position of arguing that perception is direct, and that the environment, accordingly, possesses all the information required for perceptually guided behaviour. All sophisticated detection systems, neural or electronic, need to solve the Inverse Problem, the challenge of determining properties belonging to distal systems via the properties of some sensory medium. Since sensory properties are ambiguous between any number of target properties, added information is required to detect the actual property responsible. Short of the system accumulating environmental information, it becomes difficult to understand how such disambiguation could be accomplished. The dilemma becomes progressively more difficult the higher you climb the cognitive ladder. With language, for instance, you simply see/hear simple patterns of shape/sound from which you derive everything from murderous intent to theories of cognition!

Some forms of cognition, in other words, seem to be more representation hungry than others, with human communication appearing to be the most representation hungry of all. In all likelihood this is the primary reason Hutto and Myin opt to game naturalistic inscrutability and explanatory ineliminability the way they do, rather than argue anything truly radical.

But if this is where the theoretical opportunism of Radical Enactive Cognition stands most revealed, it is also where the theoretical resources of Just Plain Crazy Enactive Cognition—or the Blind Brain Theory—promise to totally redefine the debate as traditionally conceived. No matter how high we climb Clark’s Chain of Representational Hunger, veridical aboutness remains just as much a heuristic—and therefore just as mechanical—as before. On BBT, Clark’s Chain of Representational Hunger is actually a Chain of Mechanical Complexity: the more sophisticated the perceptually guided behaviour, the more removed from bare stimulus-response, the more sophisticated the machinery required—full stop. It’s componency all the way down. On a thoroughgoing natural enactive view—which is to say, a mechanical view—brains can be seen as devices that transform environmental risk into onboard mechanical complexity, a complexity that, given medial neglect, metacognition flattens into heuristics such as aboutness. Certainly part of that sophistication involves various recapitulations of environmental structure, numerous ‘maps,’ but only as components of larger biomechanical systems, which are themselves components of the environments they are adapted to solve. This is as much the case with ‘pinnacle cognition,’ human theoretical practice, as it is with brute stimulus and response. There’s no content to be found anywhere simply because, as inscrutability has shouted for so very long, there simply is no such thing outside of our metacognitively duped imaginations.

The degree that language seems to require content is simply the degree to which the mechanical complexities involved elude metacognition—which is to say, the degree to which language has to be heuristically cognized in noncausal terms. In the absence of cognizable causal constraints, the fact that language is a biomechanical phenomenon, we cognize ‘hanging constraints,’ the ghost-systematicity of normativity. In the absence of cognizable causal componency, the fact that we are mechanically embedded in our environments, we cognize aboutness, a direct and naturalistically occult relation that somehow binds words to world. In the absence of any way to cognize these radical heuristics as such, we assume their universality and sufficiency—convince ourselves that these things are real.

On the Blind Brain Theory, or as I’ve been calling it here, Just Plain Crazy Enactive Cognition, we are natural all the way down. On this account, intentionality is simply what mechanism looks like from a particular, radically blinkered angle. There is no original intentionality, and neither is there any derived intentionality. If our brains do not ‘take as meaningful,’ then neither do we. If environmental speech cues the application of various, radically heuristic cognitive systems in our brain, then this is what we are actually doing whenever we understand any speaker.

Intentionality is a theoretical construct, the way it looks whenever we ‘descriptively encounter’ or theoretically metacognize our linguistic activity—when we take a particular, information starved perspective on ourselves. As intentionally understood, norms, reasons, symbols, and so on are the descriptions of blind anosognosiacs, individuals convinced they can see for the simple lack of any intuition otherwise. The intuition, almost universal in philosophy, that ‘rule following’ or ‘playing the game of giving and asking for reasons’ is what we implicitly do is simply a cognitive conceit. On the contrary, what we implicitly do is mechanically participate in our environments as a component of our environments.

Now because it’s neglect that we are talking about here, which is to say, a cognitive incapacity that we cannot cognize, I appreciate how counter-intuitive—even crazy—this must all sound. What I’m basically saying is that the ancient skeptics were right: we simply don’t know what we are talking about when we turn to theoretical metacognition for answers. But where the skeptics were primarily limited to second-order observations of interpretative underdetermination, I have an empirical tale to tell, a natural explanation for that interpretative underdetermination (and a great deal besides), one close to what I think cognitive science will come to embrace in the course of time. Even if you disagree, I would wager that you do concede the skeptical challenge is a legitimate one, that there is a reason why so much philosophy can be read as a response to it. If so, then I would entreat you to regard this as a naturalized skepticism. The fact is, we have more than enough reason to grant the skeptic the legitimacy of their worry. In this respect, Just Plain Crazy Enactive Cognition provides a possible naturalistic explanation for what is already a legitimate worry.

Just consider how remarkably frail the intuitive position is despite seeming so obvious. Given that I used the term ‘legitimate’ in the preceding paragraph, the dissenter’s reflex will be to accuse me of obvious ‘incoherence,’ to claim that I am implicitly presupposing the very normativity I claim to be explaining away.

But am I? Is ‘presupposing normativity’ really what I am implicitly doing when I use terms such as ‘legitimate’? Well, how do you know? What informs this extraordinary claim to know what I ‘necessarily mean’ better than I do? Why should I trust your particular interpretation, given that everyone seems to have their own version? Why should I trust any theoretical metacognitive interpretation, for that matter, given their manifest unreliability?

I’ll wait for your answer. In the meantime, I’m sure you’ll understand if I continue assuming that whatever I happen to be implicitly doing is straightforwardly compatible with the mechanical paradigm of natural science.

For all its craziness, Just Plain Crazy Enactive Cognition is a very tough nut to crack. The picture it paints is a troubling one, to be sure. If empirically confirmed, it will amount to an overthrow of ‘noocentrism’ comparable to the overthrow of geocentrism and biocentrism in centuries previous.[19] Given our traditional understanding of ourselves, it is without a doubt an unmitigated disaster, a worst-case scenario come true. Given the quest to genuinely understand ourselves, however, it provides a means to dissolve the Master Problem, to naturalistically understand intentionality, and so a way to finally—finally!—cognize our profound continuity with nature.

In fact, the more you ponder it, the more inevitable it seems. Evolution gave us the cognition we needed, nothing more. To the degree we relied on metacognition and casual observation to inform our self-conception, the opportunistic nature of our cognitive capacities remained all but invisible, and we could think ourselves the very rule, stamped not just in the physical image of God, but in His cognitive image as well. Like God, we had no back side, nothing to render us naturally contingent. We were the motionless centre of the universe: the earth, in a very real sense, was simply enjoying our ride. The fact of our natural, evolutionarily adventitious componency escaped us because the intuition of componency requires causal information, and metacognition offered us none.

Science, in other words, was set against our bottomless metacognitive intuitions from the beginning, bound to show that our traditional understanding of our cognition, like our traditional understanding of our planet and our biology, was little more than a trick of our informatic perspective.



[1] I mean this in the umbrella sense of the term, which includes normative, teleological, and semantic phenomena.

[2] Of course, there are other apparent intentional properties of cognition that seem to require explanation as well, including aboutness, so-called ‘opacity,’ productivity, and systematicity.

[3] For those interested in a more detailed overview, I highly recommend Chapter 2 of Anthony Chemero’s Radical Embodied Cognitive Science.

[4] This is one reason why I far prefer Anthony Chemero’s Radical Embodied Cognition (2009), which, even though it is argued in a far more desultory fashion, seems to be far more honest to the strengths and weaknesses of the recent ‘enactive turn.’

[5] One need only consider the perpetual inability of its advocates to account for illusion. In their consideration of the Muller-Lyer Illusion, for instance, Hutto and Myin argue that perceptual illusions “depend for their very existence on high-level interpretative capacities being in play” (125), that illusion is quite literally something only humans suffer because only humans possess the linguistic capacity to interpret them as such. Without the capacity to conceptualize the disjunction between what we perceive and the way the world is there are no ‘perceptual illusions.’ In other words, even though it remains a fact that you perceive two lines of equal length as possessing different lengths in the Muller-Lyer Illusion, the ‘illusion’ is just a product of your ability to judge it so. Since the representationalist is interested in the abductive warrant provided by the fact of the mistaken perception, it becomes difficult to see the relevance of the judgment. If the only way the enactivist can deal with the problem of illusion is by arguing illusions are linguistic constructs, then they have a hard row to hoe indeed!

[6] Which given the subject matter, perhaps isn’t so ‘crazy’ after all, if Eric Schwitzgebel is to be believed!

[7] Hutto and Myin have identified the proper locus of the problem, but since they ultimately want to redeem intentionality and phenomenality, their diagnosis turns on the way the ‘theoretical attitude’—or the ‘descriptive encounter’ favoured by the ‘Intellectualist’—frames the problem in terms of two distinct relata. Thus their theoretical recommendation that we resist this one particular theoretical move and focus instead on the implicit identity belonging to their theoretical account of embodied activity.

[8] See “THE Something about Mary” for a detailed consideration of this specific problem.

[9] Without, it is important to note, solving the empirical question of what consciousness is. What BBT offers, rather, is a naturalistic account of why phenomenality and intentionality baffle us so.

[10] See “The Introspective Peepshow: Consciousness and the Dreaded Unknown Unknowns” for a more thorough account.

[11] Note also the way this clears away the ontological fog of Gibson’s ‘affordances’: our dynamic componency, the ways we are caught up in the stochastic machinery of nature, is as much an ‘objective’ feature of the world as anything else.

[12] See “Cognition Obscura” for a comprehensive overview.

[13] We understand ourselves via heuristics that simply do not admit the kind of information provided by a great number of neuropathologies. Dissociations such as pain asymbolia, for example, provide dramatic evidence of how profound our neglect-driven intuition of phenomenal simplicity runs.

[14] “Investigating Neural Representations: The Tale of Place Cells,” presented at the Rotman Institute of Philosophy, Sept. 19th, 2013.

[15] See “Real Patterns.”

[16] This is perhaps nowhere more apparent than in Dennett’s critical discussion of Brandom’s Making it Explicit, “The Evolution of [a] Why.”

[17] ‘Nibbling’ is what he calls his strategy in his latest book, where we “simply postpone the worrisome question of what really has a mind, about what the proper domain of the intentional stance is” and simply explore the power of this ‘good trick’ (Intuition Pumps, 79). Since he can’t definitively answer either question, the suspicion is that he’s simply attempting to recast a theoretical failure as a methodological success.

[18] See “Doing Without Representing?”

[19] In fact, it provides the resources to answer the puzzling question of why these ‘centrisms’ should constitute our default understanding in the first place.

Godelling in the Valley

by rsbakker

“Either mathematics is too big for the human mind or the human mind is more than a machine” – Kurt Godel


Okay, so this is purely speculative, but it is interesting, and I think worthwhile farming out to brains far better trained than mine.

So BBT suggests that the ‘a priori’ is best construed as a kind of cognitive illusion, a consequence of the metacognitive opacity of those processes underwriting those ‘thoughts’ we are most inclined to call ‘analytic’ and ‘a priori.’ The necessity, abstraction, and internal relationality that seem to characterize these thoughts can all be understood in terms of information privation, the consequence of our metacognitive blindness to what our brain is actually doing when we engage in things like mathematical cognition. The idea is that our intuitive sense of what it is we think we’re doing when we do math—our ‘insights’ or ‘inferences,’ our ‘gists’ or ‘thoughts’—is fragmentary and deceptive, a drastically blinkered glimpse of astronomically complex, natural processes.

The ‘a priori,’ on this view, characterizes the inscrutability, rather than the nature, of mathematical cognition. Even without empirical evidence of unconscious processing, mathematical reasoning has always been deeply mysterious, apparently the most certain form of cognition when performed, and yet perennially resistant to decisive second order reflection. We can do it well enough—well enough to radically transform the world when applied in concert with empirical observation—and yet none of us can agree on just what it is that’s being done.

On BBT, our various second-order theoretical interpretations of mathematics are chronically underdetermined for the same reason any theoretical interpretation in science is underdetermined: the lack of information. What dupes philosophers into transforming this obvious epistemic vice into a beguiling cognitive virtue is simply the fact that we also lack any information pertaining to the lack of this information. Since they have no inkling that their murky inklings involve ‘murkiness’ at all, they simply assume the sufficiency of those inklings.

BBT therefore predicts that the informational dividends of the neurocognitive revolution will revolutionize our understanding of mathematics. At some point we’ll conceive our mathematical intuitions as ‘low-dimensional shadows’ of far more complex processes that escape conscious cognition. Mathematics will come to be understood in terms of actual physical structures doing actual physical things to actual physical structures. And the historical practice of mathematics will be reconceptualized as a kind of inter-cranial computer science, as experiments in self-programming.

Now as strange as it might sound, you have to admit this makes an eerie kind of sense. Problems, after all, are posed and answers arise. No matter how fine we parse the steps, this is the way it seems to work: we ‘ponder,’ or input, problems, and solutions, outputs, arise via ‘insight,’ and successes are subsequently committed to ‘habit’ (so that the systematicities discovered seem to somehow exist ‘all at once’). This would certainly explain Hintikka’s ‘scandal of deduction,’ the fact that purported ‘analytic’ operations regularly provide us with genuinely novel information. And it decisively answers the question of what Wigner famously called the ‘unreasonable effectiveness’ of mathematical cognition: mathematics can so effectively solve nature—enable science—simply because mathematics is nature, a kind of cognitive Swiss Army Knife extraordinaire.

On this picture, there is only implementation, implementations we ‘generalize’ over via further implementations, and so on and so on. The ideality, or ‘software,’ is simply an artifact of our metacognitive constraints, the ‘ghost’ of what remains when multiple dimensions of information are stripped away. Not only does BBT predict that the ‘foundations of mathematics’ will be shown to be computational, it also predicts that, as the complexities pile up, mathematics will become more and more the province of machines, until we reach a point where only our machines (if the possessive even applies at this point) ‘understand’ what is being explored, and the imperial mathematician dwindles to the status of technician, someone charged with translating various machine discoveries for human consumption.
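To make the ‘only implementation’ picture concrete, here is a minimal toy sketch—purely my own illustration, assuming nothing about how brains actually implement arithmetic. ‘Numbers’ are nothing but nested tuples over a zero token, and addition is a blind rewrite rule. The ‘ideality’ of the sum appears only when we strip away the mechanical details of the rewriting:

```python
# Peano-style arithmetic as pure symbol manipulation: no 'meanings,'
# just mechanical rewriting of structures. (Illustrative names only.)

Z = "Z"                       # the zero token


def S(n):
    """Successor: wrap a numeral in one more layer of structure."""
    return ("S", n)


def add(a, b):
    """Rewrite rule: add(S(n), b) -> add(n, S(b)); add(Z, b) -> b."""
    while a != Z:
        a, b = a[1], S(b)
    return b


def to_int(n):
    """Count the layers -- the 'low-dimensional shadow' of the structure."""
    count = 0
    while n != Z:
        count, n = count + 1, n[1]
    return count


two, three = S(S(Z)), S(S(S(Z)))
print(to_int(add(two, three)))  # 5
```

Nothing in the loop ‘knows’ it is adding; the systematicity is entirely a matter of the rewrite rule meshing with the structure of the tokens.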

But I ain’t no mathematician, so I thought I would open it up to the crowd: Does this look like the beginning?

Leaving It Implicit

by rsbakker

Since the aim of philosophy is not “to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term” with as little information as possible, I thought it worthwhile to take another run at the instinct to raise firewalls about certain discourses, to somehow immunize them from the plague of scientific information to come. I urge anyone disagreeing to sound off, to explain to me how it’s possible to assert the irrelevance of any empirical discovery in advance, because I am duly mystified. On the one hand, we have these controversial sketches regarding the nature of meaning and normativity, and on the other we have the most complicated mechanism known, the human brain. And learning the latter isn’t going to revolutionize the former?

Of course it is. We are legion, a myriad of subpersonal heuristic systems that we cannot intuit as such. We have no inkling of when we swap between heuristics and so labour under the illusion of cognitive continuity. We have no inkling as to the specific problem-ecologies our heuristics are adapted to and so labour under the illusion of cognitive universality. We are, quite literally, blind to the astronomical complexity of what we are and what we do. I’ve spent these past 18 months on TPB brain-storming novel ways to conceptualize this blindness, and how we might see the controversies and conundrums of traditional philosophy as its expression.

Say that consciousness accompanies/facilitates/enables a disposition to ‘juggle’ cognitive resources, to creatively misapply heuristics in the discovery of exaptive problem ecologies. Traditional philosophy, you might say, represents the institutionalization of this creative misapplication, the ritualized ‘making problematic’ ourselves and our environments. As an exercise in serial misapplication, one must assume (as indeed every individual philosophy does) that the vast bulk of philosophy solves nothing whatsoever. But if one thinks, as I do, that philosophy was a necessary condition of science and democracy, then the obvious, local futility of the philosophical enterprise would seem to be globally redeemed. Thinkers are tinkers, and philosophy is a grand workshop: while the vast majority of the gadgets produced will be relegated to the dustbin, those few that go retail can have dramatic repercussions.

Of course, the hubris is there staring each and every one of us in the face, though its universality renders it almost invisible. To the extent that we agree with ourselves, we all assume we’ve won the Magical Belief Lottery—the conviction, modest or grand, that this gadget here will be the one that reprograms the future.

I’m going to call my collection of contending gadgets, ‘progressive naturalism,’ or more simply, pronaturalism. It is progressive insofar as it attempts to continue the project of disenchantment, to continue the trend of replacing traditional intentional understanding with mechanical understanding. It is naturalistic insofar as it pilfers as much information and as many of its gadgets from natural science as it can.

So from a mechanical problem-solving perspective, words are spoken and actions… simply ensue. Given the systematicity of the ensuing actions, the fact that one can reliably predict the actions that typically follow certain utterances, it seems clear that some kind of constraint is required. Given the utter inaccessibility of the actual biomechanics involved, those constraints need to be conceived in different terms. Since the beginning of philosophy, normativity has been the time-honoured alternative. Rather than positing causes, we attribute reasons to explain the behaviour of others. Say you shout “Duck!” to your golf partner. If he fails to duck and turns to you quizzically instead, you would be inclined to think him incompetent, to say something like, “When I say ‘Duck!’ I mean ‘Duck!’”

From a mechanical perspective, in other words, normativity is our way of getting around the inaccessibility of what is actually going on. Normativity names a family of heuristic tools, gadgets that solve problems absent biomechanical information. Normative cognition, in other words, is a biomechanical way of getting around the absence of biomechanical information.

What else would it be?

From a normative perspective, however, the biomechanical does not seem to exist, at least at the level of expression. This is no coincidence, given that normative heuristics systematically neglect otherwise relevant biomechanical information. Nor is the manifest incompatibility between the normative and biomechanical perspectives any coincidence: as a way to solve problems absent mechanical information, normative cognition will only reliably function in those problem ecologies lacking that information. Information formatted for mechanical cognition simply ‘does not compute.’

From a normative perspective, in other words, the ‘normative’ is bound to seem both ontologically distinct and functionally independent vis a vis the mechanical. And indeed, once one begins taking a census of the normative terms used in biomechanical explanations, it begins to seem clear that normativity is not only distinct and independent, but that it comes first, that it is, to adopt the occult term normalized by the tradition, ‘a priori.’

From the mechanical perspective, these are natural mistakes to make given that mechanical information systematically eludes theoretical metacognition as well. As I said, we are blind to the astronomical complexities of what we are and what we do. Whenever a normative philosopher attempts to ‘make explicit’ our implicit sayings and doings they are banking on the information and cognitive resources they happen to have available. They have no inkling that they’re relying on any heuristics at all, let alone a variety of them, let alone any clear sense of the narrow problem-ecologies they are adapted to solve. They are at best groping their way to a possible solution in the absence of any information pertaining to what they are actually doing.

From the mechanical perspective, in other words, the normative philosopher has only the murkiest idea of what’s going on. They theorize ‘takings as’ and ‘rules’ and ‘commitments’ and ‘entitlements’ and ‘uses’—they develop their theoretical vocabulary—absent any mechanical information, which is to say, absent the information underwriting the most reliable form of theoretical cognition humanity has ever achieved.

The normative philosopher is now in a bind. Given that the development of their theoretical vocabulary turns on the absence of mechanical information, they have no way of asserting that what they are ‘making explicit’ is not actually mechanical. If the normativity of the normative is not given, then the normative philosopher simply cannot assume normative closure, that the use of normative terms—such as ‘use’—implicitly commits any user to any kind of theoretical normative realism, let alone this or that one. This is the article of faith I encounter most regularly in my debates with normative types: that I have to be buying into their picture somehow, somewhere. My first order use of ‘use’ no more commits me to any second-order interpretation of the ‘meaning of use’ as something essentially normative than uttering the Lord’s name in vain commits me to Christianity. The normative philosopher’s inability to imagine how it could be otherwise certainly commits me to nothing. Evolution has given me all these great, normative gadgets—I would be an idiot not to use them! But please, if you want to convince me that these gadgets aren’t gadgets at all, that they are something radically different from anything in nature, then you’re going to have to tell me how and why.

It’s just foot-stomping otherwise.

And this is where I think the bind becomes a garrotte, because the question becomes one of just how the normative philosopher could press their case. If they say their theoretical vocabulary is merely ‘functional,’ a way to describe actual functions at a ‘certain level,’ you simply have to ask them to evidence this supposed ‘actuality.’ How can you be sure that your ‘functions’ aren’t, as Craver and Piccinini would argue, ‘mechanism sketches,’ ways to rough out what is actually going on absent the information required to know what’s actually going on? It is a fact that we are blind to the astronomical complexity of what we are and what we do: How do you know if the rope you keep talking about isn’t actually an elephant’s tail?

The normative philosopher simply cannot presume the sufficiency of the information at their disposal. On the one hand, the first-order efficacy of the target vocabulary in no way attests to the accuracy of their second-order regimentations: our ‘mindreading’ heuristics were selected precisely because they were efficacious. The same can be said of logic or any other apparently ‘irreducibly normative’ family of formal problem-solving procedures. Given the relative ease with which these procedures can be mechanically implemented in a simple register system, it’s hard to understand how the normative philosopher can insist they are obviously ‘intrinsically normative.’ Is it simply a coincidence that our brains are also mechanical? Perhaps it is simply our metacognitive myopia, our (obvious) inability to intuit the mechanical complexity of the brain buzzing behind our eyeballs, that leads us to characterize them as such. This would explain the utter lack of second-order, theoretical consensus regarding the nature of these apparently ‘formal’ problem solving systems. Regardless, the efficacy of normative terms in everyday contexts no more substantiates any philosophical account of normativity than the efficacy of mathematics substantiates any given philosophy of mathematics.
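The claim that these formal procedures ‘can be mechanically implemented in a simple register system’ can be illustrated with a toy sketch (the names here are my own illustrative inventions, not anything from the literature): modus ponens as blind pattern-matching over tuples, with no ‘grasp of validity’ anywhere in the loop:

```python
# A conditional is just the tuple ("if", antecedent, consequent).
# 'Inference' is nothing but repeated pattern-matching until fixpoint.

def modus_ponens(premises):
    """Mechanically derive consequents whose antecedents are present."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for s in list(derived):
            # Match the shape ("if", p, q) where p has been derived...
            if isinstance(s, tuple) and s[0] == "if" and s[1] in derived:
                if s[2] not in derived:
                    derived.add(s[2])   # ...and blindly add q.
                    changed = True
    return derived


facts = {("if", "rain", "wet"), ("if", "wet", "slippery"), "rain"}
print(sorted(f for f in modus_ponens(facts) if isinstance(f, str)))
# → ['rain', 'slippery', 'wet']
```

The procedure chains inferences (‘rain’ yields ‘wet’ yields ‘slippery’) without anything resembling a commitment or an entitlement figuring in the mechanism, which is all the point requires.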

Normative intuitions, on the other hand, are equally useless. If ‘feeling right’ had anything but a treacherous relationship with ‘being right,’ we wouldn’t be having this conversation. Not only are we blind to the astronomical complexities of what we are and what we do, we’re blind to this blindness as well! Like Plato’s prisoners, normative philosophers could be shackled to a play of shadows, convinced they see everything they need to see simply for want of information otherwise.

But aside from intuition (or whatever it is that disposes us to affirm certain ‘inferences’ more than others), just what does inform normative theoretical vocabularies?

Good question!

On the mechanical perspective, normative cognition involves the application of specialized heuristics in specialized problem-ecologies—ways we’ve evolved (and learned) to muddle through our own mad complexities. When I utter ‘use’ I’m deploying something mechanical, a gadget that allows me to breeze past the fact of my mechanical blindness and to nevertheless ‘cognize’ given that the gadget and the problem ecologies are properly matched. Moreover, since I understand that ‘use,’ like ‘meaning,’ is a gadget, I know better than to hope that second-order applications of this and other related gadgets to philosophical problem-ecologies will solve much of anything—that is, unless your problem happens to be filling lecture time!

So when Brandom writes, for instance, “What we could call semantic pragmatism is the view that the only explanation there could be for how a given meaning gets associated with a vocabulary is to be found in the use of that vocabulary…” (Extending the Project of Analysis, 11), I hear the claim that the heuristic misapplications characteristic of traditional semantic philosophy can only be resolved via the heuristic misapplications characteristic of traditional pragmatic philosophy. We know that normative cognition is profoundly heuristic. We know that heuristics possess problem ecologies, that they are only effective in parochial contexts. Given this, the burning question for any project like Brandom’s has to be whether the heuristics he deploys are even remotely capable of solving the problems he tackles.

One would think this is a pretty straightforward question deserving a straightforward answer—and yet, whenever I raise it, it’s either passed over in silence or I’m told that it doesn’t apply, that it runs roughshod over some kind of magically impermeable divide. Most recently I was told that my account refuses to recognize that we have ‘perfectly good descriptions’ of things like mathematical proof procedures, which, since they can be instantiated in a variety of mechanisms, must be considered independently of mechanism.

Do we have perfectly good descriptions of mathematical proof procedures? This is news to me! Every time I dip my toe in the philosophy of mathematics I’m amazed by the florid diversity of incompatible theoretical interpretations. In fact, it seems pretty clear that we have no consensus-compelling idea of what mathematics is.

Does the fact that various functions can be realized in a variety of different mechanisms mean that those functions must be considered independently of mechanism altogether? Again, this is news to me. As convenient as it is to pluck apparently identical functions from a multiplicity of different mechanisms in certain problem contexts, it simply does not follow that one must do the same for all problem contexts. For one, how do we know we’ve got those functions right? Perhaps the granularity of the information available occludes a myriad of functional differences. Consider money: despite being a prototypical ‘virtual machine’ (as Dennett calls it in his latest book), there can be little doubt that the mechanistic details of its instantiation have a drastic impact on its function. The kinds of computerized nanosecond transactions now beginning to dominate financial markets could make us pine for good old ‘paper changing hands’ days soon enough. Or consider normativity: perhaps our blindness to the heuristic specificity of normative cognition has led us to theoretically misconstrue its function altogether. There’s gotta be some reason why no one seems to agree. Perhaps mathematics baffles us simply because we cannot intuit how it is instantiated in the human machine! We like to think, for instance, that the atemporal systematicity of mathematics is what makes it so effective—but how do we know this isn’t just another ‘noocentric’ conceit? After all, we have no way of knowing what function our conscious awareness of mathematical cognition plays in mathematical cognition more generally. All that seems certain is that it is not the whole story. Perhaps our apparently all-important ‘abstractions’ are better conceived as low-dimensional shadows of what is actually going on.

And all this is just to say that normativity, even in its most imposing, formal guises, isn’t something magical. It is an evolved capacity to solve specific problems given limited resources. It is natural— not normative. As a natural feature of human cognition, it is simply another object of ongoing scientific inquiry. As another object of ongoing scientific inquiry, we should expect our traditional understanding to be revolutionized, that positions such as ‘inferentialism’ will come to sound every bit as prescientific as they in fact are. To crib a conceit of Feynman’s: the more we learn, the more the neural stage seems too big for the normative philosopher’s drama.

The Four Goads (at the Crossroads)

by rsbakker

So I finished the first draft of The Unholy Consult at 3:14 pm yesterday afternoon. Things are feeling kinda surreal – it’s been a helluva long haul, man! There’s still a tremendous amount of work to be done. I have exhaustive rewrites planned for a couple of the plot-lines – about a quarter of the book all told. But for whatever reason I became insanely meticulous fleshing out the master plot, and even though it remains uber-generic all the way down, I’m pretty sure nothing like it has been written before. Whether that’s a good or bad thing, I don’t know. The best I can do is take it to the limit of my abilities and nothing more.

I remember having lunch with Guy Kay and Peter Halasz in Toronto not long after the publication of The Darkness that Comes Before. After explaining my ridiculous multi-volume blueprint for the series, Guy earnestly began trying to talk me out of the notion. He started listing all the notorious foibles of the high fantasy long form, how the plot-lines ‘bush,’ how the quality of subsequent installments drops off as the author’s enthusiasm inevitably wanes, and how it was simply impossible, given the sheer creaking weight of all the verbiage that has come before, to provide a climax that was anything but anti-climactic…

“But imagine,” I replied, “a multi-volume series as tight as a single book!”

I’ve been imagining ever since. I already had a preposterously long list of goals: to avoid sentimentalism in all its nefarious guises; to portray a truly septic ancient world, one as steeped in bigotry and brutality as was our own; to portray psychologically realistic characters; to sustain a lyrical scriptural tone; and to resist ideological anachronisms – to challenge rather than pander to the inevitable moral pieties of certain readers.

To these I added four more Goads:

1) Stick to the original cast.

2) Strive to make each book better than the last.

3) Resist the urge to ‘go baroque.’

4) Write the conclusion that everything preceding demands.

Or in sum, stay true to my original vision.

It was sometime after the publication of The Warrior-Prophet, I think, that I realized how my first list had pretty much doomed me to be a genre outlier, a cult as opposed to commercially successful writer. The Attack of the Femtards was something I had anticipated, even courted – but unfortunately moral notoriety doesn’t make for many book sales! (Quick word of advice: If you ever have to defend yourself from a morality-based character attack, be funny, because actual arguments, no matter how nifty, will avail you nothing). Given that the whole point of importing ‘literary’ complexities into epic fantasy was to reach out, to short circuit the way technology allows us to spontaneously group ourselves according to patterns of cultural consumption, to cleanse the incipient heretics from our reading lists, I see the series as largely an artistic failure so far.

And I’ve found myself making a mantra of the Four Goads, telling myself that if I could follow through on my nutbar vision then I will have done something too peculiar to easily dismiss for reasons of righteousness or taste – a series that demands careful consideration, love or hate. The idea was to write something monstrous, a kind of Lovecraftian code that I could upload into the collective mainframe, where it would hunch upon so many borders as to become a crossroads, a passage between otherwise incompatible empires.

Well… It is monstrous! And for this lonely reader at least, it cleaves true as true to its founding vision.

For all of you gnashing and rending for the wait, I apologize. Your chance to judge will come soon enough!