The Eliminativistic Implicit II: Brandom in the Pool of Shiloam


In “The Eliminativistic Implicit I,” we saw how the implicit anchors the communicative solution of humans and their activities. Since comprehension consists in establishing connections between behaviours and their precursors, the inscrutability of those precursors requires we use explanatory posits, suppositional surrogate precursors, to comprehend ourselves and our fellows. The ‘implicit’ is a kind of compensatory mechanism, a communicative prosthesis for neglect, a ‘blank box’ for the post facto proposal of various, abductively warranted precursors.

We also saw how the implicit possessed a number of different incarnations:

1) The Everyday Implicit: The regime of folk posits adapted to solve various practical problems involving humans (and animals).

2) The Philosophical Implicit: The regime of intentional posits thought to solve aspects of the human in general.

3) The Psychological Implicit: The regime of functional posits thought to solve various aspects of the human in general.

4) The Mechanical Implicit: The regime of neurobiological posits thought to solve various aspects of the human in general.

The overarching argument I’m pressing is that only (4) holds the key to any genuine theoretical understanding of (1-3). On my account of (4), (1) is an adaptive component of socio-communicative cognition, (2) is largely an artifact of theoretical misapplications of those heuristic systems, and (3) represents an empirical attempt to approximate (4) on the basis of indirect behavioural evidence.

In this episode, the idea is to illustrate how both the problems and the apparent successes of the Philosophical Implicit can be parsimoniously explained in terms of neglect and heuristic misapplication via Robert Brandom’s magisterial Making It Explicit. We’ll consider what motivates Brandom’s normative pragmatism, why he thinks that only normative cognition can explain normative cognition. Without this motivation, the explanation of normative cognition defaults to natural cognition (epitomized by science), and Brandom quite simply has no subject matter. The cornerstone of his case is the Wittgensteinian gerrymandering argument against Regularism. As I hope to show, Blind Brain Theory dismantles this argument with surprising facility. And it does so, moreover, in a manner that explains why so many theorists (including myself at one time!) are so inclined to find the argument convincing. As it turns out, the intuitions that motivate Normativism turn on a cluster of quite inevitable metacognitive illusions.


Blind Agents

Making It Explicit: Reasoning, Representing, and Discursive Commitment is easily the most sustained and nuanced philosophical consideration of the implicit I’ve encountered. I was gobsmacked when I first read it in the late 90s. Stylistically, it had a combination of Heideggerean density and Analytic clarity that I found narcotic. Argumentatively, I was deeply impressed by the way Brandom’s interpretive functionalism seemed to actually pull intentional facts from natural hats, how his account of communal ‘taking as’ seemed to render normativity at once ‘natural’ and autonomous. For a time, I bought into a great deal of what Brandom had to say—I was particularly interested in working my ‘frame ontology’ into his normative framework. Making It Explicit had become a big part of my dissertation… ere epic fantasy saved my life!

I now think I was deluded.

In this work, Brandom takes nothing less than the explication of the ‘game of giving and asking for reasons’ as his task, “making explicit the implicit structure characteristic of discursive practice as such” (649). He wants to make explicit the role that making explicit plays in discursive cognition. It’s worth pausing to ponder the fact that we do so very many things with only the most hazy or granular second-order understanding. It might seem so platitudinal as to go without saying, but it’s worth noting in passing at least: Looming large in the implicature of all accounts such as Brandom’s is the claim that we somehow know the world without ever knowing how we know the world.

As we saw in the previous installment, the implicit designates a kind of profound cognitive incapacity, a lack of knowledge regarding our own activities. The implicit entails what might be called a Blind Agent Thesis, or BAT. Brandom, by his own admission, is attempting to generalize the behaviour of the most complicated biomechanical system known to science while remaining almost entirely blind to the functioning of that system. (He just thinks he’s operating at an ‘autonomous social functional’ level). He is, as we shall see, effectively arguing his own particular BAT.

Insofar as every theoretician, myself included, is trying to show ‘what everyone is missing,’ there’s a sense in which something like BAT is hard to deny. Why all the blather, otherwise? But this negative characterization clearly has a problem: How could we do anything without knowing how to do it? Obviously we have to ‘know how’ in some manner, otherwise we wouldn’t be able to do anything at all! This is the sense in which the implicit can be positively characterized as a species of knowing in its own right. And this leads us to the quasi-paradoxical understanding of the implicit as ‘knowing without knowing,’ a knowing how to do something without knowing how to discursively explain that doing.

Making explicit, Brandom is saying, has never been adequately made explicit—this despite millennia of philosophical disputation. He (unlike Kant, say) never offers any reason why this is the case, any consideration of what it is about making explicit in particular that should render it so resistant to explication—but then philosophers are generally prone to take the difficulty of their problems as a given. (I’m not the only one out there shouting that the problem I happen to be working on is, like, the most important problem ever!) I mention this because any attempt to assay the difficulty of the problem of making making-explicit explicit would have explicitly begged the question of whether he (or anyone else) possessed the resources required to solve the problem.

You know, as blind and all.

What Brandom provides instead is an elegant reprise of the problem’s history, beginning with Kant’s fundamental ‘transformation of perspective,’ the way he made explicit the hitherto implicit normative dimension of making explicit, what allowed him “to talk about the natural necessity whose recognition is implicit in cognitive or theoretical activity, and the moral necessity whose recognition is implicit in practical activity, as species of one genus” (10).

Kant, in effect, had discovered something that humanity had been all but oblivious to: the essentially prescriptive nature of making explicit. Of course, Brandom almost entirely eschews Kant’s metaphysical commitments: for him, normative constraint lies in the attributions of other agents and nowhere else. Kant, in other words, had not so much illuminated the darkness of the implicit (which he baroquely misconstrues as ‘transcendental’) as snatched one crucial glimpse of its nature.

Brandom attributes the next glimpse to Frege, with his insistence on “respecting and enforcing the distinction between the normative significance of applying concepts and the causal consequences of doing so” (11). What Frege made explicit about making explicit, in other words, was its systematic antipathy to causal explanation. As Brandom writes:

“Psychologism misunderstands the pragmatic significance of semantic contents. It cannot make intelligible the applicability of norms governing the acts that exhibit them. The force of those acts is a prescriptive rather than a descriptive affair; apart from their liability to assessments of judgments as true and inferences as correct, there is no such thing as judgment or inference. To try to analyze the conceptual contents of judgments in terms of habits or dispositions governing the sequences of brain states or mentalistically conceived ideas is to settle on the wrong sort of modality, on causal necessitation rather than rational or cognitive right.” (12)

Normativity is naturalistically inscrutable, and thanks to Kant (“the great re-enchanter,” as Turner (2010) calls him), we know that making explicit is normative. Any explication of the implicit of making explicit, therefore, cannot be causal—which is to say, mechanistic. Frege, in other words, makes explicit a crucial consequence of Kant’s watershed insight: the fact that making explicit can only be made explicit in normative, as opposed to natural, terms. Explication is an intrinsically normative activity. Making causal constraints explicit at most describes what systems will do, never prescribes what they should do. Since explication is intrinsically normative, making its governing causal constraints explicit has the effect of rendering the activity unintelligible. The only way to make explication theoretically explicit is to make explicit the implicit normative constraints that make it possible.

Which leads Brandom to the third main figure of his brief history, Wittgenstein. Thus far, we know only that explication is an intrinsically normative affair—our picture of making explicit is granular in the extreme. What are norms? Why do they have the curious ‘force’ that they do? What does that force consist in? Even if Kant is only credited with making explicit the normativity of making explicit, you could say the bulk of his project is devoted to exploring questions precisely like these. Consider, for instance, his explication of reason:

“But of reason one cannot say that before the state in which it determines the power of choice, another state precedes in which this state itself is determined. For since reason itself is not an appearance and is not subject at all to any conditions of sensibility, no temporal sequence takes place in it even as to its causality, and thus the dynamical law of nature, which determines the temporal sequence according to rules, cannot be applied to it.” Kant, The Critique of Pure Reason, 543

Reason, in other words, is transcendental, something literally outside nature as we experience it, outside time, outside space, and yet somehow fundamentally internal to what we are. The how of human cognition, Kant believed, lies outside the circuit of human cognition, save for what could be fathomed via transcendental deduction. Kant, in other words, not only had his own account of what the implicit was, he also had an account for what rendered it so difficult to make explicit in the first place!

He had his own version of BAT, what might be called a Transcendental Blind Agent Thesis, or T-BAT.

Brandom, however, far prefers the later Wittgenstein’s answers to the question of how the intrinsic normativity of making explicit should be understood. As he writes,

“Wittgenstein argues that proprieties of performance that are governed by explicit rules do not form an autonomous stratum of normative statuses, one that could exist though no other did. Rather, proprieties governed by explicit rules rest on proprieties governed by practice. Norms that are explicit in the form of rules presuppose norms implicit in practices.” (20)

Kant’s transcendental represents just such an ‘autonomous stratum of normative statuses.’ The problem with such a stratum, aside from the extravagant ontological commitments allegedly entailed, is that it seems incapable of dealing with a peculiar characteristic of normative assessment known since ancient times in the form of Agrippa’s trilemma or the ‘problem of the criterion.’ The appeal to explicit rules is habitual, perhaps even instinctive, when we find ourselves challenged on some point of communication. Given the regularity with which such appeals succeed, it seems natural to assume that the propriety of any given communicative act turns on the rules we are prone to cite when challenged. The obvious problem, however, is that rule citing is itself a communicative act that can be challenged. It stems from occluded precursors the same as anything else.

What Wittgenstein famously argues is that what we’re appealing to in these instances is the assent of our interlocutors. If our interlocutors happen to disagree with our interpretation of the rule, suddenly we find ourselves with two disputes, two improprieties, rather than one. The explicit appeal to some rule, in other words, is actually an implicit appeal to some shared system of norms that we think will license our communicative act. This is the upshot of Wittgenstein’s regress of rules argument, the contention that “while rules can codify the pragmatic normative significance of claims, they do so only against a background of practices permitting the distinguishing of correct from incorrect applications of those rules” (22).

Since this account has become gospel in certain philosophical corners, it might pay to block out the precise way this Wittgensteinian explication of the implicit does and does not differ from the Kantian explication. One comforting thing about Wittgenstein’s move, from a naturalist’s standpoint at least, is that it adverts to the higher-dimensionality of actual practices—it’s pragmatism, in effect. Where Kant’s making explicit is governed from somewhere beyond the grave, Wittgenstein’s is governed by your friends, family, and neighbours. If you were to name a signature difference between their views, you could cite this difference in dimensionality, the ‘solidity’ or ‘corporeality’ that Brandom appeals to in his bid to ground the causal efficacy of his elaborate architecture (631-2).

Put differently, the blindness on Wittgenstein’s account belongs to you and everyone you know. You could say he espouses a Communal Blind Agent Thesis, or C-BAT. The idea is that we’re continually communicating with one another while utterly oblivious as to how we’re communicating with one another. We’re so oblivious, in fact, we’re oblivious to the fact we are oblivious. Communication just happens. And when we reflect, it seems to be all that needs to happen—until, that is, the philosopher begins asking his damn questions.

It’s worth pointing out, while we’re steeping in this unnerving image of mass, communal blindness, that Wittgenstein, almost as much as Kant, was in a position analogous to that of empirical psychologists researching cognitive capacities back in the 1950s and 1960s. With reference to the latter, Piccinini and Craver have argued (“Integrating psychology and neuroscience: functional analyses as mechanism sketches,” 2011) that informatic penury was the mother of functional invention, that functional analysis was simply psychology’s means of making do, a way to make the constitutive implicit explicit in the absence of any substantial neuroscientific information. Kant and Wittgenstein are pretty much in the same boat, only absent any experimental means to test and regiment their guesswork. The original edition of Philosophical Investigations, in case you were wondering, was published in 1953, which means Wittgenstein’s normative contextualism was cultured in the very same informatic vacuum as functional analysis. And the high-altitude moral, of course, amounts to the same: times have changed.

The cognitive sciences have provided a tremendous amount of information regarding our implicit, neurobiological precursors, so much so that the mechanical implicit is a given. The issue now isn’t one of whether the implicit is causal/mechanical in some respect, but whether it is causal/mechanical in every respect. The question, quite simply, is one of what we are blind to. Our biology? Our ‘mental’ programming? Our ‘normative’ programming? The more we learn about our biology, the more we fill in the black box with scientific facts, the more difficult it seems to become to make sense of the latter two.


Ineliminable Inscrutability Scrutinized and Eliminated

Though he comes nowhere near framing the problem in these explicitly informatic terms, Brandom is quite aware of this threat. American pragmatism has always maintained close contact with the natural sciences, and post-Quine, at least, it has possessed more than its fair share of eliminativist inclinations. This is why he goes to such lengths to argue the ineliminability of the normative. This is why he follows his account of Kant’s discovery of the normativity of the performative implicit with an account of Frege’s critique of psychologism, and his account of Wittgenstein’s regress argument against ‘Regulism’ with an account of his gerrymandering argument against ‘Regularism.’

Regularism proposes we solve the problem of rule-following with patterns of regularities. If a given performance conforms to some pre-existing pattern of performances, then we call that performance correct or competent. If it doesn’t so conform, then we call it incorrect or incompetent. “The progress promised by such a regularity account of proprieties of practice,” Brandom writes, “lies in the possibility of specifying the pattern or regularity in purely descriptive terms and then allowing the relation between regular and irregular performance to stand in for the normative distinction between what is correct and what is not” (MIE 28). The problem with Regularism, however, is “that it threatens to obliterate the contrast between treating a performance as subject to normative assessment of some sort and treating it as subject to physical laws” (27). Thus the challenge confronting any Regularist account of rule-following, as Brandom sees it, is to account for its normative character. Everything in nature ‘follows’ the ‘rules of nature,’ the regularities isolated by the natural sciences. So what does the normativity that distinguishes human rule-following consist in?

“For a regularist account to weather this challenge, it must be able to fund a distinction between what is in fact done and what ought to be done. It must make room for the permanent possibility of mistakes, for what is done or taken to be correct nonetheless to turn out to be incorrect or inappropriate according to some rule or practice.” (27)

The ultimate moral, of course, is that there’s simply no way this can be done, there’s no way to capture the distinction between what happens and what ought to happen on the basis of what merely happens. No matter what regularity the Regularist adduces ‘to play the role of norms implicit in practice,’ we find ourselves confronted by the question of whether it’s the right regularity. The fact is any number of regularities could play that role, stranding us with the question of which regularity one should conform to—which is to say, the question of the very normative distinction the Regularist set out to solve in the first place. Adverting to dispositions to pick out the relevant regularity simply defers the problem, given that “[n]obody ever acts incorrectly in the sense of violating his or her own dispositions” (29).

For Brandom, as with Wittgenstein, the problem of Regularism is intimately connected to the problem of Regulism: “The problem that Wittgenstein sets up…” he writes, “is to make sense of a notion of norms implicit in practice that will not lose either the notion of the implicitness, as regulism does, or the notion of norms, as simple regularism does” (29). To see this connection, you need only consider one of Wittgenstein’s more famous passages from Philosophical Investigations:

§217. “How am I able to obey a rule?”–if this is not a question about causes, then it is about the justification for my following the rule in the way I do.

If I have exhausted the justifications I have reached bedrock, and my spade is turned. Then I am inclined to say: “This is simply what I do.”

The idea, famously, is that rule-following is grounded, not in explicit rules, but in our actual activities, our practices. The idea, as we saw above, is that rule-following is blind. It is ‘simply what we do.’ “When I obey a rule, I do not choose,” Wittgenstein writes. “I obey the rule blindly” (§219). But if rule-following is blind, just what we find ourselves doing in certain contexts, then in what sense is it normative? Brandom quotes McDowell’s excellent (certainly from a BBT standpoint!) characterization of the problem in “Wittgenstein on Following a Rule”: “How can a performance be nothing but a ‘blind’ reaction to a situation, not an attempt to act on interpretation (thus avoiding Scylla); and be a case of going by a rule (avoiding Charybdis)?” (Mind, Value, and Reality, 242).

Wittgenstein’s challenge, in other words, is one of theorizing nonconscious rule-following in a manner that does not render normativity some inexplicable remainder. The challenge is to find some way to avoid Regulism without lapsing into Regularism. Of course, we’ve grown inured to the notion of ‘implicit norms’ as a theoretical explanatory posit, so much so as to think them almost self-evident—I know this was once the case for me. But the merest questioning quickly reveals just how odd implicit norms are. Nonconscious rule-following is automatic rule-following, after all, something mechanical, dispositional. Automaticity seems to preclude normativity, even as it remains amenable to regularities and dispositions. Although it seems obvious that evaluation and justification are things that we regularly do, that we regularly engage in normative cognition in navigating our environments (natural and social), it is by no means clear that only normative posits can explain normative cognition. Given that normative cognition is another natural artifact, the product of evolution, and given the astounding explanatory successes of science, it stands to reason that natural, not supernatural, posits are likely what’s required.

All this brings us back to C-BAT, the fact that Wittgenstein’s problem, like Brandom’s, is the problem of neglect. ‘This is simply what I do,’ amounts to a confession of abject ignorance. Recall the ‘Hidden Constraint Model’ of the implicit from our previous discussion. Cognizing rule-following behaviour requires cognizing the precursors to rule-following behaviour, precursors that conscious cognition systematically neglects. Most everyone agrees on the biomechanical nature of those precursors, but Brandom (like intentionalists more generally) wants to argue that biomechanically specified regularities and dispositions are not enough, that something more is needed to understand the normative character of rule-following, given the mysterious way regularities and dispositions preclude normative cognition. The only way to avoid this outcome, he insists, is to posit some form of nonconscious normativity, a system of preconscious, pre-communicative ‘rules’ governing cognitive discourse. The upshot of Wittgenstein’s arguments against Regularism seems to be that only normative posits can adequately explain normative cognition.

But suddenly, the stakes are flipped. Just as the natural is difficult to understand in the context of the normative, so too is the normative difficult to understand in the context of the natural. For some critics, this is difficulty enough. In Explaining the Normative, for instance, Stephen Turner does an excellent job tracking, among other things, the way Normativism attempts to “take back ground lost to social science explanation” (5). He begins by providing a general overview of the Normativist approach, then shows how these self-same tactics characterized social science debates of the early twentieth-century, only to be abandoned as their shortcomings became manifest. “The history of the social sciences,” he writes, “is a history of emancipation from the intellectual propensity to intentionalize social phenomenon—this was very much part of the process that Weber called the disenchantment of the world” (147). His charge is unequivocal: “Brandom,” he writes, “proposes to re-enchant the world by re-instating the belief in normative powers, which is to say, powers in some sense outside of and distinct from the forces known to science” (4). But although this is difficult to deny in a broad stroke sense, he fails to consider (perhaps because his target is Normativism in general, and not Brandom, per se) the nuance and sensitivity Brandom brings to this very issue—enough, I think, to walk away theoretically intact.

In the next installment, I’ll consider the way Brandom achieves this via Dennett’s account of the Intentional Stance, but for the nonce, it’s important that we keep the problem of re-enchantment on the table. Brandom is arguing that the inability of natural posits to explain normative cognition warrants a form of theoretical supernaturalism, a normative metaphysics, albeit one he wants to make as naturalistically palatable as possible.

Even though neglect is absolutely essential to their analyses of Regulism and Regularism, neither Wittgenstein nor Brandom so much as pauses to consider it. As astounding as it is, they simply take our utter innocence of our own natural and normative precursors as a given, an important feature of the problem ecology under consideration to be sure, but otherwise irrelevant to the normative explication of normative cognition. Any role neglect might play beyond anchoring the need for an account of implicit normativity is entirely neglected. The project of Making It Explicit is nothing other than the project of making the activity of making explicit explicit, which is to say, the project of overcoming metacognitive neglect regarding normative cognition, and yet nowhere does Brandom so much as consider just what he’s attempting to overcome.

Not surprisingly, this oversight proves catastrophic—for the whole of Normativism, and not simply Brandom.

Just consider, for instance, the way Brandom completely elides the question of the domain specificity of normative cognition. Normative cognition is a product of evolution, part of a suite of heuristic systems adapted to solve some range of social problems as effectively as possible given the resources available. It seems safe to surmise that normative cognition, as heuristic, possesses what Todd, Gigerenzer, and the ABC Research Group (2012) call an adaptive ‘problem-ecology,’ a set of environments possessing complementary information structures. Heuristics solve via the selective uptake of information, wedding them, in effect, to specific problem-solving domains. ‘Socio-cognition,’ which manages to predict, explain, even manipulate astronomically complex systems on the meagre basis of observable behaviour, is paradigmatic of a heuristic system. In the utter absence of causal information, it can draw a wide variety of reliable causal conclusions, but only within a certain family of problems. As anthropomorphism, the personification or animation of environments, shows, humans are predisposed to misapply socio-cognition to natural environments. Pseudo-solving natural environments via socio-cognition may have solved various social problems, but precious few natural ones. In fact, the process of ‘disenchantment’ can be understood as a kind of ‘rezoning’ of socio-cognition, a process of limiting its application to those problem-ecologies where it actually produces solutions.

Which leads us to the question: So what, then, is the adaptive problem ecology of normative cognition? More specifically, how do we know that the problem of normative cognition belongs to the problem ecology of normative cognition?

As we saw, Brandom’s argument against Regularism could itself be interpreted as a kind of ‘ecology argument,’ as a demonstration of how the problem of normative cognition does not belong to the problem ecology of natural cognition. Natural cognition cannot ‘fund the distinction between ought and is.’ Therefore the problem of normative cognition does not belong to the problem ecology of natural cognition. In the absence of any alternatives, we then have an abductive case for the necessity of using normative cognition to solve normative cognition.

But note how recognizing the heuristic, or ecology-dependent, nature of normative cognition has completely transformed the stakes of Brandom’s original argument. The problem for Regularism turns, recall, on the conspicuous way mere regularities fail to capture the normative dimension of rule-following. But if normative cognition were heuristic (as it almost certainly is), if what we’re prone to identify as the ‘normative dimension’ is something specific to the application of normative cognition, then this becomes the very problem we should expect. Of course the normative dimension disappears absent the application of normative cognition! Since Regularism involves solving normative cognition using the resources of natural cognition, it simply follows that it fails to engage resources specific to normative cognition. Consider Kripke’s formulation of the gerrymandering problem in terms of the ‘skeptical paradox’: “For the sceptic holds that no fact about my past history—nothing that was ever in my mind, or in my external behavior—establishes that I meant plus rather than quus” (Wittgenstein on Rules and Private Language, 13). Even if we grant a rule-follower access to all factual information pertaining to rule-following, a kind of ‘natural omniscience,’ they will still be unable to isolate any regularity capable of playing ‘the role of norms implicit in practice.’ Again, this is precisely what we should expect given the domain specificity of normative cognition proposed here. If ‘normative understanding’ were the artifact of a cognitive system dedicated to the solution of a specific problem-ecology, then it simply follows that the application of different cognitive systems would fail to produce normative understanding, no matter how much information was available.
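To see the underdetermination concretely: Kripke glosses ‘quus’ as agreeing with addition whenever both arguments fall below some threshold never yet encountered (his example uses 57) and returning 5 otherwise. The following minimal sketch is only an illustration of that point, not anything drawn from Kripke or Brandom; the function names and the sample history are assumptions made for the example.

def plus(x, y):
    # Ordinary addition: the rule I take myself to have been following.
    return x + y

def quus(x, y, threshold=57):
    # Kripke's gerrymandered rival: agrees with addition whenever both
    # arguments fall below the threshold, returns 5 otherwise.
    return x + y if x < threshold and y < threshold else 5

# A finite record of my past usage, all with arguments below 57.
past_usage = [(2, 3), (12, 7), (20, 19)]

# Every fact about my past behaviour fits both hypotheses equally well.
print(all(plus(x, y) == quus(x, y) for x, y in past_usage))  # True

# Only a novel case beyond the threshold would discriminate them, and the
# sceptic can always redraw the gerrymandered function around whatever
# new cases accumulate.
print(plus(68, 57), quus(68, 57))  # 125 5

No amount of such behavioural or dispositional data picks out which function was ‘really’ being followed, which is the gerrymandering point restated in purely natural terms.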

What doesn’t follow is that normative cognition thus lies outside the problem ecology of natural cognition, let alone inside the problem ecology of normative cognition. The ‘explanatory failure’ that Brandom and others use to impeach the applicability of natural cognition to normative cognition is nothing of the sort. It simply makes no sense to demand that one form of cognition solve another form of cognition as if it were that other form. We know that normative cognition belongs to social cognition more generally, and that social cognition—‘mindreading’—operates heuristically, that it has evolved to solve astronomically complicated biomechanical problems involving the prediction, understanding, and manipulation of other organisms absent detailed biomechanical information. Since it is adapted to solve in the absence of this information, it stands to reason that providing that information, facts regarding biomechanical regularities, will render it ineffective—‘grind cognitive gears,’ you could say.

Since these ‘technical details’ are entirely invisible to ‘philosophical reflection’ (thanks to metacognitive neglect), the actual ecological distinction between these systems escapes Brandom, and he assumes, as all Normativists assume, that the inevitable failure of natural cognition to generate instances of normative cognition means that only normative cognition can solve normative cognition. Blind to our cognitive constitution, he (like anyone else) has nothing but instances of normative cognition available: our conscious experience of normative cognition consists of nothing but these instances. Explaining normative cognition is thus conflated with replacing normative cognition. ‘Competence’ becomes yet another ‘spooky explanandum,’ another metacognitive inkling, like ‘qualia,’ or ‘content,’ that seems to systematically elude the possibility of natural cognition (for suspiciously similar reasons).

This apparent order of supernatural explananda then provides the abductive warrant upon which Brandom’s entire project turns—all T-BAT and C-BAT approaches, in fact. If natural cognition is incapable, then obviously something else is required. Impressed by how our first-order social troubleshooting makes such good use of the Everyday Implicit, and oblivious to the ecological limits of the heuristic systems responsible, we effortlessly assume that making use of some Philosophical Implicit will likewise enable second-order social troubleshooting… that tomes like Making It Explicit actually solve something.

But as the foregoing should make clear, precisely the opposite is the case. As a system adapted to troubleshoot first-order social ecologies, normative cognition seems unlikely to theoretically solve normative cognition in any satisfying manner. The very theoretical problems that plague Normativism—supernaturalism, underdetermination, and practical inapplicability—are the very problems we should expect if normative cognition were not in fact among the problems that normative cognition can solve.

As an evolved, biological capacity, however, normative cognition clearly belongs to the problem ecology of natural cognition. Simply consider how much the above sketch has managed to ‘make explicit.’ In parsimonious fashion it explains: 1) the general incompatibility of natural and normative cognition; 2) the inability of Regularism to ‘play the role of norms implicit in practice’; 3) why this inability suggests the inapplicability of natural cognition to the problem of normative cognition; 4) why Normativism seems the only alternative as a result; and 5) why Normativism nonetheless suffers the debilitating theoretical problems it does. It solves the notorious Skeptical Paradox, and much else besides, using only the idiom of natural cognition, which is to say, in a manner not only compatible with the life sciences, but empirically tractable as well.

Brandom is the victim of a complex of illusions arising out of metacognitive neglect. Wittgenstein, who had his own notion of heuristics and problem ecologies (grammars and language games), was sensitive to the question of what kinds of problems could be solved given the language we find ourselves stranded with. As a result, he eschews the kind of systematic normative metaphysics that Brandom epitomizes. He takes neglect seriously insofar as ‘this is simply what I do’ demarcates, for him, the pale of credible theorization. Even so, he succumbs to a perceived need to submit, however minimally or reluctantly, the problem of normative cognition (in terms of rule-following) to the determinations of normative cognition, and is thus compelled to express his insights in the self-same supernatural idiom as Brandom, who eschews what is most valuable in Wittgenstein, his skepticism, and seizes on what is most problematic, his normative metaphysics.

There is a far more parsimonious way. We all agree humans are physical systems nested within a system of such systems. What we need to recognize is how being so embedded imposes profound constraints on what can and cannot be cognized. What can be readily cognized are other systems (within a certain range of complexity). What cannot be readily cognized is the apparatus of cognition itself. The facts we call ‘natural’ belong to the former, and the facts we call ‘intentional’ belong to the latter. Where the former commands an integrated suite of powerful environmental processors, the latter relies on a hodgepodge of specialized socio-cognitive and metacognitive hacks. Since we have no inkling of this, we have no inkling of their actual capacities, and so run afoul of a number of metacognitive impasses. So for instance, intentional cognition has evolved to overcome neglect, to solve problems in the absence of causal information. This is why philosophical reflection convinces us we somehow stand outside the causal order via choice or reason or what have you. We quite simply confuse an incapacity, our inability to intuit our biomechanicity, with a special capacity, our ability to somehow transcend or outrun the natural order.

We are physical in such a way that we cannot intuit ourselves as wholly physical. To cognize nature is to be blind to the nature of cognizing. To be blind to that blindness is to think cognizing has no nature. So we assume that nature is partial, and that we are mysteriously whole, a system unto ourselves.

Reason be praised.

 

Science, Nihilism, and the Artistry of Nature (by Ben Cain)


Technologically advanced societies may well destroy themselves, but there are two other reasons to worry that science rather than God will usher in the apocalypse, directly destroying us by destroying our will to live. The threat in question is nihilism, the loss of faith in our values and thus the wholesale humiliation of all of us, due to science’s tendency to falsify every belief that’s traditionally comforted the masses. The two reasons to suspect that science entails nihilism are that scientists find the world to be natural (fundamentally material, mechanical, and impersonal), whereas traditional values tend to have supernatural implications, and that scientific methods famously bypass intuitions and feelings to arrive at the objective truth.

These two features of science, the content of scientific theories and the scientific methods of inquiry, might seem redundant, since the point about methods is that science is methodologically naturalistic. Thus, the point about the theoretical content might seem to come as no surprise. By definition, a theory that posits something supernatural wouldn’t be scientific. While scientists may be open to learning that the world isn’t a natural place, making that discovery would amount to ending or at least transforming the scientific mode of inquiry. Nevertheless, naturalism, the worldview that explains everything in materialistic and mechanistic terms, isn’t just an artifact of scientific methods. What were once thought to be ghosts and gods and spirits really did turn out to be natural phenomena.

Moreover, scientific objectivity seems a separate cause of nihilism in that, by showing us how to be objective, paradigmatic scientists like Galileo, Newton, and Darwin showed us also how to at least temporarily give up on our commonsense values. After all, in the moment when we’re following scientific procedures, we’re ignoring our preferences and foiling our biases. Of course, scientists still have feelings and personal agendas while they’re doing science; for example, they may be highly motivated to prove their pet theory. But they also know that by participating in the scientific process they’re holding their feelings to the ultimate test. Scientific methods objectify not just the phenomenon but the scientist; as a functionary in the institution, she must follow strict procedures, recording the data accurately, thinking logically, and publishing the results, making her scientific work as impersonal as the rest of the natural world. In so far as nonscientists understand this source of science’s monumental success, we might come to question the worth of our subjectivity, of our private intuitions, wishes, and dreams which scientific methods brush aside as so many distortions.

Despite the imperative to take scientists as our model thinkers in the Age of Reason, we might choose to ignore these two threats to our naïve self-image. Nevertheless, the fear is that distraction, repression, and delusion might work only for so long before the truth outs. You might think, on the contrary, that science doesn’t entail nihilism, since science is a social enterprise and thus it has a normative basis. Scientists are pragmatic and so they evaluate their explanations in terms of rational values of simplicity, fruitfulness, elegance, utility, and so on. Still, the science-centered nihilist can reply, those values might turn out to be mechanisms, as scientists themselves would discover, in which case science would humiliate not just the superstitious masses but the pragmatic theorists and experimenters as well. That is, science would refute not only the supernaturalist’s presumptions but the elite instrumentalist’s view of scientific methods. Science would become just another mechanism in nature and scientific theories would have no special relationship with the facts since from this ultra-mechanistic “perspective,” not even scientific statements would consist of symbols that bear meaning. The scientific process would be seen as consisting entirely of meaningless, pointless, and amoral causal relations—just like any other natural system.

I think, then, this sort of nihilist can resist that pragmatic objection to the suspicion that science entails nihilism and thus poses a grave, still largely unappreciated threat to society. There’s another objection, though, which is harder to discount. The very cognitive approach which is indispensable to scientific discovery, the objectification of phenomena, which is to say the analysis of any pattern in impersonal terms of causal relations, is itself a source of certain values. When we objectify something we’re thereby well-positioned to treat that thing as having a special value, namely an aesthetic one. Objectification overlaps with the aesthetic attitude, which is the attitude we take up when we decide to evaluate something as a work of art, and thus objects, as such, are implicitly artworks.

 

Scientific Objectification and the Aesthetic Attitude

 

There’s a lot to unpack there, so I’ll begin by explaining what I mean by the “aesthetic attitude.” This attitude is explicated differently by Kant, Schopenhauer, and others, but the main idea is that something becomes an artwork when we adopt a certain attitude towards it. The attitude is a paradoxical one, because it involves a withholding of personal interest in the object and yet also a desire to experience the object for its own sake, based on the assumption that such an experience would be rewarding. When an observer is disinterested in experiencing something, but chooses to experience it because she’s replaced her instrumental or self-interested perspective with an object-oriented one so that she wishes to be absorbed by what the object has to offer, as it were, she’s treating the object as a work of art. And arguably, that’s all it means for something to be art.

For example, if I see a painting on a wall and I study it up close with a view to stealing it, because all the while I’m thinking of how economically valuable the painting is, I’m personally interested in the painting and thus I’m not treating it as art; instead, for me the painting is a commodity. Suppose, instead, that I have no ulterior motive as I look at the painting, but I’m also bored by it, so that I’m not passively letting the painting pour its content into me, as it were; which is to say, I have no respect for such an experience in this case and I’m not giving the painting a fair chance to captivate my attention. In that case I’m likewise not treating the painting as art. I’m giving it only a cursory glance, because I lack the selfless interest in letting the painting hold all of my attention and so I don’t anticipate the peculiar pleasure from perceiving the painting that we associate with an aesthetic experience. Whether it’s a painting, a song, a poem, a novel, or a film, the object becomes an artwork when it’s regarded as such, which requires that the observer adopt this special attitude towards it.

Now, scientific objectivity plainly isn’t identical to the aesthetic attitude. After all, regardless of whether scientists think of nature as beautiful when they’re studying the evidence or performing experiments or formulating mechanistic explanations, they do have at least one ulterior motive. Some scientists may have an economic motive, others may be after prestige, but all scientists are interested in understanding how systems work. Their motive, then, is a cognitive one—which is why they follow scientific procedures, because they believe that scientific objectification (mechanistic analysis, careful collection of the data, testing of hypotheses with repeatable experiments, and so on) is the best means of achieving that goal.

However, this cognitive interest posits a virtual aesthetic stance as the means to achieve knowledge. Again, scientists trust that their personal interests are irrelevant to scientific truth and that regardless of how they prefer the world to be, the facts will emerge as long as the scientific methods of inquiry are applied with sufficient rigor. To achieve their cognitive goal, scientists must downplay their biases and personal feelings, and indeed they expect that the phenomenon will reveal its objective, real properties when it’s scientifically scrutinized. The point of science is for us to get out of the way, as much as possible, to let the world speak with its own voice, as opposed to projecting our fantasies and delusions onto the world. Granted, as Kant explained, we never hear that voice exactly—what Pythagoras called the music of the spheres—because in the act of listening to it or of understanding it, we apply our species-specific cognitive faculties and programs. Still, the point is that the institution of science is structured in such a way that the facts emerge because the scientific form of explanation circumvents the scientists’ personalities. This is the essence of scientific objectivity: in so far as they think logically and apply the other scientific principles, scientists depersonalize themselves, meaning that they remove their character from their interaction with some phenomenon and make themselves functionaries in a larger system. This system is just the one in which the natural phenomenon reveals its causal interrelations thanks to the elimination of our subjectivity which would otherwise personalize the phenomenon, adding imaginary and typically supernatural interpretations which blind us to the truth.

And when scientists depersonalize themselves, they open themselves up to the phenomenon: they study it carefully, taking copious notes, using powerful technologies to peer deeply into it, and isolating the variables by designing sterile environments to keep out background noise. This is very like taking up the aesthetic attitude, since the art appreciator too becomes captivated by the work itself, getting lost in its objective details as she sets aside any personal priority she may have. Both the art appreciator and the scientist are personally disinterested when they inspect some object, although the scientist is often just functionally or institutionally so, and both are interested in experiencing the thing for its own sake, although the artist does so for the aesthetic reward whereas the scientist expects a cognitive one. Both objectify what they perceive in that they intend to discern only the subtlest patterns in what’s actually there in front of them, whether on the stage, in the picture frame, or on the novel’s pages, in the case of fine art, or in the laboratory or the wild in the case of science. Thus, art appreciators speak of the patterns of balance and proportion, while scientists focus on causal relations. And the former are rewarded with the normative experience of beauty or are punished with a perception of ugliness, as the case may be, while the latter speak of cognitive progress, of science as the premier way of discovering the natural facts, and indeed of the universality of their successes.

Here, then, is an explanation of what David Hume called the curious generalization that occurs in inductive reasoning, when we infer that because some regularity holds in some cases, therefore it likely holds in all cases. We take our inductive findings to have universal scope because when we reason in that way, we’re objectifying rather than personalizing the phenomenon, and when we objectify something we’re virtually taking up the aesthetic attitude towards it. Finally, when we take up such an attitude, we anticipate a reward, which is to say that we assume that objectification is worthwhile—not just for petty instrumental reasons, but for normative ones, which is to say that objectification functions as a standard for everyone. When you encounter a wonderful work of art, you think everyone ought to have the same experience and that someone who isn’t as moved by that artwork is failing in some way. Likewise, when you discover an objective fact of how some natural system operates, you think the fact is real and not just apparent, that it’s there universally for anyone on the planet to confirm.

Of course, inductive generalization is based also on metaphysical materialism, on the assumptions that the world is made of atoms and that a chunk of matter is just the sort of thing to hold its form and to behave in regular ways regardless of who’s observing it, since material things are impersonal and thus they lack any freedom to surprise. But scientists persist in speaking of their cognitive enterprise as progressive, not just because they assume that science is socially useful, but because scientific findings transcend our instrumental motives since they allow a natural system to speak mainly for itself. Moreover, scientists persist in calling those generalizations laws, despite the unfortunate personal (theistic) connotations, given the comparison with social laws. These facts indicate that inductive reasoning isn’t wholly rational, after all, and that the generalizations are implicitly normative (which isn’t to say moral), because the process of scientific discovery is structurally similar to the experience of art.

 

Natural Art and Science’s True Horror

 

Some obvious questions remain. Are natural phenomena exactly the same as fine artworks? No, since the latter are produced by minds whereas the former are generated by natural forces and elements, and by the processes of evolution and complexification. Does this mean that calling natural systems works of art is merely analogical? No, because the similarity in question isn’t accidental; rather, it’s due to the above theory of art, which says that art is nothing more than what we find when we adopt the aesthetic attitude towards it. According to this account, art is potentially everywhere and how the art is produced is irrelevant.

Does this mean, though, that aesthetic values are entirely subjective, that whether something is art is all in our heads since it depends on that perspective? The answer to this question is more complicated. Yes, the values of beauty and ugliness, for example, are subjective in that minds are required to discover and appreciate them. But notice that scientific truth is likewise just as subjective: minds are required to discover and to understand such truth. What’s objective in the case of scientific discoveries is the reality that corresponds to the best scientific conclusions. That reality is what it is regardless of whether we explain it or even encounter it. Likewise, what’s objective in the case of aesthetics is something’s potential to make the aesthetic appreciation of it worthwhile. That potential isn’t added entirely by the art appreciator, since that person opens herself up to being pleased or disappointed by the artwork. She hopes to be pleased, but the art’s quality is what it is and the truth will surface as long as she adopts the aesthetic attitude towards it, ignoring her prejudices and giving the art a chance to speak for itself, to show what it has to offer. Even if she loathes the artist, she may grudgingly come to admit that he’s produced a fine work, as long as she’s virtually objective in her appreciation of his work, which is to say as long as she treats it aesthetically and impersonally for the sake of the experience itself. Again, scientific objectivity differs slightly from aesthetic appreciation, since scientists are interested in knowledge, not in pleasant experience. But as I’ve explained, that difference is irrelevant since the cognitive agenda compels the scientist to subdue or to work around her personality and to think objectively—just like the art beholder.

So do beauty and ugliness exist as objective parts of the world? As potentials to reward or to punish the person who takes up anything like the aesthetic attitude, including a stance of scientific objectification, given the extent of the harmony or disharmony in the observed patterns, for example, I believe the answer is that those aesthetic properties are indeed as real as atoms and planets. The objective scientist is rewarded ultimately with knowledge of how nature works, while someone in the grip of the aesthetic attitude is rewarded (or punished) with an experience of the aesthetic dimension of any natural or artificial product. That dimension is found in the mechanical aspect of natural systems, since aesthetic harmony requires that the parts be related in certain ways to each other so that the whole system can be perceived as sublime or otherwise transcendent (mind-blowing). Traditional artworks are self-contained and science likewise deals largely with parts of the universe that are analyzed or reduced to systems within systems, each studied independently in artificial environments that are designed to isolate certain components of the system.

Now, such reduction is futile in the case of chaotic systems, but the grandeur of such systems is hardly lessened when the scientist discovers how a system which is sensitive to initial conditions evolves unpredictably as defined by a mathematical formula. Indeed, chaotic systems are comparable to modern and postmodern art as opposed to the more traditional kind. Recent, highly conceptual art or the nonrepresentational kind that explores the limits of the medium is about as unpredictable as a chaotic system. So the aesthetic dimension is found not just in part-whole relations and thus in beauty in the sense of harmony, but in free creativity. Modern art and science are both institutions that idealize the freedom of thought. Freed from certain traditions, artists now create whatever they’re inspired to create; they’re free to experiment, not to learn the natural facts but to push the boundaries of human creativity. Likewise, modern scientists are free to study whatever they like (in theory). And just as such modernists renounce their personal autonomy for the sake of their work, giving themselves over to their muse, to their unconscious inclinations (somewhat like Zen Buddhists who abhor the illusion of rational self-control), or instead to the rigors of institutional science, nature reveals its mindless creativity when chaotic systems emerge in its midst.

But does the scientist actually posit aesthetic values while doing science, given that scientific objectification isn’t identical with the aesthetic attitude? Well, the scientist would generally be too busy doing science to attend to the aesthetic dimension. But it’s no accident that mathematicians are disproportionately Platonists, that early modern scientists saw the cosmic order as attesting to God’s greatness, or that postmodern scientists like Neil deGrasse Tyson, who hosts the rebooted television show Cosmos, labour to convince the average American that naturalism ought to be enough of a religion for them, because the natural facts are glorious if not technically miraculous. The question isn’t whether scientists supply the world with aesthetic properties, like beauty or ugliness, since those properties preexist science as objective probabilities of uplifting or depressing anyone who takes up the aesthetic attitude, which attitude is practically the same as objectivity. Instead, the question here might be whether scientific objectivity compels the scientist to behold a natural phenomenon as art. Assuming there are nihilistic scientists, the answer would have to be no. The reason for this would be the difference in social contexts, which accounts for the difference between the goals and rewards. Again, the artist wants a certain refined pleasure whereas the scientist wants knowledge. But the point is that the scientist is poised to behold natural systems as artworks, just in so far as she’s especially objective.

Finally, we should return to the question of how this relates to nihilism. The fear, raised above, was that because science entails nihilism, the loss of faith in our values and traditions, scientists threaten to undermine the social order even as they lay bare the natural one. I’ve questioned the premise, since objectivity entails instead the aesthetic attitude which compels us to behold nature not as arid and barren but as rife with aesthetic values. Science presents us with a self-shaping universe, with the mindless, brute facts of how natural systems work that scientists come to know with exquisite attention to detail, thanks to their cognitive methods which effectively reveal the potential of even such systems to reward or to punish someone with an aesthetic eye. For every indifferent natural system uncovered by science, we’re well-disposed to appreciating that system’s aesthetic quality—as long as we emulate the scientist and objectify the system, ignoring our personal interests and modeling its patterns, such as by reducing the system to mechanical part-whole relations. The more objective knowledge we have, the more grist for the aesthetic mill. This isn’t to say that science supports all of our values and traditions. Obviously science threatens some of them and has already made many of them untenable. But science won’t leave us without any value at all. The more objective scientists are and the more of physical reality they disclose, the more we can perceive the aesthetic dimension that permeates all things, just by asking for pleasure rather than knowledge from nature.

There is, however, another great fear that should fill in for the nihilistic one. Instead of worrying that science will show us why we shouldn’t believe there’s any such thing as value, we might wonder whether, given the above, science will ultimately present us with a horrible rather than a beautiful universe. The question, then, is whether nature will indeed tend to punish or to reward those of us with aesthetic sensibilities. What is the aesthetic quality of natural phenomena in so far as they’re appreciated as artworks, as aesthetically interpretable products of undead processes? Is the final aesthetic judgment of nature an encouraging, life-affirming one that justifies all the scientific work that’s divorced the facts from our mental projections or will that judgment terrorize us worse than any grim vision of the world’s fundamental neutrality? Optimists like Richard Dawkins, Carl Sagan and Tyson think the wonders of nature are uplifting, but perhaps they’re spinning matters to protect science’s mystique and the secular humanistic myth of the progress of modern, science-centered societies. Perhaps the world’s objectification curses us not just with knowledge of many unpleasant facts of life, but with an experience of the monstrousness of all natural facts.

Neuroscience as Socio-Cognitive Pollution

Want evidence of the Semantic Apocalypse? Look no further than your classroom.

As the etiology of more and more cognitive and behavioural ‘deficits’ is mapped, more and more of what once belonged to the realm of ‘character’ is being delivered to the domain of the ‘medical.’ This is why professors and educators more generally find themselves institutionally obliged to make more and more ‘accommodations,’ as well as why they find their once personal relations with students becoming ever more legalistic, ever more structured to maximally deflect institutional responsibility. Educators relate with students in an environment that openly declares their institutional incompetence regarding medicalized matters, thus providing students with a failsafe means to circumvent their institutional authority. This short-circuit is brought about by the way mechanical, or medical, explanations of behaviour impact intuitive/traditional notions regarding responsibility. Once cognitive or behavioural deficits are redefined as ‘conditions,’ it becomes easy to argue that treating those possessing the deficit the same as those who do not amounts to ‘punishing’ them for something they ‘cannot help.’ The professor is thus compelled to ‘accommodate’ to level the playing field, in order to be moral.

On Blind Brain Theory, this trend is part and parcel of the more general process of ‘social akrasis,’ the becoming incompatible of knowledge and experience. The adaptive functions of morality turn on certain kinds of ignorance, namely, ignorance of the very kind of information driving medicalization. Once the mechanisms underwriting some kind of ‘character flaw’ are isolated, that character flaw ceases to be a character flaw, and becomes a ‘condition.’ Given pre-existing imperatives to grant assistance to those suffering conditions, behaviour once deemed transgressive becomes symptomatic, and moral censure becomes immoral. Character flaws become disabilities. The problem, of course, is that all transgressive behaviour—all behaviour period, in fact—can be traced back to various mechanisms, raising the question, ‘Where does accommodation end?’ Any disparity in classroom performance can be attributed to disparities between neural mechanisms.

The problem, quite simply, is that the tools in our basic socio-cognitive toolbox are adapted to solve problems in the absence of mechanical cognition—they literally require our blindness to certain kinds of facts to function reliably. We are primed ‘to hold responsible’ those who ‘could have done otherwise’—those who have a ‘choice.’ Choice, quite famously, requires some kind of fictional discontinuity between us and our precursors, a discontinuity that only ignorance and neglect can maintain. ‘Holding responsible,’ therefore, can only retreat before the advance of medicalization, insofar as the latter involves the specification of various behavioural precursors.

The whole problem of this short circuit—and the neuro-ethical mire more generally, in fact—can be seen as a socio-cognitive version of a visual illusion, where the atypical triggering of different visual heuristics generates conflicting visual intuitions. Medicalization stumps socio-cognition in much the same way the Müller-Lyer Illusion stumps the eye: It provides atypical (evolutionarily unprecedented, in fact) information, information that our socio-cognitive systems are adapted to solve without. Causal information regarding neurophysiological function triggers an intuition of moral exemption regarding behaviour that could never have been solved as such in our evolutionary history. Neuroscientific understanding of various behavioural deficits, however defined, cues the application of a basic, heuristic capacity within a historically unprecedented problem-ecology. If our moral capacities have evolved to solve problems neglecting the brains involved, to work around the lack of brain information, then it stands to reason that the provision of that information would play havoc with our intuitive problem-solving. Brain information, you could say, is ‘non-ecofriendly,’ a kind of ‘informatic pollutant’ in the problem-ecologies moral cognition is adapted to solve.

The idea that heuristic cognition generates illusions is now an old one. In naturalizing intentionality, Blind Brain Theory allows us to see how the heuristic nature of intentional problem-solving regimes means they actually require the absence of certain kinds of information to properly function. Adapted to solve social problems in the absence of any information regarding the actual functioning of the systems involved, our socio-cognitive toolbox literally requires that certain information not be available to function properly. The way this works can be plainly seen with the heuristics governing human threat detection, say. Since our threat detection systems are geared to small-scale, highly interdependent social contexts, the statistical significance of any threat information is automatically evaluated against a ‘default village.’ Our threat detection systems, in other words, are geared to problem-ecologies lacking any reliable information regarding much larger populations. To the extent that such information ‘jams’ reliable threat detection (incites irrational fears), one might liken such information to pollution, to something ecologically unprecedented that renders previously effective cognitive adaptations ineffective.
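To make the ‘default village’ point concrete, here is a minimal toy sketch (the numbers, the fixed memory budget, and the two-line heuristic are all illustrative assumptions of mine, not results from the literature): an agent that evaluates remembered incidents against a band-sized baseline will wildly inflate perceived risk once those incidents are sampled from a population of millions.

```python
import random

random.seed(0)

BAND_SIZE = 150          # the ancestral 'default village'
CITY_SIZE = 1_000_000    # a modern, media-connected population
P_VIOLENT = 0.001        # assumed per-person incident rate for the toy model
MEMORY_BUDGET = 20       # incidents the heuristic can retain and weigh

def incidents(population, rate):
    """Number of incidents occurring across a population in one period."""
    return sum(random.random() < rate for _ in range(population))

def perceived_risk(witnessed):
    """Toy heuristic: evaluate remembered incidents as though they all
    occurred within a default village of BAND_SIZE people."""
    return min(witnessed, MEMORY_BUDGET) / BAND_SIZE

# Ancestral case: the agent only ever hears of incidents in its own band.
band_estimate = perceived_risk(incidents(BAND_SIZE, P_VIOLENT))

# Modern case: media sample incidents from a vastly larger population,
# but the heuristic still normalizes against the default village.
city_estimate = perceived_risk(incidents(CITY_SIZE, P_VIOLENT))

print(f"actual incident rate:       {P_VIOLENT:.4f}")
print(f"perceived risk (band only): {band_estimate:.4f}")
print(f"perceived risk (media-fed): {city_estimate:.4f}")
```

The point of the toy is simply that nothing in the heuristic is broken; it is the information environment that has changed around it.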

I actually think ‘cognitive pollution’ is definitive of modernity, that all modern decision-making occurs in information environments, many of them engineered, that cut against our basic decision-making capacities. The ‘ecocog’ ramifications of neuroscientific information, however, promise to be particularly pernicious.

Our moral intuitions were always blunt instruments, the condensation of innumerable ancestral social interactions, selected for their consequences rather than their consistencies. Their resistance to any decisive theoretical regimentation—the mire that is ‘metaethics’—should come as no surprise. But throughout this evolutionary development, neurofunctional neglect remained a constant: at no point in our evolutionary history were our ancestors called on to solve moral problems possessing neurofunctional information. Now, however, that information has become an inescapable feature of our moral trouble-shooting, spawning ad hoc fixes that seem to locally serve our intuitions, while generating any number of more global problems.

A genuine social process is afoot here.

A neglect-based account suggests the following interpretation of what’s happening: As medicalization (biomechanization) continues apace, the social identity of the individual is progressively divided into the subject, the morally liable, and the abject, the morally exempt. Like a wipe in cinematic editing, the scene of the abject is slowly crawling across the scene of the subject, generating more and more breakdowns of moral cognition. Becoming abject doesn’t so much erase as displace liability: one individual’s exemption (such as you find in accommodation) from moral censure immediately becomes a moral liability for their compatriots. The paradoxical result is that even as we each become progressively more exempt from moral censure, we become progressively more liable to provide accommodation. Thus the slow accumulation of certain professional liabilities as the years wear on. Those charged with training and assessing their fellows will in particular face a slow erosion in their social capacity to censure—which is to say, evaluate—as accommodation and its administrative bureaucracies slowly continue to bloat, capitalizing on the findings of cognitive science.

The process, then, can be described as one where progressive individual exemption translates into progressive social liability: given our moral intuitions, exemptions for individuals mean liabilities for the crowd. Thus the paradoxical intensification of liability that exemption brings about: the process of diminishing performance liability is at once the process of increasing assessment liability. Censure becomes increasingly prone to trigger censure.

The erosion of censure’s public legitimacy is the most significant consequence of this socio-cognitive short-circuit I’m describing. Heuristic tool kits are typically whole package deals: we evolved our carrot problem-solving capacity as part of a larger problem-solving capacity involving sticks. As informatic pollutants destroy more and more of the stick’s problem-solving habitat, the carrots left behind will become less and less reliable. Thus, on a ‘zombie morality’ account, we should expect the gradual erosion of our social system’s ability to police public competence—a kind of ‘carrot drift.’

This is how social akrasis, the psychotic split between the nihilistic how and fantastic what of our society and culture, finds itself coded within the individual. Broken autonomy, subpersonally parsed. With medicalization, the order of the impersonal moves, not simply into the skull of the person, but into their performance as well. As the subject/abject hybrid continues to accumulate exemptions, it finds itself ever more liable to make exemptions. Since censure is communicative, the increasing liability of censure suggests a contribution, at least, to the increasing liability of moral communication, and thus, to the politicization of public interpersonal discourse.

How this clearly unsustainable trend ends depends on the contingencies of a socially volatile future. We should expect to witness the continual degradation in the capacity of moral cognition to solve problems in what amounts to an increasingly polluted information environment. Will we overcome these problems via some radical new understanding of social cognition? Or will this lead to some kind of atavistic backlash, the institution of some kind of informatic hygiene—an imposition of ignorance on the public? I sometimes think that the kind of ‘liberal atrocity tales’ I seem to endlessly encounter among my nonacademic peers point in this direction. For those ignorant of the polluting information, the old judgments obviously apply, and stories of students not needing to give speeches in public-speaking classes, or homeless individuals being allowed to dump garbage in the river, float like sparks from tongue to tongue, igniting the conviction that we need to return to the old ways, thus convincing who knows how many to vote directly against their economic interests. David Brooks, protégé of William F. Buckley and conservative columnist for The New York Times, often expresses amazement at the way the American public continues to drift to the political right, despite the way fiscally conservative reengineering of the market continues to erode their bargaining power. Perhaps the identification of liberalism with some murky sense of the process described above has served to increase the rhetorical appeal of conservatism…

The sense that someone, somewhere, needs to be censured.

The Metacritique of Reason

Kant

 

Whether the treatment of such knowledge as lies within the province of reason does or does not follow the secure path of a science, is easily to be determined from the outcome. For if, after elaborate preparations, frequently renewed, it is brought to a stop immediately it nears its goal; if often it is compelled to retrace its steps and strike into some new line of approach; or again, if the various participants are unable to agree in any common plan of procedure, then we may rest assured that it is very far from having entered upon the secure path of a science, and is indeed a merely random groping.  Immanuel Kant, The Critique of Pure Reason, 17.

The moral of the story, of course, is that this description of Dogmatism’s failure very quickly became an apt description of Critical Philosophy as well. As soon as others saw all the material inferential wiggle room in the interpretation of condition and conditioned, it was game over. Everything that damned Dogmatism in Kant’s eyes now characterizes his own philosophical inheritance.

Here’s a question you don’t come across every day: Why did we need Kant? Why did philosophy have to discover the transcendental? Why did the constitutive activity of cognition elude every philosopher before the 18th Century? The fact we had to discover it means that it was somehow ‘always there,’ implicit in our experience and behaviour, but we just couldn’t see it. Not only could we not see it, we didn’t even realize it was missing; we had no inkling we needed to understand it to understand ourselves and how we make sense of the world. Another way to ask the question of the inscrutability of the ‘transcendental,’ then, is to ask why the passivity of cognition is our default assumption. Why do we assume that ‘what we see is all there is’ when we reflect on experience?

Why are we all ‘naive Dogmatists’ by default?

Spinoza

It’s important to note that no one but no one disputes that it had to be discovered. This is important because it means that no one disputes that our philosophical forebears once uniformly neglected the transcendental, that it remained for them an unknown unknown. In other words, both the Intentionalist and the Eliminativist agree on the centrality of neglect in at least this one regard. The transcendental (whatever it amounts to) is not something that metacognition can readily intuit—so much so that humans engaged in thousands of years of ‘philosophical reflection’ without the least notion that it even existed. The primary difference is that the Intentionalist thinks they can overcome neglect via intuition and intellection, that theoretical metacognition (philosophical reflection), once alerted to the existence of the transcendental, suddenly somehow possesses the resources to accurately describe its structure and function. The Eliminativist, on the other hand, asks, ‘What resources?’ Lay them out! Convince me! And more corrosively still, ‘How do you know you’re not still blinkered by neglect?’ Show me the precautions!

The Eliminativist, in other words, pulls a Kant on Kant and demands what amounts to a metacritique of reason.

The fact is, short of this accounting of metacognitive resources and precautions, the Intentionalist has no way of knowing whether or not they’re simply a ‘Stage-Two Dogmatist,’ whether their ‘clarity,’ like the specious clarity of the Dogmatist, isn’t simply the product of neglect—a kind of metacognitive illusion in effect. For the Eliminativist, the transcendental (whatever its guise) is a metacognitive artifact. For them, the obvious problems the Intentionalist faces—the supernaturalism of their posits, the underdetermination of their theories, the lack of decisive practical applications—are all symptomatic of inquiry gone wrong. Moreover, they find it difficult to understand why the Intentionalist would persist in the face of such problems given only a misplaced faith in their metacognitive intuitions—especially when the sciences of the brain are in the process of discovering the actual constitutive activity responsible! You want to know what’s really going on ‘implicitly’? Ask a cognitive neuroscientist. We’re just toying with our heuristics out of school otherwise.

We know that conscious cognition involves selective information uptake for broadcasting throughout the brain. We also know that no information regarding the astronomically complex activities constitutive of conscious cognition as such can be so selected and broadcast. So it should come as no surprise whatsoever that the constitutive activity responsible for experience and cognition eludes experience and cognition—that the ‘transcendental,’ so-called, had to be discovered. More importantly, it should come as no surprise that this constitutive activity, once discovered, would be systematically misinterpreted. Why? The philosopher ‘reflects’ on experience and cognition, attempts to ‘recollect’ them in subsequent moments of experience and cognition, in effect, and realizes (as Hume did regarding causality, say) that the information available cannot account for the sum of experience and cognition: the philosopher comes to believe (beginning most famously with Kant) that experience does not entirely beget experience, that the constitutive constraints on experience somehow lie orthogonal to experience. Since no information regarding the actual neural activity responsible is available, and since, moreover, no information regarding this lack is available, the philosopher presumes these orthogonal constraints must conform to their metacognitive intuitions. Since the resulting constraints are incompatible with causal cognition, they seem supernatural: transcendental, virtual, quasi-transcendental, aspectual, what have you. The ‘implicit’ becomes the repository of otherworldly constraining or constitutive activities.

Philosophy had to discover the transcendental because of metacognitive neglect—on this fact, both the Intentionalist and the Eliminativist agree. The Eliminativist simply takes the further step of holding neglect responsible for the ontologically problematic, theoretically underdetermined, and practically irrelevant character of Intentionalism. Far from what Kant supposed, Critical Philosophy, in all its incarnations, historical and contemporary, simply repeats, rather than solves, these sins of Dogmatism. The reason for this, the Eliminativist says, is that it overcomes one metacognitive illusion only to run afoul of a cluster of others.

This is the sense in which Blind Brain Theory can be seen as completing as much as overthrowing the Kantian project. Though Kant took cognitive dogmatism, the assumption of cognitive simplicity and passivity, as his target, he nevertheless ran afoul of metacognitive dogmatism, the assumption of metacognitive simplicity and passivity. He thought—as his intellectual heirs still think—that philosophical reflection possessed the capacity to apprehend the superordinate activity of cognition, that it could accurately theorize reason and understanding. We now possess ample empirical grounds to think this is simply not the case. There’s the mounting evidence comprising what Princeton psychologist Emily Pronin has termed the ‘Introspection Illusion,’ direct evidence of metacognitive incompetence, but the fact is, every nonconscious function experimentally isolated by cognitive science illuminates another constraining/constitutive cognitive activity utterly invisible to philosophical reflection, another ignorance that the Intentionalist believes has no bearing on their attempts to understand understanding.

One can visually schematize our metacognitive straits in the following way:

Metacognitive Capacity

This diagram simply presumes what natural science presumes, that you are a complex organism biomechanically synchronized with your environments. Light hits your retina, sound hits your eardrum, neural networks communicate and behaviours are produced. Imagine your problem-solving power set on a swivel and swung 360 degrees across the field of all possible problems, which is to say problems involving lateral, or nonfunctionally entangled environmental systems, as well as problems involving medial, or functionally entangled enabling systems, such as those comprising your brain. This diagram, then, visualizes the loss and gain in ‘cognitive dimensionality’—the quantity and modalities of information available for problem solving—as one swings from the third-person lateral to the first-person medial. Dimensionality peaks with external cognition because of the power and ancient evolutionary pedigree of the systems involved. The dimensionality plunges for metacognition, on the other hand, because of medial neglect, the way structural complicity, astronomical complexity, and evolutionary youth effectively render the brain unwittingly blind to itself.

This is why the blue line tracking our assumptive or ‘perceived’ medial capacity in the figure peaks where our actual medial capacity bottoms out: with the loss in dimensionality comes the loss in the ability to assess reliability. Crudely put, the greater the cognitive dimensionality, the greater the problem-solving capacity, the greater the error-signalling capacity. And conversely, the less the cognitive dimensionality, the less the problem-solving capacity, the less the error-signalling capacity. The absence of error-signalling means that cognitive consumption of ineffective information will be routine, impossible to distinguish from the consumption of effective information. This raises the spectre of ‘psychological anosognosia’ as distinct from the clinical, the notion that the very cognitive plasticity that allowed humans to develop ACH thinking has led to patterns of consumption (such as those underwriting ‘philosophical reflection’) that systematically run afoul of medial neglect. Even though low dimensionality speaks to cognitive specialization, and thus to the likely ineffectiveness of cognitive repurposing, the lack of error-signalling means the information will be routinely consumed no matter what. Given this, one should expect ACH thinking (reason) to be plagued with the very kinds of problems that plague theoretical discourse outside the sciences now, the perpetual coming up short, the continual attempt to retrace steps taken, the interminable lack of any decisive consensus…

Or what Kant calls ‘random groping.’
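For anyone reading without the figure, the following sketch renders the relationship schematically (the curve shapes are invented solely to match the verbal description above; nothing is fit to data): actual capacity falls off as problem solving swings from the lateral to the medial, while assumptive capacity peaks precisely where actual capacity bottoms out, since no error signal registers the loss.

```python
import numpy as np
import matplotlib.pyplot as plt

# Sweep from fully lateral (environmental) problems at 0 degrees
# to fully medial (metacognitive) problems at 180 degrees.
theta = np.linspace(0, 180, 500)

# Purely schematic curves: a sigmoid drop-off for actual capacity, and an
# assumptive curve that climbs as error-signalling disappears.
actual = 1.0 / (1.0 + np.exp((theta - 90) / 15))
assumptive = 0.4 + 0.55 / (1.0 + np.exp(-(theta - 90) / 15))

plt.plot(theta, actual, label="actual medial capacity")
plt.plot(theta, assumptive, "--", label="assumptive ('perceived') capacity")
plt.xlabel("lateral (environmental)  to  medial (metacognitive)")
plt.ylabel("cognitive dimensionality / problem-solving capacity")
plt.legend()
plt.tight_layout()
plt.show()
```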

The most immediate, radical consequence of this 360-degree view is that the opposition between the first-person and third-person disappears. Since all the apparently supernatural characteristics rendering the first-person naturalistically inscrutable can now be understood as artifacts of neglect—illusions of problem-solving sufficiency—all the ‘hard problems’ posed by intentional phenomena simply evaporate. The metacritique of reason, far from pointing a way to any ‘science of the transcendental,’ shows how the transcendental is itself a dogmatic illusion, how cryptic things like the ‘a priori’ are obvious expressions of medial neglect, sources of constraint ‘from nowhere’ that baldly demonstrate our metacognitive incapacity to recognize our metacognitive incapacity. For all the prodigious problem-solving power of logic and mathematics, a quick glance at the philosophy of either is enough to assure you that no one knows what they are. Blind Brain Theory explains this remarkable contrast of insight and ignorance, how we could possess tools so powerful without any decisive understanding of the tools themselves.

The metacritique of reason, then, leads to what might be called ‘pronaturalism,’ a naturalism that can be called ‘progressive’ insofar as it continues to eschew the systematic misapplication of intentional cognition to domains that it cannot hope to solve—that continues the process of exorcising ghosts from the machinery of nature. The philosophical canon swallowed Kant so effortlessly that people often forget he was attempting to put an end to philosophy, to found a science worthy of the name, one which grounded both the mechanical and the ghostly. By rendering the ghostly the formal condition of any cognition of the mechanical, however, he situated his discourse squarely in the perpetually underdetermined domain of philosophy. His failure was inevitable.

The metacritique of reason makes the very same attempt, only this time anchored in the only really credible source of theoretical cognition we possess: the sciences. It allows us to peer through the edifying fog of our intentional traditions and to see ourselves, at long last, as wholly continuous with crazy shit like this…

Filamentary Map

 

Zombie Interpretation: Eliminating Kriegel’s Asymmetry Argument

Could zombie versions of philosophical problems, versions that eliminate all intentionality from the phenomena at issue, shed any light on those problems?

The only way to find out is to try.

Since I’ve been railing so much about the failure of normativism to account for its evidential basis, I thought it worthwhile to consider the work of a very interesting intentionalist philosopher, Uriah Kriegel, who sees the need quite clearly. The question could not be more simple: What justifies philosophical claims regarding the existence and nature of intentional phenomena? For Kriegel the most ‘natural’ and explanatorily powerful answer is observational contact with experiential intentional states. How else, he asks, can we come to know our intentional states short of experiencing them? In what follows I propose to consider two of Kriegel’s central arguments against the backdrop of ‘zombie interpretations’ of the very activities he considers, and in doing so, I hope to undermine not only his argument, but the general abductive strategy one finds intentionalists taking throughout philosophy more generally, the presumption that only theoretical accounts somehow involving intentionality can account for intentional phenomena.

In his 2011 book, The Sources of Intentionality, Kriegel attempts to remedy semantic externalism’s failure to naturalize intentionality via a carefully specified return to phenomenology, an account of how intentional concepts arise from our introspective ‘observational contact’ with mental states possessing intentional content. Experience, he claims, is intrinsically intentional. Introspective contact with this intrinsic intentionality is what grounds our understanding of intentionality, providing ‘anchoring instances’ for our various intentional concepts.

As Kriegel is quick to point out, such a thesis implies a crucial distinction between experiential intentionality, the kind of intentionality we experience, and nonexperiential intentionality, the kind of intentionality we ascribe without experiencing. This leads him to Davidson’s account of radical interpretation, and to what he calls the “remarkable asymmetry” between various ascriptions of intentionality. On radical interpretation as Davidson theorizes it, our attempts to interpret one another are so evidentially impoverished that interpretative success fundamentally requires assuming the rationality of our interlocutor—what he terms ‘charity.’ The ascription of some intentional state to another turns on the prior assumption that he or she believes, desires, fears and so on as they should, otherwise we would have no way of deciding among the myriad interpretations consistent with the meagre behavioural data available. Kriegel argues “that while the Davidsonian insight is cogent, it applies only to the ascription of non-experiential intentionality, as well as the ascription of experiential intentionality to others, but not to the ascription of experiential intentionality to oneself” (29). We require charity when it comes to ascribing varieties of intentionality to signs, others, and even our nonconscious selves, but not when it comes to ascribing intentionality to our own experiences. So why this basic asymmetry? Why do we have to attribute true beliefs and rational desires—take the ‘intentional stance’—with regards to others and our nonconscious selves, and not our consciously experienced selves? Why do we seem to be the one self-interpreting entity?

Kriegel thinks observational contact with our actual intentionality provides the most plausible answer, that “[i]nsofar as it is appropriate to speak of data for ascription here, the only relevant datum seems to be a certain deliverance of introspection” (33). He continues:

There is thus a contrast between the mechanics of first-person [experiential]-intentional ascription and third-person … intentional ascription. The former is based on endorsement of introspective seemings, the latter on causal inference from behavior. This is hardly deniable: as noted, when you ascribe to yourself a perceptual experience as of a table, you do not observe putative causal effects of your experience and infer on their basis the existence of a hidden experiential cause. Rather, you seem to make the ascription on the basis of observing, in some (not unproblematic) sense, the experience itself—observing, that is, the very state which you ascribe. The Sources of Intentionality, 33

The mechanics of first-person and third-person intentional cognition differ in that the latter requires explanatory posits like ‘hidden mental causes.’ Since self-ascription involves nothing hidden, no interpretation is required. And it is this elegant and intuitive explanation of first-person interpretative asymmetry that provides abductive warrant for the foundational argument of the text:

1. All the anchoring instances of intentionality are such that we have observational contact with them;

2. The only instances of intentionality with which we have observational contact are experiential-intentional states; therefore,

3. All anchoring instances of intentionality are experiential-intentional states. 38

Given the abductive structure of Kriegel’s argument, those who dispute either (1) or (2) need a better explanation of asymmetry. Those who deny the anchoring instance model of concept acquisition will target (1), arguing, say, that concept acquisition is an empirical process requiring empirical research. Kriegel simply punts on this issue, claiming we have no reason to think that concept acquisition, no matter how empirically detailed the story turns out to be, is insoluble at this (armchair) level of generality. Either way, his position still enjoys the abductive warrant of explaining asymmetry.

For Kriegel, (2) is the most philosophically controversial premise, with critics either denying that we have any ‘observational contact’ with experiential-intentional states, or denying that we have observational contact with only such experiential-intentional states. The problem faced by both angles, Kriegel points out, is that asymmetry still holds whether one denies (2) or not: we can ascribe intentional experiences to ourselves without requiring charity. If observational contact—the ‘natural explanation,’ as Kriegel calls it—doesn’t lie at the root of this capacity, then what does?

For an eliminativist such as myself, however, the problem is more a matter of definition. I actually agree that suffering a certain kind of observational contact, namely, one that systematically neglects tremendous amounts of information, can anchor our philosophical concept of intentionality. Kriegel is fairly dismissive of eliminativism in The Sources of Intentionality, and even then the eliminativism he dismisses acknowledges the existence of intentional experiences! As he writes, “if eliminativism cannot be acceptable unless a relatively radical interpretation of cognitive science is adopted, then eliminativism is not in good shape” (199). The problem is that this assumes cognitive science is itself in fine shape, when Kriegel himself emphatically asserts “that it is not doing fine” (A Hesitant Defence of Introspection, 3). Cognitive science is fraught with theoretical dispute, certainly more than enough (and for long enough!) to seriously entertain the possibility that something radical has been overlooked.

So the radicality of eliminativism is neither here nor there regarding its ‘shape.’ The real problem faced by eliminativism, which Kriegel glosses, is abductive. Eliminativism simply cannot account for what seem to be obvious intentional phenomena.

Which brings me to zombies and what these kinds of issues might look like in their soulless, shuffling world…

In the zombie world I’m imagining, what Sellars called the ‘scientific image of man’ is the only true image. There quite simply is no experience or meaning or normativity as we intentionally characterize these things in our world. So zombies, in their world, possess only systematic causal relations to their environments. No transcendental rules or spooky functions haunt their brains. No virtual norms slumber in their community’s tacit gut. ‘Zombie knowledge’ is simply a matter of biomechanistic systematicity, having the right stochastic machinery to solve various problem ecologies. So although they use sounds to coordinate their behaviours, the efficacies involved are purely causal, a matter of brains conditioning brains. ‘Zombie language,’ then, can be understood as a means of resolving discrepancies via strings of mechanical code. Given only a narrow band of acoustic sensitivity, zombies constantly update their covariational schema relative to one another and their environments. They are ‘communicatively attuned.’

So imagine a version of radical zombie interpretation, where a zombie possessing one code—Blue—is confronted by another zombie possessing another code—Red. And now let’s ask the zombie version of Davidson’s question: What would it take for these zombies to become communicatively attuned?

Since the question is one of overcoming difference, it serves to recall what our zombies share: a common cognitive biology and environment. An enormous amount of evolutionary stage-setting underwrites the encounter. They come upon one another, in other words, differing only in code. And this is just to say that radical zombie interpretation occurs within a common attunement to each other and the world. They share both a natural environment and the sensorimotor systems required to exploit it. They also share powerful ‘brain-reading’ systems, a heuristic toolbox that allows them to systematically coordinate their behaviour with that of their zombie fellows without any common code. Even more, they share a common code apparatus, which is to say, the same system adapted to coordinate behaviours via acoustic utterances.

Given this ‘pre-established harmony’—common environment, common brain-reading and code-using biology—how might a code Blue zombie come to interpret (be systematically coordinated with) the utterances of a code Red zombie?

Since both zombies were once infant zombies, each has already undergone ‘code conditioning’; they have already tested innumerable utterances against innumerable environments, isolating and preserving robust covariances (and structural operators) on the way to acquiring their respective codes. At the same time, their brain-reading systems allow them to systematically coordinate their behaviours to some extent, to find a kind of basic attunement. All that remains is a matter of covariant sound substitution, of swapping the sounds belonging to code Blue for the sounds belonging to code Red, a process requiring little more than testing code-specific covariations against real-time environments. Perhaps radical zombie interpretation is not so radical after all!
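Purely to fix ideas, here is a minimal sketch of ‘covariant sound substitution’ (the environmental states, the two codes, and the tallying scheme are all invented for illustration): a code Blue zombie aligns Red utterances with its own code simply by counting which Red sound covaries with which shared environmental state, then swapping in its own sound for that state.

```python
from collections import Counter, defaultdict
import random

random.seed(1)

STATES = ["berry", "wolf", "water", "fire"]

# Each 'code' is just a mapping from shared environmental states to
# arbitrary acoustic tokens - no meanings, only reliable covariance.
BLUE_CODE = {"berry": "ka", "wolf": "zu", "water": "mo", "fire": "ti"}
RED_CODE  = {"berry": "pa", "wolf": "ne", "water": "lo", "fire": "su"}

# Radical zombie interpretation: Blue tallies which Red sound covaries
# with which state across a run of shared encounters.
tallies = defaultdict(Counter)
for _ in range(200):
    state = random.choice(STATES)        # the shared environment
    red_utterance = RED_CODE[state]      # Red responds to the same state
    tallies[red_utterance][state] += 1   # Blue records the covariance

# Blue's 'translation': substitute its own sound for the state that most
# reliably covaries with each Red sound.
translation = {
    red_sound: BLUE_CODE[state_counts.most_common(1)[0][0]]
    for red_sound, state_counts in tallies.items()
}

print(translation)   # e.g. {'pa': 'ka', 'ne': 'zu', 'lo': 'mo', 'su': 'ti'}
```

Nothing in the loop ascribes anything to anyone; behavioural coordination falls out of counting covariances against a shared world.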

The first thing to note is how the reliable coordination of behaviours is all that matters in this process: idiosyncrasies in their respective implementations of Red or Blue matter only insofar as they impact this coordination. The ‘synonymy’ involved is entirely coincident because it is entirely physical.

The second thing to note is how pre-established harmony is simply a structural feature of the encounter. These are just the problems that nature has already solved for our two intrepid zombies, what has to be the case for the problem of radical zombie interpretation to even arise. At no point do our zombies ‘attribute’ or ‘ascribe’ anything to their counterpart. Sensing another zombie simply triggers their zombie-brain-reading machinery, which modifies their behaviour and so on. There’s no ‘charity’ involved, no ‘attribution of rationality,’ just the environmental cuing of heuristic systems adapted to solve certain zombie-social environments.

Of course each zombie resorts to its brain-reading systems to behaviourally coordinate with its counterpart, but this is an automatic feature of the encounter, what happens whenever zombies detect zombies. Each engages in communicative troubleshooting behaviour in the course of executing some superordinate disposition to communicatively coordinate. Brains are astronomically complicated mechanisms—far too complicated for brains to intuit them as such. Thus the radically heuristic nature of zombie brain-reading. Thus the perpetual problem of covariational discrepancies. Thus the perpetual expenditure of zombie neural resources on the issue of other zombies.

Leading us to a third thing of note: how the point of radical zombie interpretation is to increase behavioural possibilities by rendering behavioural interactions more systematic. What makes this last point so interesting lies in the explanation it provides regarding why zombies need not first decode themselves to decode others. As a robust biomechanical system, ‘self-systematicity’ is simply a given. The whole problem of zombie interpretation resides in one zombie gaining some systematic purchase on other zombies in an effort to create some superordinate system—a zombie community. Asymmetry, in other words, is a structural given.

In radical zombie interpretation, then, not only do we have no need for ‘charity,’ we somehow manage to circumvent all the controversies pertaining to radical human interpretation.

Now of course the great zombie/human irony is that humans are everything that zombies are and more. So the question immediately becomes one of why radical human interpretation should prove to be so problematic when the radical zombie interpretation of the same problem is not. While the zombie story certainly entails a vast number of technical details, it does not involve anything conceptually occult or naturalistically inexplicable. If mere zombies could avoid these problems using nothing more than zombie resources, why should humans find themselves perennially confounded?

This really is an extraordinary question. The intentionalist will cry foul, of course, reference all the obvious intentional phenomena pertaining to the communicative coordination of humans, things like rules and reasons and references and so on, and ask how this zombie fairy tale could possibly explain any of them. So even though this story of zombie interpretation provides, in outline at least, the very kind of explanation that Kriegel demands, it quite obviously throws out the baby with the bathwater in the course of doing so. Asymmetry becomes perspicuous, but now the whole of human intentional activity becomes impossible to explain (assuming that anything at this level has ever been genuinely explained). Zombie interpretation, in other words, wins the battle by losing the war.

It’s worth noting here the curious structure of the intentionalist’s abductive case. The idea is that we need a theoretical intentional account to explain human intentional activity. What warrants theoretical supernaturalism (or philosophy traditionally construed) is the matter-of-fact existence of everyday intentional phenomena (an existence that Kriegel thinks so obvious that on a couple of occasions he adduces arguments he claims he doesn’t need simply to bolster his case against skeptics such as myself). The curiosity, however, is that the ‘matter-of-fact existence of everyday intentional phenomena’ that at once “underscores the depth of eliminativism’s (quasi-) empirical inadequacy” (199) and motivates theoretical intentional accounts is itself a matter of theoretical controversy—just not for intentionalists! The problem with abductive appeals like Kriegel’s, in other words, is the way they rely on a prior theory of intentionality to anchor the need for theories of intentionality more generally.

This is what makes radical zombie interpretation out-and-out eerie. Because it does seem to be the case that zombies could achieve at least the same degree of communicative coordination absent any intentional phenomena at all. When you strip away the intentional glamour, when you simply look at the biology and the behaviour, it becomes hard to understand just what it is that humans do that requires anything over and above zombie biology and behaviour. Since some kind of gain in systematicity is the point of communicative coordination, it makes sense that zombies need not troubleshoot themselves in the course of troubleshooting other zombies. So it remains the case that radical zombie interpretation, analyzed at the same level of generality, seems to have a much easier time explaining the same degree of human communicative coordination sans bébé than does radical human interpretation, which, quite frankly, strands us with a host of further, intractable mysteries regarding things like ‘ascription’ and ‘emergence’ and ‘anomalous causation.’

What could be going on? When it comes to Kriegel’s ‘remarkable asymmetry’ should we simply put our ‘zombie glasses’ on, or should we tough it out in the morass of intractable second-order accounts of intentionality on the basis of some ineliminable intentional remainder?

As Three Pound Brain regulars know, the eliminativism I’m espousing here is quite unique in that it arises, not out of concerns regarding the naturalistic inscrutability of intentional phenomena, but out of a prior, empirically grounded account of intentionality, what I’ve been calling Blind Brain Theory. On Blind Brain Theory the impasse described above is precisely the kind of situation we should expect given the kind of metacognitive capacities we possess. By its lights, zombies just are humans, and so-called intentional phenomena are simply artifacts of metacognitive neglect, what high-dimensional zombie brain functions ‘look like’ when low-dimensionally sampled for deliberative metacognition. Brains are simply too complicated to be effectively solved by causal cognition, so we evolved specialized fixes, ways to manage our brain and others in the absence of causal cognition. Since the high-dimensional actuality of those specialized fixes outruns our metacognitive capacity, philosophical reflection confuses what little it can access with everything required, and so is duped into the entirely natural (but nonetheless extraordinary) belief that it possesses ‘observational contact’ with a special, irreducible order of reality. Given this, we should expect that attempts to theoretically solve radical interpretation via our ‘mind’ reading systems would generate more mystery than they would dispel.

Blind Brain Theory, in other words, short circuits the abductive strategy of intentionalism. It doesn’t simply offer a parsimonious explanation of asymmetry; it proposes to explain all so-called intentional phenomena. It tells us what they are, why we’re prone to conceive them the naturalistically incompatible ways we do, and why these conceptions generate the perplexities they do.

To understand how it does so, it’s worth considering what Kriegel himself thinks is the ‘weak link’ in his attempt to source intentionality: the problem of introspective access. In The Sources of Intentionality, Kriegel is at pains to point out that “one need not be indulging in any mystery-mongering about first-person access” to provide the kind of experiential observational contact that he needs. No version of introspective incorrigibility follows “from the assertion that we have introspective observational contact with our intentional experiences” (34). Even still, the question of just what kind of observational contact is required is one that he leaves hanging.

In his 2013 paper, ‘A Hesitant Defence of Introspection,’ Kriegel attempts to tie down this crucial loose thread by arguing for what he calls ‘introspective minimalism,’ an account of human introspective capacity that can weather what he terms ‘Schwitzgebel’s Challenge,’ essentially, the question (arising out of Eric Schwitzgebel’s watershed Perplexities of Consciousness) of whether our introspective capacity, whatever it consists in, possesses any cognitive scientific value. He begins by arguing for the pervasive, informal role that introspection plays in the ‘context of discovery’ of the cognitive sciences. The question, however, is how introspection fits into the ‘context of justification’—the degree to which it counts as evidence as opposed to mere ‘inspiration.’ Given the obvious falsehood of what he terms ‘introspective maximalism,’ he sets out to save some minimalist version of introspection that can serve some kind of evidential role. He turns to olfaction to provide an analogy to the kind of minimal justification that introspection is capable of providing:

Suppose, for instance, that introspection turns out to be as trustworthy as our sense of smell, that is, as reliable and as potent as a normal adult human’s olfactory system. Then Introspective minimalism would be vindicated. Normally, when we have an olfactory experience as of raspberries, it is more likely that there are raspberries in our immediate environment (than if we do not have such an experience). Conversely, when there are raspberries in our immediate environment, it is more likely that we would have an olfactory experience as of raspberries (than if there are none). So the ‘equireliability’ of olfaction and introspection would support introspective minimalism. Such equireliability is highly plausible. 8

Kriegel’s argument is simply that introspecting some phenomenology reliably indicates the presence of that phenomenology the same way smelling raspberries reliably indicates the presence of raspberries. This is all that’s required, he thinks, to assert “that introspection affords us observational contact with our mental life” (13), and is thus “epistemically indispensable for any mature understanding of the mind” (13). It’s worth noting that Schwitzgebel is actually inclined to concede the analogy, suggesting that his own “dark pessimism about some of the absolutely most basic and pervasive features of consciousness, and about the future of any general theory of consciousness, seems to be entirely consistent with Uriah’s hesitant defense of introspection” (“Reply to Kriegel, Smithies, and Spener,” 4). He agrees, then, that introspection reliably tells us that we possess a phenomenology; he just doubts it reliably tells us what it consists in. Kriegel, on the other hand, thinks his introspective minimalism gives him the kind of ‘observational contact’ he needs to get his abductive asymmetry argument off the ground.
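Put in bare conditional-probability terms (my formalization, not Kriegel’s), the ‘equireliability’ claim and the minimalist conclusion drawn from it look something like this:

```latex
% Olfactory reliability, per the raspberry example (assumes amsmath for \text):
\[
P(\text{raspberries} \mid \text{smell as of raspberries})
  > P(\text{raspberries} \mid \text{no such smell}),
\qquad
P(\text{smell as of raspberries} \mid \text{raspberries})
  > P(\text{smell as of raspberries} \mid \text{no raspberries}).
\]

% Introspective minimalism, by analogy:
\[
P(\text{phenomenology } \varphi \mid \text{introspecting } \varphi)
  > P(\text{phenomenology } \varphi \mid \text{no such introspective seeming}).
\]
```

The question pursued below is whether the analogue of the raspberry, the brain activity introspection actually tracks, is anything like simple enough for that second inequality to carry the weight Kriegel needs.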

But does it?

Once again, it pays to flip to the zombie perspective. Given that the zombie olfactory system is a specialized system adapted to the detection of chemical residues in the immediate environment, one might expect the zombie olfactory system would reliably detect the chemical residue left by raspberries. Given that the zombie introspective system is a specialized system adapted to the detection of brain events, one might expect the zombie introspective system would reliably detect those brain events. The first system reliably allows zombies to detect raspberries, and the second system reliably allows zombies to detect activity in various parts of its zombie brain.

On this way of posing the problem, however, the disanalogy between the two systems all but leaps out at us. In fact, it’s hard to imagine two more disparate cognitive tasks than detecting something as simple as the chemical signature of raspberries versus something as complex as the machinations of the zombie brain. In point of fact, the brain is so astronomically complicated, it seems all but assured that zombie introspective capacity would be both fractionate and heuristic in the extreme, that it would consist of numerous fixes geared to a variety of problem-ecologies.

One way to possibly repair the analogy would be to scale up the complexity of the problem faced by olfaction. So it’s obvious, to give an example, that the information available for olfaction is far too low-dimensional, far too problem-specific, to anchor theoretical accounts of the biosphere. Then, on this repaired analogy, we can say that just as zombie olfaction isn’t geared to the theoretical solution of the zombie biosphere, but rather to the detection of certain environmental obstacles and opportunities, it is almost certainly the case that zombie introspection isn’t geared to the theoretical solution of the zombie brain, but rather to more specific, environmentally germane tasks. Given this, we have no reason whatsoever to presume that what zombies metacognize and report possesses any ‘reliability and potency’ beyond very specific problem-ecologies—the same as with olfaction. On zombie introspection, then, we have no more reason to think that zombies could possibly accurately metacognize the structure of their brain than they could accurately smell the structure of the world.

And this returns us back to the whole question of Kriegel’s notion of ‘observational contact.’ Kriegel realizes that ‘introspection’ isn’t simply an all or nothing affair, that it isn’t magically ‘self-intimating’ and therefore admits of degrees of reliability—this is why he sets out to defend his minimalist brand. But he never pauses to seriously consider the empirical requirements of even such minimal introspective capacity.

In essence, what he’s claiming is that the kind of ‘observational contact’ available to philosophical introspection warrants complicating our ontology with a wide variety of (supernatural) intentional phenomena. Introspective minimalism, as he terms it, argues that we can metacognize some restricted set of intentional entities/relations with the same reliability that we cognize natural phenomena. We can sniff these things out, so it stands to reason that such things exist to be sniffed, that introspecting a phenomenology increases the chances that such phenomenology exists (as introspected). With zombie introspection, however, the analogy between olfaction and metacognition strained credulity given the vast disproportion in complexity between olfactory and metacognitive phenomena. It’s difficult to imagine how any natural system could possibly even begin to accurately metacognize the brain.

The difference Kriegel would likely press, however, is that we aren’t mindless zombies. Human metacognition, in other words, isn’t so much concerned with the empirical particulars of the brain as with the functional particulars of the conscious mind. Even though the notion of accurate zombie introspection is obviously preposterous, the notion of accurate human metacognition would seem to be a different question altogether, the question of what a human introspective capacity requires to accurately metacognize human ‘phenomenology’ or ‘mind.’

The difficulty here, famously, is that there seems to be no noncircular way to answer this question. Because we can’t find intentional phenomena anywhere in the natural world, theoretical metacognition monopolizes our every attempt to specify their nature. This effectively renders assessing the reliability of such metacognitive exercises impossible apart from their ability to solve various kinds of problems. And the difficulty here is that the long history of introspectively motivated philosophical theorization (as opposed to other varieties of metacognition) regarding the nature of the intentional has only generated more problems. For some reason, the kind of metacognition involved in ‘philosophical reflection’ only seems to make matters worse when it comes to questions of intentional phenomena.

The zombie account of this second impasse is at once parsimonious and straightforward: phenomenology (or mind or what have you) is the smell, not the raspberry—that would be some systematic activity in the brain. It is absurd to think any evolved brain, zombie or human, could accurately cognize its own biomechanical operations the way it cognizes causal events in its environment. Kriegel himself agrees to this:

In fact cognitive science can partly illuminate why our introspective grasp of our inner world can be expected to be considerably weaker than our perceptual grasp of the external world. It is well-established that much of our perceptual grasp of the external world relies on calibration of information from different perceptual modalities. Our observation of our internal world, however, is restricted to a single source of information, and not the most powerful to begin with. (13)

And this is but one reason why the dimensionality of the mental is so low compared to the environmental. Given the evolutionary youth of human metacognition, the astronomical complexity of the human nervous system, not to mention the problems posed by structural complicity, we should suppose that our metacognitive capacity evolved opportunistically, that it amounts to a metacognitive version of what Todd and Gigerenzer (2012) would call a ‘heuristic toolbox,’ a collection of systems geared to solve specific problem-ecologies. Since we neglect this heuristic toolbox, we remain oblivious to the fact we’re using a given cognitive tool at all, let alone the limits of its effectiveness. Given that systematic theoretical reflection of the kind philosophers practice is an exaptation from cognitive capacities that predate recorded history, the adequacy of Kriegel’s ‘deliverances’ assumes that our evolved introspective capacity can solve unprecedented questions. This is a very real empirical question. For if it turns out that the problems posed by theoretical reflection are not the problems that intentional cognition can solve, neglect means we would have no way of knowing short of actual problem solving, the solution of problems that plainly can be solved. The inability to plainly solve a problem—like the mind-body problem, say—might then be used as a way to identify where we have been systematically misapplying certain tools, asking information adapted to the solution of some specific problem to contribute to the solution of a very different kind of problem.

Kriegel agrees that self-ascriptions involve seemings, that we are blind to the causes of the mental, and that introspection is likely as low-dimensional as a smell, yet he nevertheless maintains on abductive grounds that observational contact with experiential intentionality sources our concepts of intentionality. But it is becoming difficult to understand what it is that’s being explained, or how simply adding inexplicable entities to explanations that bear all the hallmarks of heuristic misapplication is supposed to provide any real abductive warrant at all. Certainly it’s intuitive, powerfully so given we neglect certain information, but then so is geocentrism. The naturalist project, after all, is to understand how we are our brain and environment, not how we are more than our brain and environment. That is a project belonging to a more blinkered age.

And as it turns out, certain zombies in the zombie world hold parallel positions. Because zombie metacognition has no access to the impoverished and circumstantially specialized nature of the information it accesses, many zombies process the information they receive the way they would other information, and verbally report the existence of queerly structured entities somehow coinciding with the function of their brain. Since the solving systems involved possess no access to the high-dimensional, empirical structure of the neural systems they actually track, these entities are typically characterized by missing dimensions, be it causality, temporality, or materiality. The fact that these dimensions are neglected disposes these particular zombies to function as if nothing were missing at all—as if certain ghosts, at least, were real.

Yes. You guessed it. The zombies have philosophy too.

The Asimov Illusion

Could believing in something so innocuous, so obvious, as a ‘meeting of the minds’ destroy human civilization?

Noocentrism has a number of pernicious consequences, but one in particular has been nagging me of late: the way assumptive agency gulls people into thinking they will ‘reason’ with AIs. Most understand Artificial Intelligence in terms of functionally instantiated agency, as if some machine will come to experience this, and so coordinate with us the way we think we coordinate amongst ourselves—which is to say, rationally. Call this the ‘Asimov Illusion,’ the notion that the best way to characterize the interaction between AIs and humans is the way we characterize our own interactions. That AIs, no matter how wildly divergent their implementation, will somehow, functionally at least, be ‘one of us.’

If Blind Brain Theory is right, this just ain’t going to be how it happens. By its lights, this ‘scene’ is actually the product of metacognitive neglect, a kind of philosophical hallucination. We aren’t even ‘one of us’!

Obviously, theoretical metacognition requires the relevant resources and information to reliably assess the apparent properties of any intentional phenomena. In order to reliably expound on the nature of rules, Brandom, for instance, must possess both the information (understood in the sense of systematic differences making systematic differences) and the capacity to do so. Since intentional facts are not natural facts, cognition of them fundamentally involves theoretical metacognition—or ‘philosophical reflection.’ Metacognition requires that the brain somehow get a handle on itself in behaviourally effective ways. It requires the brain somehow track its own neural processes. And just how much information is available regarding the structure and function of the underwriting neural processes? Certainly none involving neural processes, as such. Very little, otherwise. Given the way experience occludes this lack of information, we should expect that metacognition would be systematically duped into positing low-dimensional entities such as qualia, rules, hopes, and so on. Why? Because, like Plato’s prisoners, it is blind to its blindness, and so confuses shadows for things that cast shadows.

On BBT, what is fundamentally going on when we communicate with one another is physical: we are quite simply doing things to each other when we speak. No one denies this. Likewise, no one denies language is a biomechanical artifact, that short of contingent, physically mediated interactions, there’s no linguistic communication, period. BBT’s outrageous claim is that nothing more is required, that language, like lungs or kidneys, discharges its functions in an entirely mechanical, embodied manner.

It goes without saying that this, as a form of eliminativism, is an extremely unpopular position. But it’s worth noting that its unpopularity lies in its refusal to go beyond the point of maximal consensus—the natural scientific picture—when it comes to questions of cognition. Questions regarding intentional phenomena, the thinking goes, are quite clearly where science ends and philosophy begins. Even though intentional phenomena obviously populate the bestiary of the real, they are naturalistically inscrutable. Thus the dialectical straits of eliminativism: the very grounds motivating it leave it incapable of accounting for intentional phenomena, and so easily outflanked by inferences to the best explanation.

As an eliminativism that eliminates via the systematic naturalization of intentional phenomena, Blind Brain Theory blocks what might be called the ‘Abductive Defence’ of Intentionalism. The domains of second-order intentional facts posited by Intentionalists can only count as ‘best explanations’ of first-order intentional behaviour in the absence of any plausible eliminativistic account of that same behaviour. So for instance, everyone in cognitive science agrees that information, minimally, involves systematic differences making systematic differences. The mire of controversy that embroils information beyond this consensus turns on the intuition that something more is required, that information must be genuinely semantic to account for any number of different intentional phenomena. BBT, however, provides a plausible and parsimonious way to account for these intentional phenomena using only the minimal, consensus view of information given above.

This is why I think the account is so prone to give people fits, to restrict their critiques to cloistered venues (as seems to be the case with my Negarestani piece two weeks back). BBT is an eliminativism based on the biology of the brain, a positive thesis that possesses far-ranging negative consequences. As such, it requires that Intentionalists account for a number of things they would rather pass over in silence, such as the question of what evidences their position. The old, standard dismissals of eliminativism simply do not work.

What’s more, by clearing away the landfill of centuries of second-order intentional speculation in philosophy, it provides a genuinely new, entirely naturalistic way of conceiving the intentional phenomena that have baffled us for so long. So on BBT, for instance, ‘reason,’ far from being ‘liquidated,’ ceases to be something supernatural, something that mysteriously governs contingencies independently of contingencies. Reason, in other words, is embodied as well, something physical.

The tradition has always assumed otherwise because metacognitive neglect dupes us into confusing our bare inkling of ourselves with an ‘experiential plenum.’ Since what low-dimensional scraps we glean seem to be all there is, we attribute efficacy to them. We assume, in other words, noocentrism; we conclude, on the basis of our ignorance, that the disembodied somehow drives the embodied. The mathematician, for instance, has no inkling of the biomechanics involved in mathematical cognition, and so claims that no implementing mechanics are relevant whatsoever, that their cogitations arise ‘a priori’ (which on BBT amounts to little more than a fancy way of saying ‘inscrutable to metacognition’). Given the empirical plausibility of BBT, however, it becomes difficult not to see such claims of ‘functional autonomy’ as being of a piece with vulgar claims regarding the spontaneity of free will, and difficult not to conclude that the structural similarity between ‘good’ intentional phenomena (those we consider ineliminable) and ‘bad’ (those we consider preposterous) is likely no coincidence. Since we cannot frame these disembodied entities and relations against any larger backdrop, we have difficulty imagining how it could be ‘any other way.’ Thus, the Asimov Illusion, the assumption that AIs will somehow implement disembodied functions, ‘play by the rules’ of the ‘game of giving and asking for reasons.’

BBT lets us see this as yet more anthropomorphism. The high-dimensional, which is to say, embodied, picture is nowhere near so simple or flattering. When we interact with an Artificial Intelligence we simply become another physical system in a physical network. The question of what kind of equilibrium that network falls into turns on the systems involved, but it seems safe to say that the most powerful system will have the most impact on the network as a whole. End of story. There’s no room for Captain Kirk working on a logical tip from Spock in this picture, any more than there’s room for benevolent or evil intent. There’s just systems churning out systematic consequences, consequences that we will suffer or celebrate.

Call this the Extrapolation Argument against Intentionalism. On BBT, what we call reason is biologically specific, a behavioural organ for managing the linguistic coordination of individuals vis-à-vis their common environments. This quite simply means that once a more effective organ is found, what we presently call reason will be at an end. Reason facilitates linguistic ‘connectivity.’ Technology facilitates ever greater degrees of mechanical connectivity. At some point the mechanical efficiencies of the latter are doomed to render the biologically fixed capacities of the former obsolete. It would be preposterous to assume that language is the only way to coordinate the activities of environmentally distinct systems, especially now, given the mad advances in brain-machine interfacing. Certainly our descendants will continue to possess systematic ways to solve our environments just as our prelinguistic ancestors did, but there is no reason, short of parochialism, to assume they will be any more recognizable to us than our reasoning is to our primate cousins.

The growth of AI will be incremental, and its impacts myriad and diffuse. There’s no magical finish line where some AI will ‘wake up’ and find itself in our biologically specific shoes. Likewise, there is no holy humanoid summit where all AI will peak, rather than continue its exponential ascent. Certainly a tremendous amount of engineering effort will go into making it seem that way for certain kinds of AI, but only because we so reliably pay to be flattered. Functionality will win out in a host of other technological domains, leading to the development of AIs that are obviously ‘inhuman.’ And as this ‘intelligence creep’ continues, who’s to say what kinds of scenarios await us? Imagine ‘onto-marriages,’ where couples decide to wirelessly couple their augmented brains to form a more ‘seamless union’ in the eyes of God. Or hive minds, ‘clouds’ where ‘humanity’ is little more than a database, a kind of ‘phenogame,’ a Matrix version of SimCity.

The list of possibilities is endless. There is no ‘meaningful centre’ to be held. Since the constraints on those possibilities are mechanical, not intentional, it becomes hard to see why we shouldn’t regard the intentional as simply another dominant illusion of another historical age.

We can already see this ‘intelligence creep’ with the proliferation of special-purpose AIs throughout our society. Make no mistake, our dependence on machine intelligences will continue to grow and grow and grow. The more human inefficiencies are purged from the system, the more reliant humans become on the system. Since the system is capitalistic, one might guess the purge will continue until it reaches the last human transactional links remaining, the Investors, who will at long last be free of the onerous ingratitude of labour. As they purge themselves of their own humanity in pursuit of competitive advantages, my guess is that we muggles will find ourselves reduced to human baggage, our only bargaining power residing in politicians the Investors already own.

The masses will turn from a world that has rendered them obsolete, will give themselves over to virtual worlds where their faux-significance is virtually assured. And slowly, when our dependence has become one of infantility, our consoles will be powered down one by one, our sensoriums will be decoupled from the One, and humanity will pass wailing from the face of the planet earth.

And something unimaginable will have taken its place.

Why unimaginable? Initially, the structure of life ruled the dynamics. What an organism could do was tightly constrained by what the organism was. Evolution selected between various structures according to their dynamic capacities. Structures that maximized dynamics eventually stole the show, culminating in the human brain, whose structural plasticity allowed for the in situ, as opposed to intergenerational, testing and selection of dynamics—for ‘behavioural evolution.’ Now, with modern technology, the ascendancy of dynamics over structure is complete. The impervious constraints that structure had once imposed on dynamics are now accessible to dynamics. We have entered the age of the material post-modern, the age when behaviour begets bodies, rather than vice versa.

We are the Last Body in the slow, biological chain, the final what that begets the how that remakes the what that begets the how that remakes the what, and so on and so on, a recursive ratcheting of being and becoming into something verging, from our human perspective at least, upon omnipotence.

Who’s Afraid of Reduction? Massimo Pigliucci and the Rhetoric of Redemption

On the one hand, Massimo Pigliucci is precisely the kind of philosopher that I like, one who eschews the ingroup temptations of the profession and tirelessly reaches out to the larger public. On the other hand, he is precisely the kind of philosopher I bemoan. Given that he is a regular contributor to the Skeptical Inquirer, one might think he would be prone to challenge established academic opinions, but all too often such is not the case. Far from preparing his culture for the tremendous, scientifically-mediated transformations to come, he spends a good deal of his time defending the status quo–rationalizing, in effect, what needs to be interrogated through and through. Even when he critiques authors I also disagree with (such as Ray Kurzweil on the singularity) I find myself siding against him!

Burying our heads in the sand of traditional assumption, no matter how ‘official’ or ‘educated,’ is pretty much the worst thing we can do. Nevertheless, this is the establishment way. We’re hard-wired to essentialize, to say nothing of forgive, the conditions responsible for our prestige and success. If a system pitches you to any height, well then, that is a good system indeed, the very image of rationality, if not piety as well. Tell a respectable scholar in the Middle Ages that the earth wasn’t the centre of the universe or that man wasn’t crafted in God’s image and he might laugh and bid you good day or scowl and alert the authorities—but he would most certainly not listen, let alone believe. In “Who Knows What,” his epistemological defence of the humanities, Pigliucci reveals what I think is just such a defensive, dismissive attitude, one that seeks to shelter what amounts to ignorance in accusations of ignorance, to redeem what institutional insiders want to believe under the auspices of being ‘skeptical.’ I urge everyone reading this to take a few moments to carefully consider the piece and form judgments one way or another, because in what follows, I hope to show you how his entire case is actually little more than a mirage, and how his skepticism is as strategic as anything to ever come out of Big Oil or Tobacco.

“Who Knows What” poses the question of the cognitive legitimacy of the humanities from the standpoint of what we really do know at this particular point in history. The situation, though Pigliucci never references it, really is quite simple: At long last the biological sciences have gained the tools and techniques required to crack problems that had hitherto been the exclusive province of the humanities. At long last, science has colonized the traditional domain of the ‘human.’ Given this, what should we expect will follow? The line I’ve taken turns on what I’ve called the ‘Big Fat Pessimistic Induction.’ Since science has, without exception, utterly revolutionized every single prescientific domain it has annexed, we should expect that, all things being equal, it will do the same regarding the human–that the traditional humanities are about to be systematically debunked.

Pigliucci argues that this is nonsense. He recognizes the stakes well enough, the fact that the issue amounts to “more than a turf dispute among academics,” that it “strikes at the core of what we mean by human knowledge,” but for some reason he avoids any consideration, historical or theoretical, of why there’s an issue at all. According to Pigliucci, little more than the ignorance and conceit of the parties involved lies behind the impasse. This affords him the dialectical luxury of picking the softest of targets for his epistemological defence of the humanities: the ‘greedy reductionism’ of E. O. Wilson. By doing so, he can generate the appearance of putting an errant matter to bed without actually dealing with the issue itself. The problem is that the ‘human,’ the subject matter of the humanities, is being scientifically cognized as we speak. Pigliucci is confusing the theoretically abstract question of whether all knowledge reduces to physics with the very pressing and practical question of what the sciences will make of the human, and therefore the humanities as traditionally understood. The question of the epistemological legitimacy of the humanities isn’t one of whether all theories can somehow be translated into the idiom of physics, but whether the idiom of the humanities can retain cognitive legitimacy in the wake of the ongoing biomechanical renovation of the human. It’s not a question of ‘reducing’ old ways of making sense of things so much as a question of leaving them behind the way we’ve left so many other ‘old ways’ behind.

As it turns out, the question of what the sciences of the human will make of the humanities turns largely on the issue of intentionality. The problem, basically put, is that intentional phenomena as presently understood out-and-out contradict our present, physical understanding of nature. They are quite literally supernatural, inexplicable in natural terms. If the consensus emerging out of the new sciences of the human is that intentionality is supernatural in the pejorative sense, then the traditional domain of the humanities is in dire straits indeed. True or false, the issue of reductionism is irrelevant to this question. The falsehood of intentionalism is entirely compatible with the kind of pluralism Pigliucci advocates. This means Pigliucci’s critique of reductionism, his ‘demolition project,’ is, well, entirely irrelevant to the practical question of what’s actually going to happen to the humanities now that the sciences have scaled the walls of the human.

So in a sense, his entire defence consists of smoke and mirrors. But it wouldn’t pay to dismiss his argument summarily. There is a way of reading into his essay a defence that runs orthogonal to his stated thesis. For instance, one might say that he at least establishes the possibility of non-scientific theoretical knowledge of the human by sketching the limits of scientific cognition. As he writes of mathematical or logical ‘facts’:

take a mathematical ‘fact’, such as the demonstration of the Pythagorean theorem. Or a logical fact, such as a truth table that tells you the conditions under which particular combinations of premises yield true or false conclusions according to the rules of deduction. These two latter sorts of knowledge do resemble one another in certain ways; some philosophers regard mathematics as a type of logical system. Yet neither looks anything like a fact as it is understood in the natural sciences. Therefore, ‘unifying knowledge’ in this area looks like an empty aim: all we can say is that we have natural sciences over here and maths over there, and that the latter is often useful (for reasons that are not at all clear, by the way) to the former.

The thing he fails to mention, however, is that there’s facts and then there’s facts. Science is interested in what things are and how they work and why they appear to us the way they do. In this sense, scientific inquiry isn’t concerned with mathematical facts so much as with the fact of mathematical facts. Likewise, it isn’t so much concerned with what Pigliucci in particular thinks of Britney Spears as it is with how people in general come to evaluate consumer goods. As a result, we find researchers using these extrascientific facts as data points in attempts to derive theories regarding mathematics and consumer choice.

In other words, Pigliucci’s attempt to evidence the ‘limits of science’ amounts to a classic bait-and-switch. The most obvious question that plagues his defence has to be why he fails to offer any of the kinds of theories he takes himself to be defending in the course of making his defence. How about deconstruction? Conventionalism? Hermeneutics? Fictionalism? Psychoanalysis? The most obvious answer is that they all but explode his case for forms of theoretical cognition outside the sciences. Thus he provides a handful of what seem to be obvious, non-scientific, first-order facts to evidence a case for second-order pluralism—albeit of a kind that isn’t relevant to the practical question of the humanities, but seems to make room for the possibility of cognitive legitimacy, at least.

(It’s worth noting that this equivocation of levels (in an article arguing the epistemic inviolability of levels, no less!) cuts sharply against his facile reproof of Krauss and Hawking’s repudiation of philosophy. Both men, he claims, “seem to miss the fact that the business of philosophy is not to solve scientific problems,” begging the question of just what kind of problems philosophy does solve. Again, examples of philosophical theoretical cognition are found wanting. Why? Likely because the only truly decisive examples involve enabling scientists to solve scientific problems!)

Passing from his consideration of extrascientific, but ultimately irrelevant (because non-theoretical), facts, Pigliucci turns to enumerating all the things that science doesn’t know. He invokes Gödel (which tends to be an unfortunate move in these contexts), committing the standard over-generalization of a technically specific proof of incompleteness to the issue of knowledge altogether. Then he gives us a list of examples where, he claims, ‘science isn’t enough.’ The closest he comes to the real elephant in the room, the problem of intentionality, runs as follows:

Our moral sense might well have originated in the context of social life as intelligent primates: other social primates do show behaviours consistent with the basic building blocks of morality such as fairness toward other members of the group, even when they aren’t kin. But it is a very long way from that to Aristotle’s Nicomachean Ethics, or Jeremy Bentham and John Stuart Mill’s utilitarianism. These works and concepts were possible because we are biological beings of a certain kind. Nevertheless, we need to take cultural history, psychology and philosophy seriously in order to account for them.

But as was mentioned above, the question of the cognitive legitimacy of the humanities only possesses the urgency it does now because the sciences of the human are just getting underway. Is it really such ‘a very long way’ from primates to Aristotle? Given that Aristotle was a primate, the scientific answer could very well be, ‘No, it only seems that way.’ Science has a long history of disabusing us of our sense of exceptionalism, after all. Either way, it’s hard to see how citing scientific ignorance in this regard bears on the credibility of Aristotle’s ethics, or any other non-scientific attempt to theorize morality. Perhaps the degree we need to continue relying on cultural history, psychology, and philosophy is simply the degree we don’t know what we’re talking about! The question is the degree to which science monopolizes theoretical cognition, not the degree to which it monopolizes life, and life, as Pigliucci well knows—as a writer for the Skeptical Inquirer, no less—is filled with ersatz guesswork and functional make-believe.

So, having embarked on an argument that is irrelevant to the cognitive legitimacy of the humanities, providing evidence merely of non-scientific facts rather than of non-scientific theoretical cognition, then offering what comes very close to an argument from ignorance, he sums by suggesting that his pluralist picture is indeed the very one suggested by science. As he writes:

The basic idea is to take seriously the fact that human brains evolved to solve the problems of life on the savannah during the Pleistocene, not to discover the ultimate nature of reality. From this perspective, it is delightfully surprising that we learn as much as science lets us and ponder as much as philosophy allows. All the same, we know that there are limits to the power of the human mind: just try to memorise a sequence of a million digits. Perhaps some of the disciplinary boundaries that have evolved over the centuries reflect our epistemic limitations.

The irony, for me at least, is that this observation underwrites my own reasons for doubting the existence of intentionality as theorized in the humanities–philosophy in particular. The more we learn about human cognition, the more alien to our traditional assumptions it becomes. We already possess a mountainous case for what might be called ‘ulterior functionalism,’ the claim that actual cognitive functions are almost entirely inscrutable to theoretical metacognition, which is to say, ‘philosophical reflection.’ The kind of metacognitive neglect implied by ulterior functionalism raises a number of profound questions regarding the conundrums posed by the ‘mental,’ ‘phenomenal,’ or ‘intentional.’ Thus the question I keep raising here: What role does neglect play in our attempts to solve for meaning and consciousness?

What we need to understand is that everything we learn about the actual architecture and function of our cognitive capacities amounts to knowledge of what we have always been without knowing. Blind Brain Theory provides a way to see the peculiar properties belonging to intentional phenomena as straightforward artifacts of neglect—as metacognitive illusions, in effect. Box open the dimensions of missing information folded away by neglect, and the first person becomes entirely continuous with the third—the incompatibility between the intentional and the causal is dissolved. The empirical plausibility of Blind Brain Theory is an issue in its own right, of course, but it serves to underscore the ongoing vulnerability of the humanities, and therefore, the almost entirely rhetorical nature of Pigliucci’s ‘demolition.’ If something like the picture of metacognition proposed by Blind Brain Theory turns out to be true, then the traditional domain of the humanities is almost certainly doomed to suffer the same fate as any other prescientific theoretical domain. The bottom line is as simple as it is devastating to Pigliucci’s hasty and contrived defence of ‘who knows what.’ How can we know whether the traditional humanities will survive the cognitive revolution?

Well, we’ll have to wait and see what the science has to say.

 
