Three Pound Brain

No bells, just whistling in the dark…

The Eliminativistic Implicit II: Brandom in the Pool of Shiloam

by rsbakker

In “The Eliminativistic Implicit I,” we saw how the implicit anchors the communicative solution of humans and their activities. Since comprehension consists in establishing connections between behaviours and their precursors, the inscrutability of those precursors requires we use explanatory posits, suppositional surrogate precursors, to comprehend ourselves and our fellows. The ‘implicit’ is a kind of compensatory mechanism, a communicative prosthesis for neglect, a ‘blank box’ for the post facto proposal of various, abductively warranted precursors.

We also saw how the implicit possessed a number of different incarnations:

1) The Everyday Implicit: The regime of folk posits adapted to solve various practical problems involving humans (and animals).

2) The Philosophical Implicit: The regime of intentional posits thought to solve aspects of the human in general.

3) The Psychological Implicit: The regime of functional posits thought to solve various aspects of the human in general.

4) The Mechanical Implicit: The regime of neurobiological posits thought to solve various aspects of the human in general.

The overarching argument I’m pressing is that only (4) holds the key to any genuine theoretical understanding of (1-3). On my account of (4), (1) is an adaptive component of socio-communicative cognition, (2) is largely an artifact of theoretical misapplications of those heuristic systems, and (3) represents an empirical attempt to approximate (4) on the basis of indirect behavioural evidence.

In this episode, the idea is to illustrate how both the problems and the apparent successes of the Philosophical Implicit can be parsimoniously explained in terms of neglect and heuristic misapplication via Robert Brandom’s magisterial Making It Explicit. We’ll consider what motivates Brandom’s normative pragmatism, why he thinks that only normative cognition can explain normative cognition. Without this motivation, the explanation of normative cognition defaults to natural cognition (epitomized by science), and Brandom quite simply has no subject matter. The cornerstone of his case is the Wittgensteinian gerrymandering argument against Regularism. As I hope to show, Blind Brain Theory dismantles this argument with surprising facility. And it does so, moreover, in a manner that explains why so many theorists (including myself at one time!) are so inclined to find the argument convincing. As it turns out, the intuitions that motivate Normativism turn on a cluster of quite inevitable metacognitive illusions.

Blind Agents

Making It Explicit: Reasoning, Representing, and Discursive Commitment is easily the most sustained and nuanced philosophical consideration of the implicit I’ve encountered. I was gobsmacked when I first read it in the late 90s. Stylistically, it had a combination of Heideggerean density and Analytic clarity that I found narcotic. Argumentatively, I was deeply impressed by the way Brandom’s interpretive functionalism seemed to actually pull intentional facts from natural hats, how his account of communal taking as seemed to render normativity at once ‘natural’ and autonomous. For a time, I bought into a great deal of what Brandom had to say—I was particularly interested in working my ‘frame ontology’ into his normative framework. Making It Explicit had become a big part of my dissertation… ere epic fantasy saved my life!

I now think I was deluded.

In this work, Brandom takes nothing less than the explication of the ‘game of giving and asking for reasons’ as his task, “making explicit the implicit structure characteristic of discursive practice as such” (649). He wants to make explicit the role that making explicit plays in discursive cognition. It’s worth pausing to ponder the fact that we do so very many things with only the most hazy or granular second-order understanding. It might seem so platitudinous as to go without saying, but it’s worth noting in passing at least: Looming large in the implicature of all accounts such as Brandom’s is the claim that we somehow know the world without ever knowing how we know the world.

As we saw in the previous installment, the implicit designates a kind of profound cognitive incapacity, a lack of knowledge regarding our own activities. The implicit entails what might be called a Blind Agent Thesis, or BAT. Brandom, by his own admission, is attempting to generalize the behaviour of the most complicated biomechanical system known to science while remaining almost entirely blind to the functioning of that system. (He just thinks he’s operating at an ‘autonomous social functional’ level). He is, as we shall see, effectively arguing his own particular BAT.

Insofar as every theoretician, myself included, is trying to show ‘what everyone is missing,’ there’s a sense in which something like BAT is hard to deny. Why all the blather, otherwise? But this negative characterization clearly has a problem: How could we do anything without knowing how to do it? Obviously we have to ‘know how’ in some manner, otherwise we wouldn’t be able to do anything at all! This is the sense in which the implicit can be positively characterized as a species of knowing in its own right. And this leads us to the quasi-paradoxical understanding of the implicit as ‘knowing without knowing,’ a knowing how to do something without knowing how to discursively explain that doing.

Making explicit, Brandom is saying, has never been adequately made explicit—this despite millennia of philosophical disputation. He (unlike Kant, say) never offers any reason why this is the case, any consideration of what it is about making explicit in particular that should render it so resistant to explication—but then philosophers are generally prone to take the difficulty of their problems as a given. (I’m not the only one out there shouting that the problem I happen to be working on is, like, the most important problem ever!) I mention this because any attempt to assay the difficulty of the problem of making making-explicit explicit would have explicitly begged the question of whether he (or anyone else) possessed the resources required to solve the problem.

You know, as blind and all.

What Brandom provides instead is an elegant reprise of the problem’s history, beginning with Kant’s fundamental ‘transformation of perspective,’ the way he made explicit the hitherto implicit normative dimension of making explicit, what allowed him “to talk about the natural necessity whose recognition is implicit in cognitive or theoretical activity, and the moral necessity whose recognition is implicit in practical activity, as species of one genus” (10).

Kant, in effect, had discovered something that humanity had been all but oblivious to: the essentially prescriptive nature of making explicit. Of course, Brandom almost entirely eschews Kant’s metaphysical commitments: for him, normative constraint lies in the attributions of other agents and nowhere else. Kant, in other words, had not so much illuminated the darkness of the implicit (which he baroquely misconstrues as ‘transcendental’) as snatched one crucial glimpse of its nature.

Brandom attributes the next glimpse to Frege, with his insistence on “respecting and enforcing the distinction between the normative significance of applying concepts and the causal consequences of doing so” (11). What Frege made explicit about making explicit, in other words, was its systematic antipathy to causal explanation. As Brandom writes:

“Psychologism misunderstands the pragmatic significance of semantic contents. It cannot make intelligible the applicability of norms governing the acts that exhibit them. The force of those acts is a prescriptive rather than a descriptive affair; apart from their liability to assessments of judgments as true and inferences as correct, there is no such thing as judgment or inference. To try to analyze the conceptual contents of judgments in terms of habits or dispositions governing the sequences of brain states or mentalistically conceived ideas is to settle on the wrong sort of modality, on causal necessitation rather than rational or cognitive right.” (12)

Normativity is naturalistically inscrutable, and thanks to Kant (“the great re-enchanter,” as Turner (2010) calls him), we know that making explicit is normative. Any explication of the implicit of making explicit, therefore, cannot be causal—which is to say, mechanistic. Frege, in other words, makes explicit a crucial consequence of Kant’s watershed insight: the fact that making explicit can only be made explicit in normative, as opposed to natural, terms. Explication is an intrinsically normative activity. Making causal constraints explicit at most describes what systems will do, never prescribes what they should do, so making explicit the causal constraints governing explication has the effect of rendering the activity unintelligible. The only way to make explication theoretically explicit is to make explicit the implicit normative constraints that make it possible.

Which leads Brandom to the third main figure of his brief history, Wittgenstein. Thus far, we know only that explication is an intrinsically normative affair—our picture of making explicit is granular in the extreme. What are norms? Why do they have the curious ‘force’ that they do? What does that force consist in? Even if Kant is only credited with making explicit the normativity of making explicit, you could say the bulk of his project is devoted to exploring questions precisely like these. Consider, for instance, his explication of reason:

“But of reason one cannot say that before the state in which it determines the power of choice, another state precedes in which this state itself is determined. For since reason itself is not an appearance and is not subject at all to any conditions of sensibility, no temporal sequence takes place in it even as to its causality, and thus the dynamical law of nature, which determines the temporal sequence according to rules, cannot be applied to it.” Kant, The Critique of Pure Reason, 543

Reason, in other words, is transcendental, something literally outside nature as we experience it, outside time, outside space, and yet somehow fundamentally internal to what we are. The how of human cognition, Kant believed, lies outside the circuit of human cognition, save for what could be fathomed via transcendental deduction. Kant, in other words, not only had his own account of what the implicit was, he also had an account for what rendered it so difficult to make explicit in the first place!

He had his own version of BAT, what might be called a Transcendental Blind Agent Thesis, or T-BAT.

Brandom, however, far prefers the later Wittgenstein’s answers to the question of how the intrinsic normativity of making explicit should be understood. As he writes,

“Wittgenstein argues that proprieties of performance that are governed by explicit rules do not form an autonomous stratum of normative statuses, one that could exist though no other did. Rather, proprieties governed by explicit rules rest on proprieties governed by practice. Norms that are explicit in the form of rules presuppose norms implicit in practices.” (20)

Kant’s transcendental represents just such an ‘autonomous stratum of normative statuses.’ The problem with such a stratum, aside from the extravagant ontological commitments allegedly entailed, is that it seems incapable of dealing with a peculiar characteristic of normative assessment known since ancient times in the form of Agrippa’s trilemma or the ‘problem of the criterion.’ The appeal to explicit rules is habitual, perhaps even instinctive, when we find ourselves challenged on some point of communication. Given the regularity with which such appeals succeed, it seems natural to assume that the propriety of any given communicative act turns on the rules we are prone to cite when challenged. The obvious problem, however, is that rule citing is itself a communicative act that can be challenged. It stems from occluded precursors the same as anything else.

What Wittgenstein famously argues is that what we’re appealing to in these instances is the assent of our interlocutors. If our interlocutors happen to disagree with our interpretation of the rule, suddenly we find ourselves with two disputes, two improprieties, rather than one. The explicit appeal to some rule, in other words, is actually an implicit appeal to some shared system of norms that we think will license our communicative act. This is the upshot of Wittgenstein’s regress of rules argument, the contention that “while rules can codify the pragmatic normative significance of claims, they do so only against a background of practices permitting the distinguishing of correct from incorrect applications of those rules” (22).

Since this account has become gospel in certain philosophical corners, it might pay to block out the precise way this Wittgensteinian explication of the implicit does and does not differ from the Kantian explication. One comforting thing about Wittgenstein’s move, from a naturalist’s standpoint at least, is that it adverts to the higher-dimensionality of actual practices—it’s pragmatism, in effect. Where Kant’s making explicit is governed from somewhere beyond the grave, Wittgenstein’s is governed by your friends, family, and neighbours. If you were to say there was a signature relationship between their views, you could cite this difference in dimensionality, the ‘solidity’ or ‘corporeality’ that Brandom appeals to in his bid to ground the causal efficacy of his elaborate architecture (631-2).

Put differently, the blindness on Wittgenstein’s account belongs to you and everyone you know. You could say he espouses a Communal Blind Agent Thesis, or C-BAT. The idea is that we’re continually communicating with one another while utterly oblivious as to how we’re communicating with one another. We’re so oblivious, in fact, we’re oblivious to the fact we are oblivious. Communication just happens. And when we reflect, it seems to be all that needs to happen—until, that is, the philosopher begins asking his damn questions.

It’s worth pointing out, while we’re steeping in this unnerving image of mass, communal blindness, that Wittgenstein, almost as much as Kant, was in a position analogous to empirical psychologists researching cognitive capacities back in the 1950s and 1960s. With reference to the latter, Piccinini and Craver have argued (“Integrating psychology and neuroscience: functional analyses as mechanism sketches,” 2011) that informatic penury was the mother of functional invention, that functional analysis was simply psychology’s means of making do, a way to make the constitutive implicit explicit in the absence of any substantial neuroscientific information. Kant and Wittgenstein are pretty much in the same boat, only absent any experimental means to test and regiment their guesswork. The original edition of Philosophical Investigations, in case you were wondering, was published in 1953, which means Wittgenstein’s normative contextualism was cultured in the very same informatic vacuum as functional analysis. And the high-altitude moral, of course, amounts to the same: times have changed.

The cognitive sciences have provided a tremendous amount of information regarding our implicit, neurobiological precursors, so much so that the mechanical implicit is a given. The issue now isn’t one of whether the implicit is causal/mechanical in some respect, but whether it is causal/mechanical in every respect. The question, quite simply, is one of what we are blind to. Our biology? Our ‘mental’ programming? Our ‘normative’ programming? The more we learn about our biology, the more we fill in the black box with scientific facts, the more difficult it seems to become to make sense of the latter two.

Ineliminable Inscrutability Scrutinized and Eliminated

Though he comes nowhere near framing the problem in these explicitly informatic terms, Brandom is quite aware of this threat. American pragmatism has always maintained close contact with the natural sciences, and post-Quine, at least, it has possessed more than its fair share of eliminativist inclinations. This is why he goes to such lengths to argue the ineliminability of the normative. This is why he follows his account of Kant’s discovery of the normativity of the performative implicit with an account of Frege’s critique of psychologism, and his account of Wittgenstein’s regress argument against ‘Regulism’ with an account of his gerrymandering argument against ‘Regularism.’

Regularism proposes we solve the problem of rule-following with patterns of regularities. If a given performance conforms to some pre-existing pattern of performances, then we call that performance correct or competent. If it doesn’t so conform, then we call it incorrect or incompetent. “The progress promised by such a regularity account of proprieties of practice,” Brandom writes, “lies in the possibility of specifying the pattern or regularity in purely descriptive terms and then allowing the relation between regular and irregular performance to stand in for the normative distinction between what is correct and what is not” (MIE 28). The problem with Regularism, however, is “that it threatens to obliterate the contrast between treating a performance as subject to normative assessment of some sort and treating it as subject to physical laws” (27). Thus the challenge confronting any Regularist account of rule-following, as Brandom sees it, is to account for its normative character. Everything in nature ‘follows’ the ‘rules of nature,’ the regularities isolated by the natural sciences. So what does the normativity that distinguishes human rule-following consist in?

“For a regularist account to weather this challenge, it must be able to fund a distinction between what is in fact done and what ought to be done. It must make room for the permanent possibility of mistakes, for what is done or taken to be correct to nonetheless turn out to be incorrect or inappropriate according to some rule or practice.” (27)

The ultimate moral, of course, is that there’s simply no way this can be done, there’s no way to capture the distinction between what happens and what ought to happen on the basis of what merely happens. No matter what regularity the Regularist adduces ‘to play the role of norms implicit in practice,’ we find ourselves confronted by the question of whether it’s the right regularity. The fact is any number of regularities could play that role, stranding us with the question of which regularity one should conform to—which is to say, the question of the very normative distinction the Regularist set out to solve in the first place. Adverting to dispositions to pick out the relevant regularity simply defers the problem, given that “[n]obody ever acts incorrectly in the sense of violating his or her own dispositions” (29).

For Brandom, as with Wittgenstein, the problem of Regularism is intimately connected to the problem of Regulism: “The problem that Wittgenstein sets up…” he writes, “is to make sense of a notion of norms implicit in practice that will not lose either the notion of the implicitness, as regulism does, or the notion of norms, as simple regularism does” (29). To see this connection, you need only consider one of Wittgenstein’s more famous passages from Philosophical Investigations:

§217. “How am I able to obey a rule?”–if this is not a question about causes, then it is about the justification for my following the rule in the way I do.

If I have exhausted the justifications I have reached bedrock, and my spade is turned. Then I am inclined to say: “This is simply what I do.”

The idea, famously, is that rule-following is grounded, not in explicit rules, but in our actual activities, our practices. The idea, as we saw above, is that rule-following is blind. It is ‘simply what we do.’ “When I obey a rule, I do not choose,” Wittgenstein writes. “I obey the rule blindly” (§219). But if rule-following is blind, just what we find ourselves doing in certain contexts, then in what sense is it normative? Brandom quotes McDowell’s excellent (certainly from a BBT standpoint!) characterization of the problem in “Wittgenstein on Following a Rule”: “How can a performance be nothing but a ‘blind’ reaction to a situation, not an attempt to act on interpretation (thus avoiding Scylla); and be a case of going by a rule (avoiding Charybdis)?” (Mind, Value, and Reality, 242).

Wittgenstein’s challenge, in other words, is one of theorizing nonconscious rule-following in a manner that does not render normativity some inexplicable remainder. The challenge is to find some way to avoid Regulism without lapsing into Regularism. Of course, we’ve grown inured to the notion of ‘implicit norms’ as a theoretical explanatory posit, so much so as to think them almost self-evident—I know this was once the case for me. But the merest questioning quickly reveals just how odd implicit norms are. Nonconscious rule-following is automatic rule-following, after all, something mechanical, dispositional. Automaticity seems to preclude normativity, even as it remains amenable to regularities and dispositions. Although it seems obvious that evaluation and justification are things that we regularly do, that we regularly engage in normative cognition navigating our environments (natural and social), it is by no means clear that only normative posits can explain normative cognition. Given that normative cognition is another natural artifact, the product of evolution, and given the astounding explanatory successes of science, it stands to reason that natural, not supernatural, posits are likely what’s required.

All this brings us back to C-BAT, the fact that Wittgenstein’s problem, like Brandom’s, is the problem of neglect. ‘This is simply what I do,’ amounts to a confession of abject ignorance. Recall the ‘Hidden Constraint Model’ of the implicit from our previous discussion. Cognizing rule-following behaviour requires cognizing the precursors to rule-following behaviour, precursors that conscious cognition systematically neglects. Most everyone agrees on the biomechanical nature of those precursors, but Brandom (like intentionalists more generally) wants to argue that biomechanically specified regularities and dispositions are not enough, that something more is needed to understand the normative character of rule-following, given the mysterious way regularities and dispositions preclude normative cognition. The only way to avoid this outcome, he insists, is to posit some form of nonconscious normativity, a system of preconscious, pre-communicative ‘rules’ governing cognitive discourse. The upshot of Wittgenstein’s arguments against Regularism seems to be that only normative posits can adequately explain normative cognition.

But suddenly, the stakes are flipped. Just as the natural is difficult to understand in the context of the normative, so too is the normative difficult to understand in the context of the natural. For some critics, this is difficulty enough. In Explaining the Normative, for instance, Stephen Turner does an excellent job tracking, among other things, the way Normativism attempts to “take back ground lost to social science explanation” (5). He begins by providing a general overview of the Normativist approach, then shows how these self-same tactics characterized social science debates of the early twentieth-century, only to be abandoned as their shortcomings became manifest. “The history of the social sciences,” he writes, “is a history of emancipation from the intellectual propensity to intentionalize social phenomenon—this was very much part of the process that Weber called the disenchantment of the world” (147). His charge is unequivocal: “Brandom,” he writes, “proposes to re-enchant the world by re-instating the belief in normative powers, which is to say, powers in some sense outside of and distinct from the forces known to science” (4). But although this is difficult to deny in a broad stroke sense, he fails to consider (perhaps because his target is Normativism in general, and not Brandom, per se) the nuance and sensitivity Brandom brings to this very issue—enough, I think, to walk away theoretically intact.

In the next installment, I’ll consider the way Brandom achieves this via Dennett’s account of the Intentional Stance, but for the nonce, it’s important that we keep the problem of re-enchantment on the table. Brandom is arguing that the inability of natural posits to explain normative cognition warrants a form of theoretical supernaturalism, a normative metaphysics, albeit one he wants to make as naturalistically palatable as possible.

Even though neglect is absolutely essential to their analyses of Regulism and Regularism, neither Wittgenstein nor Brandom so much as pauses to consider it. As astounding as it is, they simply take our utter innocence of our own natural and normative precursors as a given, an important feature of the problem ecology under consideration to be sure, but otherwise irrelevant to the normative explication of normative cognition. Any role neglect might play beyond anchoring the need for an account of implicit normativity is entirely neglected. The project of Making It Explicit is nothing other than the project of making the activity of making explicit explicit, which is to say, the project of overcoming metacognitive neglect regarding normative cognition, and yet nowhere does Brandom so much as consider just what he’s attempting to overcome.

Not surprisingly, this oversight proves catastrophic—for the whole of Normativism, and not simply Brandom.

Just consider, for instance, the way Brandom completely elides the question of the domain specificity of normative cognition. Normative cognition is a product of evolution, part of a suite of heuristic systems adapted to solve some range of social problems as effectively as possible given the resources available. It seems safe to surmise that normative cognition, as heuristic, possesses what Todd, Gigerenzer, and the ABC Research Group (2012) call an adaptive ‘problem-ecology,’ a set of environments possessing complementary information structures. Heuristics solve via the selective uptake of information, wedding them, in effect, to specific problem-solving domains. ‘Socio-cognition,’ which manages to predict, explain, even manipulate astronomically complex systems on the meagre basis of observable behaviour, is paradigmatic of a heuristic system. In the utter absence of causal information, it can draw a wide variety of reliable causal conclusions, but only within a certain family of problems. As anthropomorphism, the personification or animation of environments, shows, humans are predisposed to misapply socio-cognition to natural environments. Pseudo-solving natural environments via socio-cognition may have solved various social problems, but precious few natural ones. In fact, the process of ‘disenchantment’ can be understood as a kind of ‘rezoning’ of socio-cognition, a process of limiting its application to those problem-ecologies where it actually produces solutions.

Which leads us to the question: So what, then, is the adaptive problem ecology of normative cognition? More specifically, how do we know that the problem of normative cognition belongs to the problem ecology of normative cognition?

As we saw, Brandom’s argument against Regularism could itself be interpreted as a kind of ‘ecology argument,’ as a demonstration of how the problem of normative cognition does not belong to the problem ecology of natural cognition. Natural cognition cannot ‘fund the distinction between ought and is.’ Therefore the problem of normative cognition does not belong to the problem ecology of natural cognition. In the absence of any alternatives, we then have an abductive case for the necessity of using normative cognition to solve normative cognition.

But note how recognizing the heuristic, or ecology-dependent, nature of normative cognition has completely transformed the stakes of Brandom’s original argument. The problem for Regularism turns, recall, on the conspicuous way mere regularities fail to capture the normative dimension of rule-following. But if normative cognition were heuristic (as it almost certainly is), if what we’re prone to identify as the ‘normative dimension’ is something specific to the application of normative cognition, then this becomes the very problem we should expect. Of course the normative dimension disappears absent the application of normative cognition! Since Regularism involves solving normative cognition using the resources of natural cognition, it simply follows that it fails to engage resources specific to normative cognition. Consider Kripke’s formulation of the gerrymandering problem in terms of the ‘skeptical paradox’: “For the sceptic holds that no fact about my past history—nothing that was ever in my mind, or in my external behavior—establishes that I meant plus rather than quus” (Wittgenstein, 13). Even if we grant a rule-follower access to all factual information pertaining to rule-following, a kind of ‘natural omniscience,’ they will still be unable to isolate any regularity capable of playing ‘the role of norms implicit in practice.’ Again, this is precisely what we should expect given the domain specificity of normative cognition proposed here. If ‘normative understanding’ were the artifact of a cognitive system dedicated to the solution of a specific problem-ecology, then it simply follows that the application of different cognitive systems would fail to produce normative understanding, no matter how much information was available.
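Kripke’s ‘quus’ is defined precisely enough to be worth sketching concretely. The following toy sketch is mine, not Brandom’s or Kripke’s text, but it follows Kripke’s definition: quus agrees with addition whenever both arguments are below 57, and returns 5 otherwise. No finite record of past performances drawn from the small cases can distinguish the two candidate rules:

```python
# Kripke's plus/quus underdetermination, sketched as two functions.

def plus(x, y):
    """Ordinary addition."""
    return x + y

def quus(x, y):
    """Kripke's deviant rule: addition below 57, otherwise 5."""
    return x + y if x < 57 and y < 57 else 5

# Every 'past performance' drawn from arguments below 57 is consistent
# with BOTH candidate rules, so no such record can decide between them.
past_cases = [(x, y) for x in range(57) for y in range(57)]
assert all(plus(x, y) == quus(x, y) for x, y in past_cases)

# Yet the rules diverge on novel cases outside the recorded regularity:
print(plus(68, 57), quus(68, 57))  # 125 5
```

The point, for present purposes, is not that speakers ‘might really mean quus,’ but that the divergence only ever shows up on cases outside the recorded regularity, which is precisely the underdetermination the gerrymandering argument trades on.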

What doesn’t follow is that normative cognition thus lies outside the problem ecology of natural cognition, let alone inside the problem ecology of normative cognition. The ‘explanatory failure’ that Brandom and others use to impeach the applicability of natural cognition to normative cognition is nothing of the sort. It simply makes no sense to demand that one form of cognition solve another form of cognition as if it were that other form. We know that normative cognition belongs to social cognition more generally, and that social cognition—‘mindreading’—operates heuristically, that it has evolved to solve astronomically complicated biomechanical problems involving the prediction, understanding, and manipulation of other organisms absent detailed biomechanical information. Adapted to solve in the absence of this information, it stands to reason that the provision of that information, facts regarding biomechanical regularities, will render it ineffective—‘grind cognitive gears,’ you could say.

Since these ‘technical details’ are entirely invisible to ‘philosophical reflection’ (thanks to metacognitive neglect), the actual ecological distinction between these systems escapes Brandom, and he assumes, as all Normativists assume, that the inevitable failure of natural cognition to generate instances of normative cognition means that only normative cognition can solve normative cognition. Blind to our cognitive constitution, instances of normative cognition are all he or anyone else has available: our conscious experience of normative cognition consists of nothing but these instances. Explaining normative cognition is thus conflated with replacing normative cognition. ‘Competence’ becomes yet another ‘spooky explanandum,’ another metacognitive inkling, like ‘qualia,’ or ‘content,’ that seems to systematically elude the possibility of natural cognition (for suspiciously similar reasons).

This apparent order of supernatural explananda then provides the abductive warrant upon which Brandom’s entire project turns—all T-BAT and C-BAT approaches, in fact. If natural cognition is incapable, then obviously something else is required. Impressed by how our first-order social troubleshooting makes such good use of the Everyday Implicit, and oblivious to the ecological limits of the heuristic systems responsible, we effortlessly assume that making use of some Philosophical Implicit will likewise enable second-order social troubleshooting… that tomes like Making It Explicit actually solve something.

But as the foregoing should make clear, precisely the opposite is the case. As a system adapted to troubleshoot first-order social ecologies, normative cognition seems unlikely to theoretically solve normative cognition in any satisfying manner. The theoretical problems that plague Normativism—supernaturalism, underdetermination, and practical inapplicability—are precisely the problems we should expect if normative cognition were not in fact among the problems that normative cognition can solve.

As an evolved, biological capacity, however, normative cognition clearly belongs to the problem ecology of natural cognition. Simply consider how much the above sketch has managed to ‘make explicit.’ In parsimonious fashion it explains: 1) the general incompatibility of natural and normative cognition; 2) the inability of Regularism to ‘play the role of norms implicit in practice’; 3) why this inability suggests the inapplicability of natural cognition to the problem of normative cognition; 4) why Normativism seems the only alternative as a result; and 5) why Normativism nonetheless suffers the debilitating theoretical problems it does. It solves the notorious Skeptical Paradox, and much else besides, using only the idiom of natural cognition, which is to say, in a manner not only compatible with the life sciences, but empirically tractable as well.

Brandom is the victim of a complex of illusions arising out of metacognitive neglect. Wittgenstein, who had his own notion of heuristics and problem ecologies (grammars and language games), was sensitive to the question of what kinds of problems could be solved given the language we find ourselves stranded with. As a result, he eschews the kind of systematic normative metaphysics that Brandom epitomizes. He takes neglect seriously insofar as ‘this is simply what I do’ demarcates, for him, the pale of credible theorization. Even so, he nevertheless succumbs to a perceived need to submit, however minimally or reluctantly, the problem of normative cognition (in terms of rule-following) to the determinations of normative cognition, and is thus compelled to express his insights in the self-same supernatural idiom as Brandom, who eschews what is most valuable in Wittgenstein, his skepticism, and seizes on what is most problematic, his normative metaphysics.

There is a far more parsimonious way. We all agree humans are physical systems nested within a system of such systems. What we need to recognize is how being so embedded poses profound constraints on what can and cannot be cognized. What can be readily cognized are other systems (within a certain range of complexity). What cannot be readily cognized is the apparatus of cognition itself. The facts we call ‘natural’ belong to the former, and the facts we call ‘intentional’ belong to the latter. Where the former commands an integrated suite of powerful environmental processors, the latter relies on a hodgepodge of specialized socio-cognitive and metacognitive hacks. Since we have no inkling of this, we have no inkling of their actual capacities, and so run afoul of a number of metacognitive impasses. So, for instance, intentional cognition has evolved to overcome neglect, to solve problems in the absence of causal information. This is why philosophical reflection convinces us we somehow stand outside the causal order via choice or reason or what have you. We quite simply confuse an incapacity, our inability to intuit our biomechanicity, with a special capacity, our ability to somehow transcend or outrun the natural order.

We are physical in such a way that we cannot intuit ourselves as wholly physical. To cognize nature is to be blind to the nature of cognizing. To be blind to that blindness is to think cognizing has no nature. So we assume that nature is partial, and that we are mysteriously whole, a system unto ourselves.

Reason be praised.

 

Leaving It Implicit

by rsbakker

Since the aim of philosophy is not “to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term” with as little information as possible, I thought it worthwhile to take another run at the instinct to raise firewalls about certain discourses, to somehow immunize them from the plague of scientific information to come. I urge anyone disagreeing to sound off, to explain to me how it’s possible to assert the irrelevance of any empirical discovery in advance, because I am duly mystified. On the one hand, we have these controversial sketches regarding the nature of meaning and normativity, and on the other we have the most complicated mechanism known, the human brain. And learning the latter isn’t going to revolutionize the former?

Of course it is. We are legion, a myriad of subpersonal heuristic systems that we cannot intuit as such. We have no inkling of when we swap between heuristics and so labour under the illusion of cognitive continuity. We have no inkling as to the specific problem-ecologies our heuristics are adapted to and so labour under the illusion of cognitive universality. We are, quite literally, blind to the astronomical complexity of what we are and what we do. I’ve spent these past 18 months on TPB brain-storming novel ways to conceptualize this blindness, and how we might see the controversies and conundrums of traditional philosophy as its expression.

Say that consciousness accompanies/facilitates/enables a disposition to ‘juggle’ cognitive resources, to creatively misapply heuristics in the discovery of exaptive problem ecologies. Traditional philosophy, you might say, represents the institutionalization of this creative misapplication, the ritualized ‘making problematic’ of ourselves and our environments. As an exercise in serial misapplication, one must assume (as indeed every individual philosophy does) that the vast bulk of philosophy solves nothing whatsoever. But if one thinks, as I do, that philosophy was a necessary condition of science and democracy, then the obvious, local futility of the philosophical enterprise would seem to be globally redeemed. Thinkers are tinkers, and philosophy is a grand workshop: while the vast majority of the gadgets produced will be relegated to the dustbin, those few that go retail can have dramatic repercussions.

Of course, the hubris is there staring each and every one of us in the face, though its universality renders it almost invisible. To the extent that we agree with ourselves, we all assume we’ve won the Magical Belief Lottery—the conviction, modest or grand, that this gadget here will be the one that reprograms the future.

I’m going to call my collection of contending gadgets, ‘progressive naturalism,’ or more simply, pronaturalism. It is progressive insofar as it attempts to continue the project of disenchantment, to continue the trend of replacing traditional intentional understanding with mechanical understanding. It is naturalistic insofar as it pilfers as much information and as many of its gadgets from natural science as it can.

So from a mechanical problem-solving perspective, words are spoken and actions… simply ensue. Given the systematicity of the ensuing actions, the fact that one can reliably predict the actions that typically follow certain utterances, it seems clear that some kind of constraint is required. Given the utter inaccessibility of the actual biomechanics involved, those constraints need to be conceived in different terms. Since the beginning of philosophy, normativity has been the time-honoured alternative. Rather than positing causes, we attribute reasons to explain the behaviour of others. Say you shout “Duck!” to your golf partner. If he fails to duck and turns to you quizzically instead, you would be inclined to think him incompetent, to say something like, “When I say ‘Duck!’ I mean ‘Duck!’”

From a mechanical perspective, in other words, normativity is our way of getting around the inaccessibility of what is actually going on. Normativity names a family of heuristic tools, gadgets that solve problems absent biomechanical information. Normative cognition, in other words, is a biomechanical way of getting around the absence of biomechanical information.
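The ‘gadget’ picture can be sketched in a few lines of code. This is purely illustrative—the lookup table, the function name, and the examples are all invented for the sketch, not drawn from the essay—but it shows the structure of a heuristic that predicts behaviour from utterances with zero access to the biomechanics producing that behaviour, and that works only inside its narrow problem ecology:

```python
# A toy 'normative gadget': predict what someone will do from what was
# said, with no access to the neural machinery producing the act.
# All names and conventions here are invented for illustration.

CONVENTIONS = {
    "Duck!": "ducks",
    "Fore!": "covers head",
    "Watch out!": "freezes and looks around",
}

def predict_response(utterance):
    """Heuristic prediction: cheap and reliable inside its problem
    ecology (shared conventions), and silent outside it."""
    return CONVENTIONS.get(utterance, "no prediction")

if __name__ == "__main__":
    print(predict_response("Duck!"))        # inside the ecology
    print(predict_response("E = mc^2"))     # outside it: no prediction
```

The point of the sketch is that the gadget is mechanical through and through, yet nothing about using it requires, or even permits, peering into the mechanism it stands in for.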

What else would it be?

From a normative perspective, however, the biomechanical does not seem to exist, at least at the level of expression. This is no coincidence, given that normative heuristics systematically neglect otherwise relevant biomechanical information. Nor is the manifest incompatibility between the normative and biomechanical perspectives any coincidence: as a way to solve problems absent mechanical information, normative cognition will only reliably function in those problem ecologies lacking that information. Information formatted for mechanical cognition simply ‘does not compute.’

From a normative perspective, in other words, the ‘normative’ is bound to seem both ontologically distinct and functionally independent vis-à-vis the mechanical. And indeed, once one begins taking a census of the normative terms used in biomechanical explanations, it begins to seem clear that normativity is not only distinct and independent, but that it comes first, that it is, to adopt the occult term normalized by the tradition, ‘a priori.’

From the mechanical perspective, these are natural mistakes to make given that mechanical information systematically eludes theoretical metacognition as well. As I said, we are blind to the astronomical complexities of what we are and what we do. Whenever a normative philosopher attempts to ‘make explicit’ our implicit sayings and doings they are banking on the information and cognitive resources they happen to have available. They have no inkling that they’re relying on any heuristics at all, let alone a variety of them, let alone any clear sense of the narrow problem-ecologies they are adapted to solve. They are at best groping their way to a possible solution in the absence of any information pertaining to what they are actually doing.

From the mechanical perspective, in other words, the normative philosopher has only the murkiest idea of what’s going on. They theorize ‘takings as’ and ‘rules’ and ‘commitments’ and ‘entitlements’ and ‘uses’—they develop their theoretical vocabulary—absent any mechanical information, which is to say, absent the information underwriting the most reliable form of theoretical cognition humanity has ever achieved.

The normative philosopher is now in a bind. Given that the development of their theoretical vocabulary turns on the absence of mechanical information, they have no way of asserting that what they are ‘making explicit’ is not actually mechanical. If the normativity of the normative is not given, then the normative philosopher simply cannot assume normative closure, that the use of normative terms—such as ‘use’—implicitly commits any user to any kind of theoretical normative realism, let alone this or that one. This is the article of faith I encounter most regularly in my debates with normative types: that I have to be buying into their picture somehow, somewhere. My first-order use of ‘use’ no more commits me to any second-order interpretation of the ‘meaning of use’ as something essentially normative than uttering the Lord’s name in vain commits me to Christianity. The normative philosopher’s inability to imagine how it could be otherwise certainly commits me to nothing. Evolution has given me all these great, normative gadgets—I would be an idiot not to use them! But please, if you want to convince me that these gadgets aren’t gadgets at all, that they are something radically different from anything in nature, then you’re going to have to tell me how and why.

It’s just foot-stomping otherwise.

And this is where I think the bind becomes a garrotte, because the question becomes one of just how the normative philosopher could press their case. If they say their theoretical vocabulary is merely ‘functional,’ a way to describe actual functions at a ‘certain level,’ you simply have to ask them to evidence this supposed ‘actuality.’ How can you be sure that your ‘functions’ aren’t, as Craver and Piccinini would argue, ‘mechanism sketches,’ ways to rough out what is actually going on absent the information required to know what’s actually going on? It is a fact that we are blind to the astronomical complexity of what we are and what we do: How do you know if the rope you keep talking about isn’t actually an elephant’s tail?

The normative philosopher simply cannot presume the sufficiency of the information at their disposal. On the one hand, the first-order efficacy of the target vocabulary in no way attests to the accuracy of their second-order regimentations: our ‘mindreading’ heuristics were selected precisely because they were efficacious. The same can be said of logic or any other apparently ‘irreducibly normative’ family of formal problem-solving procedures. Given the relative ease with which these procedures can be mechanically implemented in a simple register system, it’s hard to understand how the normative philosopher can insist they are obviously ‘intrinsically normative.’ Is it simply a coincidence that our brains are also mechanical? Perhaps it is simply our metacognitive myopia, our (obvious) inability to intuit the mechanical complexity of the brain buzzing behind our eyeballs, that leads us to characterize them as such. This would explain the utter lack of second-order, theoretical consensus regarding the nature of these apparently ‘formal’ problem solving systems. Regardless, the efficacy of normative terms in everyday contexts no more substantiates any philosophical account of normativity than the efficacy of mathematics substantiates any given philosophy of mathematics.

Normative intuitions, on the other hand, are equally useless. If ‘feeling right’ had anything but a treacherous relationship with ‘being right,’ we wouldn’t be having this conversation. Not only are we blind to the astronomical complexities of what we are and what we do, we’re blind to this blindness as well! Like Plato’s prisoners, normative philosophers could be shackled to a play of shadows, convinced they see everything they need to see simply for want of information otherwise.

But aside from intuition (or whatever it is that disposes us to affirm certain ‘inferences’ more than others), just what does inform normative theoretical vocabularies?

Good question!

From the mechanical perspective, normative cognition involves the application of specialized heuristics in specialized problem-ecologies—ways we’ve evolved (and learned) to muddle through our own mad complexities. When I utter ‘use’ I’m deploying something mechanical, a gadget that allows me to breeze past the fact of my mechanical blindness and to nevertheless ‘cognize’ given that the gadget and the problem ecologies are properly matched. Moreover, since I understand that ‘use,’ like ‘meaning,’ is a gadget, I know better than to hope that second-order applications of this and other related gadgets to philosophical problem-ecologies will solve much of anything—that is, unless your problem happens to be filling lecture time!

So when Brandom writes, for instance, “What we could call semantic pragmatism is the view that the only explanation there could be for how a given meaning gets associated with a vocabulary is to be found in the use of that vocabulary…” (Extending the Project of Analysis, 11), I hear the claim that the heuristic misapplications characteristic of traditional semantic philosophy can only be resolved via the heuristic misapplications characteristic of traditional pragmatic philosophy. We know that normative cognition is profoundly heuristic. We know that heuristics possess problem ecologies, that they are only effective in parochial contexts. Given this, the burning question for any project like Brandom’s has to be whether the heuristics he deploys are even remotely capable of solving the problems he tackles.

One would think this is a pretty straightforward question deserving a straightforward answer—and yet, whenever I raise it, it’s either passed over in silence or I’m told that it doesn’t apply, that it runs roughshod over some kind of magically impermeable divide. Most recently I was told that my account refuses to recognize that we have ‘perfectly good descriptions’ of things like mathematical proof procedures, which, since they can be instantiated in a variety of mechanisms, must be considered independently of mechanism.

Do we have perfectly good descriptions of mathematical proof procedures? This is news to me! Every time I dip my toe in the philosophy of mathematics I’m amazed by the florid diversity of incompatible theoretical interpretations. In fact, it seems pretty clear that we have no consensus-compelling idea of what mathematics is.

Does the fact that various functions can be realized in a variety of different mechanisms mean that those functions must be considered independently of mechanism altogether? Again, this is news to me. As convenient as it is to pluck apparently identical functions from a multiplicity of different mechanisms in certain problem contexts, it simply does not follow that one must do the same for all problem contexts. For one, how do we know we’ve got those functions right? Perhaps the granularity of the information available occludes a myriad of functional differences. Consider money: despite being a prototypical ‘virtual machine’ (as Dennett calls it in his latest book), there can be little doubt that the mechanistic details of its instantiation have a drastic impact on its function. The kinds of computerized nanosecond transactions now beginning to dominate financial markets could make us pine for good old ‘paper changing hands’ days soon enough. Or consider normativity: perhaps our blindness to the heuristic specificity of normative cognition has led us to theoretically misconstrue its function altogether. There’s gotta be some reason why no one seems to agree. Perhaps mathematics baffles us simply because we cannot intuit how it is instantiated in the human machine! We like to think, for instance, that the atemporal systematicity of mathematics is what makes it so effective—but how do we know this isn’t just another ‘noocentric’ conceit? After all, we have no way of knowing what function our conscious awareness of mathematical cognition plays in mathematical cognition more generally. All that seems certain is that it is not the whole story. Perhaps our apparently all-important ‘abstractions’ are better conceived as low-dimensional shadows of what is actually going on.
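The granularity worry can be made concrete in a few lines. The sketch below is illustrative only—the function names are invented—but it shows two different mechanisms realizing the ‘same’ function (addition) that are identical at the coarse grain yet diverge at a finer grain (number of steps taken, hence timing), exactly the kind of mechanistic detail a function-level description occludes:

```python
# Two 'mechanisms' realizing the 'same' function: addition.
# At the coarse grain (the value returned) they are identical;
# at a finer grain (steps taken) they differ. Illustrative only.

def add_direct(a, b):
    """Mechanism 1: native addition. One step, regardless of inputs."""
    return a + b, 1

def add_by_increment(a, b):
    """Mechanism 2: Peano-style repeated successor. Step count
    grows with the second argument."""
    result, steps = a, 0
    for _ in range(b):
        result += 1
        steps += 1
    return result, steps

if __name__ == "__main__":
    print(add_direct(3, 4))        # same value
    print(add_by_increment(3, 4))  # same value, different step count
```

Whether the two count as ‘the same function’ depends entirely on the grain of description—which is the money example again: the ‘virtual machine’ abstraction holds only until the mechanistic details (here, timing) start doing work.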

And all this is just to say that normativity, even in its most imposing, formal guises, isn’t something magical. It is an evolved capacity to solve specific problems given limited resources. It is natural—not normative. As a natural feature of human cognition, it is simply another object of ongoing scientific inquiry. As another object of ongoing scientific inquiry, we should expect our traditional understanding to be revolutionized, that positions such as ‘inferentialism’ will come to sound every bit as prescientific as they in fact are. To crib a conceit of Feynman’s: the more we learn, the more the neural stage seems too big for the normative philosopher’s drama.