The Eliminativistic Implicit (I): The Necker Cube of Everyday and Scientific Explanation
by rsbakker
Go back to what seems the most important bit, then ask the Intentionalist this question: What makes you think you have conscious access to the information you need? They’ll twist and turn, attempt to reverse the charges, but if you hold them to this question, it should be a show-stopper.
What follows, I fear, is far more long-winded.
Intentionalists, I’ve found, generally advert to one of two general strategies when dismissing eliminativism. The first is founded on what might be called the ‘Preposterous Complaint,’ the idea that eliminativism simply contradicts too many assumptions and intuitions to be considered plausible. As Uriah Kriegel puts it, “if eliminativism cannot be acceptable unless a relatively radical interpretation of cognitive science is adopted, then eliminativism is not in good shape” (“Non-phenomenal Intentionality,” 18). But where this criticism would be damning in other, more established sciences, it amounts to little more than an argument ad populum in the case of cognitive science, which as of yet lacks any consensual definition of its domain. The very naturalistic inscrutability behind the perpetual controversy also motivates the Eliminativist’s radical interpretation. The idea that something very basic is wrong with our approach to questions of experience and intentionality is by no means a ‘preposterous’ one. You could say the reality and nature of intentionality is the question. The Preposterous Complaint, in other words, doesn’t so much impugn the position as insinuate career suicide.
The second turns on what might be called the ‘Presupposition Complaint,’ the idea that eliminativism implicitly presupposes the very intentionality that it claims to undermine. The tactic generally consists of scanning the eliminativist’s claims, picking out various intentional concepts, then claiming that use of such concepts implicitly affirms the existence of intentionality. The Eliminativist, in other words, commits ‘cognitive suicide’ (as Lycan, 2005, calls it). Insofar as the use of intentional concepts is unavoidable, and insofar as the use of intentional concepts implicitly affirms the existence of intentionality, intentionality is ineliminable. The Eliminativist is thus caught in an obvious contradiction, explicitly asserting not-A on the one hand, while implicitly asserting A on the other.
On BBT, intentionality as traditionally theorized, far from simply ‘making explicit’ what is ‘implicitly the case,’ is actually a kind of conceptual comedy of errors turning on heuristic misapplication and metacognitive neglect. Such appeals to ‘implicit intentionality,’ in other words, are appeals to the very thing BBT denies. They assume the sufficiency of the very metacognitive intuitions that positions such as my own call into question. The Intentionalist charge of performative contradiction simply begs the question. It amounts to nothing more than the bald assertion that intentionality cannot be eliminated because intentionality is ineliminable.
The ‘Presupposition Complaint’ is pretty clearly empty as an argumentative strategy. In dialogical terms, however, I think it remains the single biggest obstacle to the rational prosecution of the Intentionalist/Eliminativist debate—if only because of the way it allows so many theorists to summarily dismiss the threat of Eliminativism. Despite its circularity, the Presupposition Complaint remains the most persistent objection I encounter—in fact, many critics persist in making it even after its vicious circularity has been made clear. And this has led me to realize the almost spectacular importance of the role the notion of the implicit plays in all such debates. For many thinkers, the intentional nature of the implicit is simply self-evident, somehow obvious to intuition. This is certainly how it struck me before I began asking the kinds of questions motivating the present piece. After all, what else could the implicit be, if not the intentional ‘ground’ of our intentional ‘practices’?
In what follows, I hope to show how this characterization of the implicit, far from obvious, actually depends, not only on ignorance, but on a profound ignorance of our ignorance. On the account I want to give here, the implicit, far from naming some spooky ‘infraconceptual’ or ‘transcendental’ before of thought and cognition, simply refers to what we know is actually occluded from metacognitive appraisals of experience: namely, nature as described by science. To frame the issue in terms of a single question, what I want to ask in this post and its sequels is, What warrants the Intentionalist’s claims regarding implicit normativity, say, over an Eliminativist’s claims of implicit mechanicity?
So what is the implicit? Given the crucial role the concept plays in a variety of discourses, it’s actually remarkable how few theorists have bothered with the question of making the implicit qua implicit explicit (Stephen Turner and Eugene Gendlin are signature exceptions in this regard, of course). Etymologically, ‘implicit’ derives from the Latin, implicitus, the participle of implico, which means ‘to involve’ or ‘to entangle,’ meanings that seem to bear more on implicit’s perhaps equally mysterious relatives, ‘imply’ or ‘implicate.’ According to Wiktionary, uses that connote ‘entangled’ are now obsolete. Implicit, rather, is generally taken to mean, 1) “Implied indirectly, without being directly expressed,” 2) “Contained in the essential nature of something but not openly shown,” and 3) “Having no reservations or doubts; unquestioning or unconditional; usually said of faith or trust.” Implicit, in other words, is generally taken to mean unspoken, intrinsic, and unquestioned.
Prima facie, at least, these three senses are clearly related. Unless spoken about, the implicit cannot be questioned, and so must remain an intrinsic feature of our performances. The ‘implicit,’ in other words, refers to something operative within us that nonetheless remains hidden from our capacity to consciously report. Logical or material inferential implications, for instance, guide subsequent transitions within discourse, whether we are conscious of them or not. The same might be said of ‘emotional implications,’ or ‘political implications,’ or so on.
Let’s call this the Hidden Constraint Model of the implicit, the notion that something outside conscious experience somehow ‘contains’ organizing principles constraining conscious experience. The two central claims of the model can be recapitulated as:
1) The implicit lies in what conscious cognition neglects. The implicit is inscrutable.
2) The implicit somehow constrains conscious cognition. The implicit is effective.
From inscrutability and effectiveness, we can infer at least two additional features pertaining to the implicit:
3) The effective constraints on any given moment of conscious cognition require a subsequent moment of conscious cognition to be made explicit. We can only isolate the biases specific to a claim we make subsequent to that claim. The implicit, in other words, is only retrospectively accessible.
4) Effective constraints can only be consciously cognized indirectly via their effects on conscious experience. Referencing, say, the ‘implicit norms governing interpersonal conduct’ involves referencing something experienced only in effect. ‘Norms’ are not part of the catalogue of nature—at least as anything recognizable as such. The implicit, in other words, is only inferentially accessible.
So consider, as a test case, Hume’s famous meditations on causation and induction. In An Enquiry Concerning Human Understanding, Hume points out how reason, no matter how cunning, is powerless when it comes to matters of fact. Short of actual observation, we have no way of divining the causal connections between events. When we turn to experience, however, all we ever observe is the conjunction of events. So what brings about our assumptive sense of efficacy, our sense of causal power? Why should repeating the serial presentation of two phenomena produce the ‘feeling,’ as Hume terms it, that the first somehow determines the second? Hume’s ‘skeptical solution,’ of course, attributes the feeling to mere ‘custom or habit.’ As he writes, “[t]he appearance of a cause always conveys the mind, by a customary transition, to the idea of an effect” (ECHU, 51, italics my own).
All four of the features enumerated above are clearly visible in the above. Hume makes no dispute of the fact that the repetition of successive events somehow produces the assumption of efficacy. “On this,” he writes, “are founded all our reasonings concerning matters of fact or existence” (51). Exposure to such repetitions fundamentally constrains our understanding of subsequent exposures, to the point where we cannot observe the one without assuming the other—to the point where the bulk of scientific knowledge is raised upon it. Efficacy is effective—to say the least!
But there’s nothing available to conscious cognition—nothing observable in these successive events—over and above their conjunction. “One event follows another,” Hume writes; “but we never can observe any tie between them. They seem conjoined, but never connected” (49). Efficacy, in other words, is inscrutable as well.
So then what explains our intuition of efficacy? The best we can do, it seems, is to pause and reflect upon the problem (as Hume does), to posit some X (as Hume does) reasoning from what information we can access. Efficacy, in other words, is only retrospectively and inferentially accessible.
We typically explain phenomena by plugging them into larger functional economies, by comprehending how their precursors constrain them and how they constrain their successors in turn. This, of course, is what made Hume’s discovery—that efficacy is inscrutable—so alarming. When it comes to environmental inquiries we can always assay more information via secondary investigation and instrumentation. As a result, we can generally solve for precursors in our environments. When it comes to metacognitive inquiries such as Hume’s, however, we very quickly stumble into our own incapacity. “And what stronger instance,” Hume asks, “can be produced of the surprising ignorance and weakness of the understanding, than the present?” (51). Efficacy, the very thing that binds phenomena to their precursors, is itself without precursors.
Not surprisingly, the comprehension of cognitive phenomena (such as efficacy) without apparent precursors poses a special kind of problem. Given efficacy, we can comprehend environmental nature. We simply revisit the phenomena and infer, over and over, accumulating the information we need to arbitrate between different posits. So how, then, are we supposed to comprehend efficacy? The empirical door is nailed shut. No matter how often we revisit and infer, we simply cannot accumulate the data we need to arbitrate between our various posits. Above, we see Hume rooting around with questions (our primary tool for making ignorance visible), and finding no trace of what grounds his intuitions of empirical efficacy. Thus the apparent dilemma: Either we acknowledge that we simply cannot understand these intuitions, “that we have no idea of connexion or power at all, and that these words are absolutely without any meaning” (49), or we elaborate some kind of theoretical precursor, some fund of hidden constraint, that generates, at the very least, the semblance of knowledge. We posit some X that ‘reveals’ or ‘expresses’ or ‘makes explicit’ the hidden constraint at issue.
These ‘X posits’ have been the bread and butter of philosophy for some time now. Given Hume’s example it’s easy to see why: the structure and dynamics of cognition, unlike the structure and dynamics of our environment, do not allow for the accumulation of data. The myriad observational opportunities provided by environmental phenomena simply do not exist for phenomena like efficacy. Since individual (and therefore idiosyncratic) metacognitive intuitions are all we have to go on, our makings explicit are pretty much doomed to remain perpetually underdetermined—to be ‘merely philosophical.’
I take this as uncontroversial. What makes philosophy philosophy as opposed to a science is its perennial inability to arbitrate between incompatible theoretical claims. This perennial inability to arbitrate between incompatible theoretical claims, like the temporary inability to arbitrate between incompatible theoretical claims in the sciences, is in some important respect an artifact of insufficient information. But where the sciences generally possess the resources to accumulate the information required, philosophy does not. Aside from metacognition or ‘theoretical reflection,’ philosophy has precious little in the way of informational resources.
And yet we soldier on. The bulk of traditional philosophy relies on what might be called the Accessibility Conceit: the notion that, despite more than two thousand years of failure, retrospective (reflective, metacognitive) interrogations of our activities somehow access enough information pertaining to their ‘intrinsic character’ to make the inferential ‘expression’ of our implicit precursors a viable possibility. Hope, as they say, springs eternal. Rather than blame their discipline’s manifest institutional incapacity on some more basic metacognitive incapacity, philosophers generally blame the problem on the various conceptual apparatuses used. If they could only get their concepts right, the information is there for the taking. And so they tweak and they overturn, posit this precursor and that, and the parade of ‘makings explicit’ grows and grows and grows. In a very real sense, the Accessibility Conceit, the assumption that the tools and material required to cognize the implicit are available, is the core commitment of the traditional philosopher. Why show up for work, otherwise?
The question of comprehending conscious experience is the question of comprehending the constitutive and dynamic constraints on conscious experience. Since those constraints don’t appear within conscious experience, we pay certain people called ‘philosophers’ to advance speculative theories of their nature. We are a rather self-obsessed species, after all.
Advancing speculative hypotheses regarding each other’s implicit nature is something we do all the time. According to Robin Dunbar, some two thirds of human communication is devoted to gossip. We are continually replaying, revisiting—even our anticipations yoke the neural engines of memory. In fact, we continually interrogate our emotionally charged interactions, concocting rationales, searching for the springs of others’ actions, declaring things like ‘She’s just jealous,’ or ‘He’s on to you.’ There is, you might say, an ‘Everyday Implicit’ implicit in our everyday discourse.
As there has to be. Conscious experience may be ‘as wide as the sky,’ as Dickinson says, but it is little more than a peephole. Conscious experience, whatever it turns out to be, seems to be primarily adapted to deliberative behaviour in complex environments. Among other things, it operates as a training interface, where the deliberative repetition of actions can be committed to automatic systems. So perhaps it should come as no surprise that, like behaviour, it is largely serial. When peephole, serial access to a complex environment is all you have, the kind of retrospective inferential capacity possessed by humans becomes invaluable. Our ability to ‘make things explicit’ is pretty clearly a central evolutionary design feature of human consciousness.
In a fundamental sense, then, making-explicit is just what we humans do. It makes sense that with time, especially once literacy allowed for the compiling of questions—an inventory of ignorance, you might say—that we would find certain humans attempting to make making explicit itself explicit. And since making each other explicit was something that we seemed to do with some degree of reliability, it makes sense that the difficulty of this new task should confound these inquirers. The Everyday Implicit was something they used with instinctive ease, reliably attributing all manner of folk-intentional properties to individuals all the time. And yet, whenever anyone attempted to make this Everyday Implicit explicit, they seemed to come up with something different.
No one could agree on any canonical explication. And yet, aside from the ancient skeptics, they all agreed on the possibility of such a canonical explication. They all hewed to the Accessibility Conceit. And since the skeptics’ mysterian posit was as underdetermined as any of their own claims, they were inclined to be skeptical of the skeptics. Otherwise, their Philosophical Implicit remained the only game in town when it came to things human and implicit. They need only look to the theologians for confirmation of their legitimacy. At least they placed their premises before their conclusions!
But things have changed. Over the past few decades, cognitive scientists have developed a number of ingenious experimental paradigms designed to reveal the implicit underbelly of what we think and do. In the now notorious Implicit Association Test, for instance, the time subjects require to pair concepts is thought to indicate the cognitive resources required, and thus provide an indirect measure of implicit attitudes. If it takes a white individual longer to pair stereotypically black names with positive attributes than it does white names, this is presumed to evidence an ‘implicit bias’ against blacks. Actions, as the old proverb has it, speak louder than words. It does seem intuitive to suppose that the racially skewed effort involved in value identifications tokens some kind of bias. Versions of this paradigm continue to proliferate. Once the exclusive purview of philosophers, the implicit has now become the conceptual centerpiece of a vast empirical domain. Cognitive science has now revealed myriad processes of implicit learning, interpretation, evaluation, and even goal-setting. Taken together, these processes form what is generally referred to as System 1 cognition (see table below), an assemblage of specialized cognitive capacities—heuristics—adapted to the ‘quick and dirty’ solution of domain specific ‘problem ecologies’ (Chow, 2011; Todd and Gigerenzer, 2012), and which operate in stark contrast to what is called System 2 cognition, the slow, serial, and deliberate problem solving related to conscious access (defined in Dehaene’s operationalized sense of reportability)—what we take ourselves to be doing this very moment, in effect.
DUAL PROCESS THEORIES IN PSYCHOLOGY
| System 1 Cognition (Implicit) | System 2 Cognition (Explicit) |
| ----------------------------- | ----------------------------- |
| Not conscious | Conscious |
| Not human specific | Human specific |
| Automatic | Deliberative |
| Fast | Slow |
| Parallel | Sequential |
| Effortless | Effortful |
| Intuitive | Reflective |
| Domain specific | Domain general |
| Pragmatic | Logical |
| Associative | Rulish |
| High capacity | Low capacity |
| Evolutionarily old | Evolutionarily young |
* Adapted from Frankish and Evans, “The duality of mind: A historical perspective.”
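The reaction-time logic behind the Implicit Association Test mentioned above can be made concrete with a small sketch. This is a simplified, hypothetical version of the scoring idea (loosely after the D-score measure associated with the IAT literature), using invented latency data; the function name and numbers are illustrative assumptions, not the actual test protocol.

```python
import statistics

def iat_d_score(congruent_ms, incongruent_ms):
    """Crude IAT-style effect size: the difference in mean response
    latency between 'congruent' and 'incongruent' pairing blocks,
    scaled by the pooled standard deviation of all trials."""
    mean_diff = statistics.mean(incongruent_ms) - statistics.mean(congruent_ms)
    pooled_sd = statistics.stdev(congruent_ms + incongruent_ms)
    return mean_diff / pooled_sd

# Hypothetical latencies in milliseconds: responses are slower when
# the required pairing runs against the putative implicit association.
congruent = [650, 700, 620, 680, 710, 640]
incongruent = [820, 900, 780, 860, 840, 810]

d = iat_d_score(congruent, incongruent)
```

A positive `d` is read as indirect evidence of an ‘implicit bias’ toward the congruent pairing, which is precisely the point made above: the implicit is never observed, only inferred from its effects on measurable behaviour.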
What are called ‘dual process’ or ‘dual system’ theories of cognition are essentially experimentally driven complications of the crude dichotomy between unconscious/implicit and conscious/explicit problem solving that has been pondered since ancient times. As granular as this emerging empirical picture remains, it already poses a grave threat to our traditional explicitations of the implicit. Our cognitive capacities, it turns out, are far more fractionate, contingent, and opaque than we ever imagined. Decisions can be tracked prior to a subject’s ability to report them (Haynes, 2008; or here). The feeling of willing can be readily tricked, and thus stands revealed as interpretative (Wegner, 2002; Pronin, 2009). Memory turns out to be fractionate and nonveridical (See Bechtel, 2008, for review). Moral argumentation is self-promotional rather than truth-seeking (Haidt, 2012). Various attitudes appear to be introspectively inaccessible (See Carruthers, 2011, for extensive review). The feeling of certainty has a dubious connection to rational warrant (Burton, 2008). The list of such findings continually grows, revealing an ‘implicit’ that consistently undermines and contradicts our traditional and intuitive self-image—what Sellars famously termed our Manifest Image.
As Frankish and Evans (2009) write in their historical perspective on dual system theories:
“The idea that we have ‘two minds’ only one of which corresponds to personal, volitional cognition, has also wide implications beyond cognitive science. The fact that much of our thought and behaviour is controlled by automatic, subpersonal, and inaccessible cognitive processes challenges our most fundamental and cherished notions about personal and legal responsibility. This has major ramifications for social sciences such as economics, sociology, and social policy. As implied by some contemporary researchers … dual process theory also has enormous implications for educational theory and practice. As the theory becomes better understood and more widely disseminated, its implications for many aspects of society and academia will need to be thoroughly explored. In terms of its wider significance, the story of dual-process theorizing is just beginning.” (25)
Given the rhetorical constraints imposed by their genre, this amounts to the strident claim that a genuine revolution in our understanding of the human is underway, one that could humble us out of existence. The simple question is, Where does that revolution end?
Consider what might be called the ‘Worst Case Scenario’ (WCS). What if it were the case that conscious experience and cognition have evolved in such a way that the higher dimensional, natural truth of the implicit utterly exceeds our capacity to effectively cognize conscious experience and cognition outside a narrow heuristic range? In other words, what if the philosophical Accessibility Conceit were almost entirely unwarranted, because metacognition, no matter how long it retrospects or how ingeniously it infers, only accesses information pertinent to a very narrow band of problem solving?
Now I have a number of arguments for why this is very likely the case, but in lieu of those arguments, it will serve to consider the eerie way our contemporary disarray regarding the implicit actually exemplifies WCS. People, of course, continue using the Everyday Implicit the way we always have. Philosophers continue positing their incompatible versions of the Philosophical Implicit the way they have for millennia. And scientists researching the Natural Implicit continue accumulating data, articulating a picture that seems to contradict more and more of our everyday and philosophical intuitions as it gains dimensionality.
Given WCS, we might expect the increasing dimensionality of our understanding would leave the functionality of the Everyday Implicit intact, that it would continue to do what it evolved to do, simply because it functions the way it does regardless of what we learn. At the same time, however, we might expect the growing fidelity of the Natural Implicit would slowly delegitimize our philosophical explications of that implicit, not only because those explications amount to little more than guesswork, but because of the fundamental incompatibility of the intentional and the causal conceptual registers.
Precisely because the Everyday Implicit is so robustly functional, however, our ability to gerrymander experimental contexts around it should come as no surprise. And we should expect that those invested in the Accessibility Conceit would take the scientific operationalization of various intentional concepts as proof of 1) their objective existence, and 2) the fact that only more cognitive labour, conceptual, empirical, or both, is required.
If WCS were true, in other words, one might expect that cognitive sciences invested in the Everyday and Philosophical Implicit, like psychology, would find themselves inexorably gravitating about the Natural Implicit as its dimensionality increased. One might expect, in other words, that the Psychological Implicit would become a kind of decaying Necker Cube, an ‘unstable bi-stable concept,’ one that would alternately appear to correspond to the Everyday and Philosophical Implicit less and less, and to the Natural Implicit more and more.
Part Two considers this process in more detail.
Hi Scott, I’ve been following your blog with interest and I have a clarification question about your view about the mind. As far as I know, dual process theorists like Keith Frankish are critical of massive modularity, since higher, reflective cognition is not modular; only lower-level cognition is. Is your eliminativist position committed to massive modularity? (The main characters in Neuropath seemed to be advocates of that view.) In that case I think you should reject outright the dual process theory. Also, even if the dual-system view shows that “our cognitive capacities are far more fractionate, contingent, and opaque than we ever imagined”, it also leaves room for the idea that sometimes we are in control of our conscious doxastic deliberations, decision-making and actions. Maybe higher-order reasoning is less authoritative and central than philosophers have traditionally claimed, but it exists nonetheless. In other words, it seems to me that our Cartesian intuitions about the mind are partially vindicated by dual-process theory, whereas your theory absolutely rejects them. So, basically, I wonder whether your eliminativist position commits you to massive modularity and to rejecting the dual-process view outright.
Great question, Axl. Welcome to the board! I think that system 1 cognition is characterized by massive heuristic modularity, and system 2 cognition is something altogether different, but in no way ‘Cartesian.’ It’s not as though ‘system 2’ is ‘domain general’ in any straightforward way. I personally look at it as an ‘exaptation machine,’ something keyed to broadcasting novel problems to a variety of heuristic systems, fishing for solutions – similar to Carruthers, if I remember aright. For me the important (and uncontroversial) thing is that it’s heuristics up and down: the degrees to which they overlap or are encapsulated is something that will be sorted out in due course. Otherwise, the fact remains that conscious cognition picks up information from the blackness and broadcasts it back to the blackness, sometimes to work it over again, sometimes not, and there is simply no way for any biologically realistic notion of metacognition to get a handle on any of the ‘third variables’ involved, let alone all of the ‘essential’ ones. ‘Self-presentation,’ of course, is simply magic. This strongly suggests that the actual functions discharged run orthogonal to metacognition – and it allows you to explain a whole heap of philosophical mischief! Either way, it means the Cartesian has to explain how the brain is supposed to cognize itself in anything other than a radically heuristic way. I’m open to hearing accounts.
The only encapsulation that matters is the Platonic Cave encapsulation suffered by any act of conscious metacognition, the brain’s inability to solve itself in the same high dimensional way it solves its environments, for simple want of perspective and information.
Always strange how critics of the eliminativist arguments you harbor always fall back on tautologies as you point out (“It amounts to nothing more than the bald assertion that intentionality cannot be eliminated because intentionality is ineliminable.”).
Interesting essay, makes sense of certain issues. So unlike Brandom and his normative hatchet chopping theories of making explicit, you see it as just what ‘is’ – this is what we do rather than what we think we do, etc. More of a pragmatic realization of the brain’s processes rather than some philosophical bric-a-brac grounding or condition of thinking, etc.
By the way, ran across this work by Michael Tomasello, A Natural History of Human Thinking, in which he argues for some kind of ‘shared intentionality’ hypothesis, etc. Was looking on your site to see if you had a critique but was unable to find anything. Have you thoughts? Seems to be closer to some kind of cooperative or cultural brain notion, somewhat like the distributed intelligence ideas floating around. I haven’t invested in the book but thought I’d see if you had thoughts?
Also reading Flash Boys by Michael Lewis about the guys that cracked the black coders who have been running the pirating of hundreds of billions through time-hacking techniques in shaving front runs of buys and sales on Wall Street. What’s interesting about this is not the economic aspects, but the underlying mechanisms of the brain that allowed a unique set of individuals to investigate and uncover the complexity and solve the threat. I’m still trying to figure out how to incorporate a post-intentional mode of thought or perspective in descriptive writing to make explicit such unusual processes and interactions. I mean we’re talking about the interaction of brain and machine in these high-speed fiber optic algorithmic networks among warring parties vying with each other over economic agendas and human freedom. Economics is driving this research in neurosciences and mathematics. In many ways these guys are already developing the AI algorithms of everyday life that are beyond any one person to understand, yet a particular singular individual not only uncovered the complexity but discovered a way to beat them at their own game.
The brain sciences on this score will probably open a whole new can of worms in the coming years as they continue to transpose theory with practice in both humans and machines.
Reading Brandom back in the 90’s was a revelation to me, simply because it truly seemed to offer a way out, but the more I worked with it, the more it came to seem to be more normative metaphysics – which it is, of course. What’s worse, it doesn’t even deliver on the things it purports to deliver even if you grant its assumptions.
Yeah, I’m reading that very book along with Malafouris’s How Things Shape the Mind, which I find interesting in the same ‘halfway there’ I find all skeptically minded enactivist approaches. I actually have a post in the blocks on a single quote from Tomasello, which for me sums the way he’s really just trading in occultisms. You would like Malafouris much more, I think.
I’ve long thought that the real ‘Skynet Doomsday Scenario’ will involve financial AI systems, which are hungry to hoover up and ‘comprehend’ as much information as they can get their circuits on. The crazy thing is the way the AI seems to be metastasizing throughout the system, so that you now have systems to automatically report ‘financial news’ for consumption by HFT systems. The whole thing bears very careful watching because the way the inefficiencies of the human are being purged from these systems could very well provide the model for the way the human will be purged elsewhere. For me all this is simply proof of concept: the more machines do semantic double-duty, the more apparent it becomes that the semantic is an illusion.
Good! I’ll need to check Malafouris’s How Things Shape the Mind out…
Yes, and now that Google is going at this AI from the consumer angle the whole end to end ecommerce nexus will be in place stripped of its human equation. Instead of talking heads we’ll have our friendly talking machines… slavery with a machinic smile. The semantic black hole be damned. haha…
In a non-scientific way the intentionalists have a point. The idea that “intentionality cannot be eliminated because intentionality is ineliminable” forces us to ask whether intentionality is ineliminable because we can’t do without it. It might turn out that even if it does not exist our societies can’t do without it, so we’ll have to continue pretending it does. That having been said, the effort being made by science and the business world to hack into our System 1 processes directly suggests that at some point in the not too distant future all of us will be forced to face the contradiction between what we are and what we believe/prefer ourselves to be. There may be a backlash of some sort. It may be severe. Have any studies ever been done about how the subjects of experiments like the ‘implicit association test’ feel about what the experiments reveal to them about themselves?
Damn good question. I’m sure someone has taken a looksee: there’s a bunch of depressing evidence to the effect that knowing your implicit biases actually makes you more likely to run afoul of them!
Regarding your first point, I would draw a distinction between the two instances of ineliminability you cite. The Intentionalist is arguing that their normative metaphysics is ineliminable because it is true (in some wonky sense). The question of whether the masses can live without their traditional delusions is a different one entirely. I certainly don’t think so, which is why I think society is becoming more and more ‘Akratic,’ split between nihilistic managerial tactics and fantastical ‘man-in-the-street’ belief-systems.
As I interpreted our exchange on the presupposition objection, Scott, it was left as a stalemate, not as a checkmate, because you don’t define some of your fundamental terms and so it’s unknown whether at the end of the day you replace folk conceptions (including references to normativity, purpose, and meaning) with non-folk ones.
As interesting as your formulations here are, I’m not sure the point about presuppositions has to be that complicated. It’s just a question of whether one set of terms really is eliminated in favour of another one. Are they replaced, when all the definitions are finally spelled out, or are we playing whack-a-mole?
You don’t really talk yet as much about science as I’d have hoped. But here’s how a presupposition objection might run in this context: “The eliminativist says there’s no such thing as semantic meaning, even though our talk about such meaning serves various functions. But in terms of how we understand the real world, that world doesn’t contain anything like the meaning of symbols. Instead, there’s the scientific picture which replaces the folk one. But how do we understand the scientific picture, in turn, without presupposing the meaning of symbols? What literally does a scientific theory consist of if not meaningful symbols? The eliminativist owes us an “account” of scientific understanding which doesn’t itself refer to the very concepts that are supposed to be eliminated. If we say that science is all about efficacy, it’s hard to see how science could differ from any other natural process, since there’s power (causal determination) just about everywhere we look. And yet science does so differ. Thus, without the concept of semantic meaning, we may not know what we’re talking about when we say that the scientific view supersedes the folk one.”
I must say that I’m troubled by one paragraph in particular in this article. It’s the one that starts with “I take this as uncontroversial.” First, you say that philosophy has trouble arbitrating between competing theoretical claims. This assumes a science-centered view of philosophy. I don’t think that the offering of theories of the facts is philosophy’s main job. To be sure, before modern science philosophers did offer such theories, because there was no clear distinction between philosophers and scientists. But even in ancient Greece, philosophy served what you’d call non-cognitive functions, such as the function of making people more like Socrates: skeptical of mass delusions and thus alienated from society. Philosophy acts as a sort of curse. This is its ethical function, and the existential question is whether enlightenment is worth the social penalty.
Also, you say that science has only a temporary inability to arbitrate such claims, because scientists can keep gathering information, whereas philosophers are limited to the “peephole of conscious experience.” But that peephole is a limit only on direct self-knowledge; otherwise, our folk models of other minds and our scientific models of distant galaxies can be justified in similar ways. Why can’t ordinary folks justify the concept of personhood in roughly the way that scientists justify their theoretical concepts? These concepts refer to entities that are only indirectly observed, because what we directly observe is, as you say, strictly speaking very limited, as Hume also pointed out. Kant’s response is adequate: we have innate tools of reasoning, so we needn’t rely just on that peephole. So why is this point about limited direct access to information so crucial? What’s wrong with indirect access? We observe certain behaviour and we posit mental capacities to explain it.
Moreover, we needn’t commit the genetic fallacy of saying that folk psychology is dubious because it derives from very limited mental programs. Instead, the question is the pragmatic one, as in science: does folk psychology work? You agree that it does, since the social heuristics evolved to keep us alive. We can say, then, that the folk concept of the self is useful even though the self may not really exist. The same pragmatic attitude can be taken towards every single scientific concept. So eliminativism becomes equivalent to pragmatism/instrumentalism, as distinguished from realism.
Are you a realist about scientifically-posited entities? You answer this elsewhere by talking about scientific efficacy, which does indeed sound pragmatic. The question is how you can be pragmatic about science but dismissive of folk psychology. Clearly, you must think that folk psychology isn’t as useful as scientific theories. But does this mean you have a standard of utility, whereas your eliminativism “implies” that there are no such things as standards? That would lead again to the presupposition objection.
I’m not sure how this is at all relevant to the problem of begging the question. So you need to make that explicit to me. Otherwise, even if I were to grant you that relevance, I’m not sure how it helps your case. Certainly my account would be dialectically more appealing if I could lay out the neural circuitry and associated ethologies in detail. But you’re not only in the same boat, since you have no non-circular way of defining your own occult terms, you also have the added burden of explaining all the apparent violations of physics! Accusing the eliminativist of occultism doesn’t strike me as a viable approach.
Regarding the “You don’t really talk yet as much …” paragraph. I’m not sure where the ‘undue complication’ comes in: your prospective interlocutor is begging the question. I dispute any interpretation of normative concepts that sees them as intrinsically normative, as belonging to some occult ontological order apart from the natural. I have a very detailed account of why we compulsively do this, and I challenge the intentionalist to adduce their evidence, to source the information that warrants their claims, because this, ultimately, is the only way for them to make their case. If they don’t, then normativism amounts to nothing more than foot-stomping, doesn’t it? Every time you say I’m denying the ‘existence’ of normative concepts you’re conflating a notorious, historically intractable interpretation of what normative concepts are with ‘normative concepts’ – begging the question. I’m not saying we don’t use normative concepts, only that normative concepts aren’t what the normativist says they are. Why do I have any burden beyond pointing this out?
Most normativists are cognitivists of some stripe. For those who are noncognitivist, that’s well and fine. I’m a fantasy writer after all! 😉 But (as I’ve asked you before) the question becomes: if a philosopher isn’t trying to convince you of anything, then why not read fantasy instead?
I agree entirely (though I don’t get how the genetic fallacy applies). I think folk-psychology works splendidly. But the question here is the question of what folk-psychology is, and I don’t think this question belongs to the set of questions that folk psychology can answer. And I have a detailed, empirically plausible explanation for why this is so.
‘Pragmatism/instrumentalism’ is typically a normativist position, one that adopts deflationary rhetoric to import a whole raft of second-order (ie, metaphysical) normative concepts – this is why I shy from these terms. The ‘efficacy’ I refer to is mechanical efficacy. Why? Because on the highest dimensional (most informed) view we possess, this is the only efficacy we can hang our cognitive cap on. I shy from ‘realism’ for similar reasons: on BBT, realism as traditionally understood runs afoul of our heuristic limits. ‘Objectivity’ and ‘subjectivity’ are laid out on the same cognitive axis, rather than being opposed to one another. Otherwise, I’m actually not sure we’re capable of fathoming issues at this level of generality – or that we need to.
The relevance of the stalemate point is that you and the transcendentalist each say the other begs the question and we haven’t figured out a way to resolve that issue. The transcendentalist says BBT begs the question about whether semantics, normativity, and teleology are ineliminable, since BBT uses words whose conventional meanings are consistent with the naive self-image and BBT’s fundamental definitions aren’t made explicit. BBT says the transcendentalist begs the question by assuming that BBT’s use of certain conventional English words requires the objectionable, folk definitions, as opposed to the nonstandard, mechanistic ones that BBT says it’s putting forward. The transcendentalist responds that there’s no proof yet that there are alternative definitions to the folk ones, since again BBT’s fundamental terms aren’t explicitly defined. And round and around it goes.
But I’m not so interested in that line of argument anymore, since I’ve satisfied myself, in my later articles for TPB, that there are bridges between our philosophies even if transcendentalism is a dead end.
As for violating physics, I don’t think my accounts of meaning, normativity, or purpose are supernatural. Moreover, the occultism charge seems anachronistic after quantum mechanics and the like. Hume could make that charge against metaphysicians when they were still working within the mechanistic Newtonian paradigm. After quantum indeterminism and the rest, it sounds a little silly contrasting naturalism with magic. I’m not saying anything goes now. But I’m afraid a little more leeway is permitted now to humanists, whereas Hume thought he could just toss all those non-empirical books into the flames. For example, there’s chaos theory and emergentism which counter reductionism. Thus, property dualism is an option.
I don’t say anything is “intrinsically normative.” In the paper I left with you I argue that nature might be inherently aesthetic, or at least that it’s bound to punish or to reward those who take up the aesthetic attitude towards its processes, which attitude is practically the same as scientific objectivity and is thus quite consistent with naturalism.
I think that both philosophers and artists try to convince people of something. But there are different ways of doing so since we can appeal directly to reason or indirectly through the emotions, intuitions, or instincts. In the latter case we’re being rhetorical rather than strictly logical. Again, the semantic issue would be about the extent of “the cognitive.”
I agree that folk psychology can’t say what it is objectively and factually speaking, if the latter are defined in terms of the outputs of scientific methods. But once again I believe BBT would be much strengthened by a consideration of the nature of science, given BBT. I say this here because if we take a pragmatic view of science, we can speak of scientific models that overlap but which need not be entirely consistent. This is how physics currently stands with respect to relativity theory and quantum mechanics. Scientists use those two theories, because they work, but they’re not commensurate with each other.
So why couldn’t mechanists talk about the mind in their terms while the folk talk about it in theirs? You say that’s fine, but only the mechanistic account will tell us what’s really happening. I’m saying that assumes a realistic rather than a pragmatic view of science. Pragmatically, the issue of the one true account of reality is neither here nor there; in fact, it might be a Platonic fairytale. If a theory works it must pick up on some real aspect of the phenomenon. So different models can work according to different purposes, even if they talk slightly past each other. So be it, says the pragmatist, but you want more.
Again, I’d like to see you hash out this realism-pragmatism issue. You seem to be taking Rorty’s line with it, which is fine. You say we don’t need a deeper answer here, but I think BBT does, because you keep insisting that folk psychology doesn’t supply us with knowledge even though it’s useful. That seems to assume realism rather than pragmatism. But BBT undermines the realist’s categories of truth and meaning. So there’s a pickle there.
I fear I still don’t see the stalemate. The transcendentalist is accusing the eliminativist of incoherence, not begging the question. The eliminativist is disputing the transcendentalist’s account of normative phenomena. The transcendentalist is responding by saying that it is unintelligible to dispute their account, because the eliminativist has to agree with their account to dispute it. If that’s not a bad argument, I don’t know what is! The pickle the eliminativist finds themselves in is rhetorical, not argumentative. The real problem is that intentionalism is the coin of the realm, nothing more. They certainly don’t have any track record of explanatory success to fall back on!
I’m not sure I see the argument in the passage. If anything, quantum mechanics argues for BBT. The fact that causal reason breaks down at microscopic levels means just that: causal reason breaks down at microscopic levels. It raises the possibility that it breaks down at macroscopic levels in a manner friendly to intentionalism, but nothing more. What it does show, rather definitively, is that human cognition is heuristic, and that we should expect ‘heuristic snarls,’ to find ourselves running afoul of our own problem-solving capacity as a result. The only reason the second-order theoretical apparatus of quantum mechanics won acceptance was its efficacy. What problems has Brandom’s normative metaphysics solved recently? Meanwhile mechanical explanation continues to leverage more and more power at the macroscopic level. BBT represents a way to see the millennial confusion surrounding intentionality as a series of heuristic snarls. It also offers a number of experimental possibilities – a way to test for efficacy. This is the only way it will be able to overcome its counterintuitivity in the long haul. The intentionalists, on the other hand, are arguing that humans regularly violate physics in some kind of special, functional, emergent way. Sounds like occultism to me! Just one that flatters our intuition.
BBT actually provides a way to understand theoretical underdetermination as a variant of functionalism. Many different kinds of mechanism can converge upon similar functions. Either way, I’m not sure how you’re doing much more than inserting the term ‘realism’ into my account, which is more interested in the dimensionality of the information available for problem-solving. Physical nature is the highest dimensional info source we have, and our causal heuristic systems, as trained for and articulated within institutionalized science, have far and away the widest ‘problem ecology’ humanity has ever known. Given this, BBT can explain away our perennial second-order confusion regarding the intentional as something analogous to visual illusions. This strikes me as more than enough – as more than anybody else has been able to come up with at least! Heaping speculation on the ‘ultimate nature’ of reality or science would do anything BUT provide additional warrant, I think.
“…and there is simply no way for any biologically realistic notion of metacognition to get a handle on any of the ‘third variables’ involved, let alone all of the ‘essential’ ones.”
Scott,
When you mentioned “third variables”, it reminded me of Rorty’s rejection of the notion that truth is some tertium quid between the meanings of words and the way the world is; a third way to see the inside and outside which combines causal interactions with the environment and pre-epistemological rules for action. Do you think Rorty’s rejected “tertia” are what those “third variables” appear to be after metacognition theorizes them as a semiotic triadicity of description, code function and transcendental glue in between called correspondence? In other words, do “third variables” end up being theorized by metacognition as something like Wittgenstein’s wheel in the machine that is turned by the machine without moving anything itself, i.e., the magical background of true and false? Thanks.
Pretty much. Rorty’s semantic tertia are the way the tradition has ontologized what amounts to ignorance. Medial neglect means that all the enabling machinery of cognition goes uncognized, leaving only the metacog intuition of some kind of systematic covariance, without any access to the genuine nature of that connection. It comes across as a magical systematicity, which philosophy, operating under the accessibility conceit, then attempts to rationalize as a kind of positive feature (commits the accomplishment fallacy), in terms of ‘aboutness,’ ‘transparency,’ ‘reference,’ ‘truth,’ ‘meaning’ and so forth. The Wittgensteinian critique of the ‘picture theory’ works insofar as it shifts focus to the embodied efficacies of language use, only to run afoul of the metacog illusions pertaining to normativity, generating yet another second-order theoretical apparatus on the basis of mistaking low-D wisps for high-D realities. Ultimately he entirely elides the ‘third-variables problem’ as well, positing pragmatic functions given what little information is available for verbal reporting – functions that only make sense given the absence of certain dimensions of information.
Can folk-psychological ‘states of mind’ be mapped onto neurological ‘states of brain’ such that any state of mind can be induced by inducing the appropriate state of brain and any subjectively perceived state of mind has a corresponding objectively perceived (using whatever imaging techniques become available and defining brain to include everything neurologically wired to it) state of brain? If so then the language used to describe states of mind can be replaced by the language used to describe states of brain. Intentionality and its colleagues will be eliminable. Whether such mapping turns out to be possible is an empirical issue on which scientific progress is apparently being made. It may therefore be an issue about which philosophy is nearing the end of its usefulness. If the mapping does prove to be possible folk psychology will still be useful in the same way that the folk physics an outfielder uses to judge the trajectory of a baseball is still useful. There is an enormous amount of mathematical calculation implicit in the act of chasing down and catching a long fly ball, but the outfielder probably does not have the ability to make those calculations explicit, and almost certainly not in time to make the catch. The language of mathematical physics is more powerful and more general but performing the calculations implicitly and delivering the results as sensorimotor outputs is more useful during the game.
The fly ball example is a good one because it turns out that the brain uses a very powerful ‘quick and dirty’ heuristic rather than doing all those calculations. Rather than map the ball through space, it simply tracks the ball in the visual field, keeping the catcher moving constantly to keep it where it needs to be. It’s a perspectivally grounded shortcut – like intentionality – incredibly powerful, but not in a high-dimensional way. If the brain had solved the ball problem by calculating velocity, trajectory, and atmospheric conditions, the information could be fed to a missile system to shoot the ball down. This is basically what intentionalists are doing: confusing perspective-dependent fixes for a-perspectival fixes. Using folk vocabulary is well and fine: it’s the second-order ontologization or quasi-ontologization that’s the problem.
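For what it’s worth, the perspectival shortcut can be sketched in a few lines. This is only a toy illustration (the function name, the numbers, and the constant-speed fielder are my own stand-ins, not anything from the thread): for a drag-free fly ball, a fielder who simply arrives at the landing spot on time sees the tangent of the ball’s elevation angle rise at a constant rate – zero ‘optical acceleration’ – which is the low-dimensional, perspective-dependent cue the gaze heuristic is thought to exploit (‘optical acceleration cancellation’ in the literature), with no velocities or trajectories ever computed.

```python
import numpy as np

def simulate_gaze_heuristic(vx=20.0, vy=25.0, g=9.81, fielder_start=120.0, steps=200):
    """Toy demo of the fly-ball shortcut (illustrative parameters).

    A drag-free ball is launched from the origin; a fielder starts beyond
    the landing point and runs at the constant speed that gets him there
    exactly when the ball lands.  Along the way, tan(elevation angle of
    the ball, as seen by the fielder) grows *linearly* in time, so its
    'optical acceleration' (second difference) is ~zero -- the cue the
    heuristic cancels, with no trajectory ever calculated."""
    T = 2.0 * vy / g                         # total flight time
    landing = vx * T                         # where the ball comes down
    speed = (landing - fielder_start) / T    # constant running speed
    # sample times strictly inside (0, T); angle is degenerate at t=0 and t=T
    t = np.linspace(0.0, T, steps, endpoint=False)[1:]
    ball_x = vx * t
    ball_y = vy * t - 0.5 * g * t**2
    fielder_x = fielder_start + speed * t    # fielder stays beyond the ball
    tan_elev = ball_y / (fielder_x - ball_x)
    optical_accel = np.diff(tan_elev, n=2)   # second difference ~ 0
    miss = abs(fielder_start + speed * T - landing)  # arrival error at t=T
    return tan_elev, optical_accel, miss
```

Run it and the second differences of `tan_elev` come out at floating-point noise, while the fielder’s position at landing time matches the landing point – the catch is made purely by keeping a visual quantity on a steady schedule.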
I think the mapping of folk-psychological states into brain states might be easier said than done. In fact, I can’t even understand how progress can be claimed toward a task that is based on a misunderstanding of the way psychological concepts work. I’m just gonna sketch two traditional philosophical arguments in this connection. First, relations between psychological states are conceptual, and all folk-psychological notions are interconnected. For instance, the relation between my intention to go to the library and my subsequent action is more than causal. Also, the relation between my belief that men are mortal and my belief that Socrates is mortal is logical, not causal. Put differently, relations between desires, beliefs and actions are constitutive. Good luck trying to map these constitutive relations in terms of causal, contingent relations between brain states! The point is the one made by Peter Hacker in his “Philosophical Foundations of Neuroscience”: folk-psychological vocabulary cannot be eliminated by science because it is the vocabulary in which we make sense of neuroscience. It is a frame of reference or basic criterion in which we can understand the language of neuroscience. Similar points are made even by naturalists like Lewis, in his classic paper “Mad Pain and Martian Pain.”
The second argument is the danger of what Peter Hacker calls “the mereological fallacy”: that is, attributing to the brain properties or abilities of the person/agent. The brain makes decisions, believes, imagines, etc. In that case, neuroscience is reduced to nonsense (the brain is an organ, not an agent). Also, the intentional vocabulary is not eliminated but made use of in talking about the brain as if it were an agent. This use of psychological concepts is parasitic upon the ordinary use of talking about persons, so the ordinary use cannot be eliminated since it is presupposed.
So, all in all, while I understand the excitement some people feel about the new “discoveries” of neuroscience, I cannot help but smile sadly thinking this is just old nonsense wrapped in a shiny new package, by scientists who skipped their analytic philosophy classes.
First, let’s get clear about what is being ‘eliminated’ here. What isn’t being eliminated is the first-order use of intentional terms, but rather the kinds of theoretical interpretations philosophers like Hacker are prone to give them. The thesis is anti-philosophical, not anti-everyday. What we want to understand is the systematicity of these first-order usages of norm-talk. The question is where we are likely to find this understanding. Is it by (a) theoretical norm-talk, or by (b) theoretical nature-talk? I’m saying (b). Now for Hacker to argue, as he does, that the answer has to be (a) because (b) presupposes (a) is simply to beg the question. You do see this?
The presupposition argument is bupkis. All Hacker really has is a competing theoretical account of norm-talk couched in the idiom of norm-talk. So then, given that it is a fact that human mindreading capacities are heuristic, and that all heuristic systems are keyed to specific problem-ecologies, one obvious question becomes, ‘How do you know that the problem of norm-talk belongs to the problem-ecology of norm-talk?’ Because, you have to admit, the perennial inability of intentionalist philosophy to agree on anything is precisely the kind of situation one might expect if the problem of norm-talk (what it is, how it functions) did not belong to the problem-ecology of norm-talk.
So then, how might the normativist go about answering this question? What evidential base can they appeal to in making their case that everyday first-order norm-talk can only be solved by theoretical second-order norm-talk? There’s no way to avoid the question of metacognition at this juncture, and you don’t have to read much neuroscience to realize this isn’t a comfortable place for the normativist to be. The situation isn’t clear cut, given that the metacognitive access we do possess is geared to some problem-ecology, but how do we know that problem-ecology includes the theoretical problem of what norm-talk is? BBT says, ‘Look. You see the astronomically complicated mess the brain is. How is the brain supposed to metacognize that astronomically complicated mess short of adapting a plurality of problem-specific fixes? C’mon, Hacker, is it really just a coincidence that we find all these questions/issues so tremendously mysterious, or could it be the case that ‘philosophical reflection’ is simply applying ‘norm-talk’ out of school when it applies it to the problem of norm-talk?’
The ironic thing is that this ‘problem-ecology problem’ is not at all that different from the ‘mereological fallacy,’ which argues that the adaptive problem-ecology of norm-talk is people, not brains. Yes! BBT says. Exactly! So why do you keep applying norm-talk to the problem of norm-talk? All of these ‘language games’ and ‘deontic scorekeepings’ and so on are examples of what might be called the ‘holistic fallacy,’ the attempt to use norm-talk to solve a problem – what is the nature of norm-talk? – that can only be solved via cause-talk. (This is why I largely agree, minus the Wittgensteinian normative metaphysics of course, with his criticism of Dennett).
So back to your, “Good luck trying to map these constitutive relations in terms of causal, contingent relations between brain states!” What ‘constitutive relations’? The ones you just think you see in your armchair? Or the ones that guy thinks he sees over in his armchair? Tell me, how is it that you see them at all? For that matter, how do you know that you’re looking at what you think you’re looking at? I would love to see the metacognitive system capable of doing that! One that can accurately intuit remarkable, physics-defying, ‘acausal functions’ through the murk of trillions of neurons. Meanwhile, doesn’t it trouble you in the least that metacognition almost certainly suffers causal neglect, a profound inability to intuit its operations the way it intuits its environments?
BBT not only waltzes through Hacker’s position, it actually explains why his position seems to make the sense it does! He, like Wittgenstein, understands the ecological specificity of heuristic cognition, the fact that norm-talk is adapted to specific domains, but expresses this intuition within a theoretical normative vocabulary, and so simultaneously violates it. He then canonizes this violation, and cites the inability of natural-talk to ‘explain’ the resulting metacognitive illusions as evidence of the inadequacy of that talk! It really is an ingenious theoretical scheme, but like all magic tricks it depends on what isn’t seen (metacognitive incapacity, in this case). It had me fooled long enough, that’s for bloody sure…
Scott, I’ll have to think more about your reply, but just a quick comment about the first paragraph: You say, “First, let’s get clear about what is being ‘eliminated’ here. What isn’t being eliminated is the first-order use of intentional terms, but rather the kinds of theoretical interpretations philosophers like Hacker are prone to give them. The thesis is anti-philosophical, not anti-everyday. What we want to understand is the systematicity of these first-order usages of norm-talk. The question is where we are likely to find this understanding. Is it by (a) theoretical norm-talk, or by (b) theoretical nature-talk? I’m saying (b). Now for Hacker to argue, as he does, that the answer has to be (a) because (b) presupposes (a) is simply to beg the question.”
I think Hacker rejects both (a) and (b) because on his view folk-psychology is not a theory of any kind. And viewing it as a theory distorts the way we use psychological concepts. So, if you can try to clarify what you mean by “the systematicity of first order uses of norm-talk”, that would be great.
To understand as thoroughly as possible the structure and dynamics of norm-talk. Insofar as Hacker thinks folk-psychology isn’t a theory (as I do as well) he has a theory of folk-psychology (as I do), and thus has to settle on (a), as he indeed does, or (b), as I do. The thing to remember is that it’s our actual instances of norm-talk we’re trying to understand, not our theories about those instances. So one of the big things that the theorist needs to explain is how norm-talk can be such an effective problem-solver in some contexts (everyday), and yet prove so ineffective in others (philosophy). It’s worth keeping in mind here that no one disagrees that human mind-reading capacities turn on neurology somehow. The question is whether we need something special over and above brain mechanisms. In this sense, BBT really does occupy the dialectical high-ground, since it says we can make sense of all these confounding intentional phenomena without some special X. It’s the rhetorical ground where it finds itself in the trenches! We have a pronounced attachment to our ‘special X’s,’ we humans!
Ok, Scott, so you claim that the intentional vocabulary plays a heuristic role, in the sense that it helps us roughly predict and explain our behavior and that of others. So, there’s an evolutionary story to be told regarding how our mind reading capacities came about.
One initial problem with this idea is that it distorts the way we use psychological concepts in our everyday lives. If I see someone crying, I know they’re in pain. If someone raises their voice I know they’re angry. These are things I’m certain of, they’re not responses to previous problems regarding the minds of others. Consciously, we never pose these problems, we just learn how to use psychological terms. It’s true that sometimes we have difficulty finding out what other people think, but these are sophisticated problems which came later on, more complex language-games. They are not the building-blocks of our “mind reading abilities”. Moreover, I’m rarely looking for third-person type of explanations for my own conscious psychological states. So, the first-person use of psychological terms becomes mysterious if we only emphasize the predictive and explanatory functions of folk-psychology.
Now, let’s say that all this heuristic reasoning takes place at an unconscious, implicit level. I actually agree this is possible, but I only want to repeat what Dennett and Brandom say: the intentionality of implicit cognitive systems or modules is derivative from original intentionality, which is the province of language-using, theory-building social creatures. Or we can make reference to the logic and designs of Mother-Nature. But, in my view, the intentionality of sub-personal systems is parasitic on linguistic intentionality.
So, I agree that Hacker would accept a weak version of the claim that our mind-reading abilities depend on neurology and the brain, but the important philosophical issue is how we describe those abilities. On his view, those abilities are nothing more than linguistic abilities of using certain psychological words which are in a criterial relation with certain types of behavior. And both he and Wittgenstein would agree that having a well-functioning brain is factually important for acquiring language and participating in our complex form of life. But, other than that, this fact is not philosophically illuminating, since we can easily imagine making attributions of mental states to agents who don’t in fact have brains. So, those are only factual, contingent, not criterial connections.
Scott,
The transcendentalist is responding by saying that it is unintelligible to dispute their account, because the eliminativist has to agree with their account to dispute it. If that’s not a bad argument, I don’t know what is!
Sounds like a very good argument to me. Traditionally, argument stands outside the subject being argued about, in which case requiring someone to agree with your account in order to dispute it would be a bad argument. But once argument itself becomes the subject of argument, calling this a bad argument just refers back to the traditional situation where that holds, and this isn’t the traditional situation. Usually we bang nails with hammers; when it comes to banging hammers with hammers, someone bashing the hammer in your hand with their own hammer does make a fair sound. What are you going to hammer their hammer with?
I’m not agreeing with the argument, I’m just saying actually it’s a really good one and deserves more than the traditional dismissal, as that dismissal doesn’t really apply.
Can we develop a way of speaking without speaking on this matter? For example, suppose I describe a corridor that propels the other person forward toward a flamethrower that fires across it, with a button before that point which appears to disable the flamethrower. Well, I haven’t said or argued ‘press the button to get through safely!’ The example avoids particular semantic meaning; it avoids argument.
Constructions like that could bypass the problem/put down the hammer, yet still strike the other hammer.
Unless someone puts their hammer down, it does seem mutually reinforcing. The eliminativist simply already believes certain conditions (conditions which are part of his argument) to be true. So although to the eliminativist it seems that science already does things without using meaning (brain surgery would be an example), and that he doesn’t need such a rhetoric, that really just expects the intentionalist to already believe the meat of his argument in order to then be convinced by that argument. That’s probably some bad argument.
*Waits to be pelted with MC Hammer puns…*
If at some time T=0 you do not have a conscious intent to go to the library, and at some subsequent time T=1 you do, has your brain changed in the interval between T=0 and T=1 in a way that corresponds to the change in your mind from “I do not have an intention to go to the library” to “I have an intention to go to the library”? Or can you have a change in your mind independent of a change in your brain? If you have had a change in your brain that corresponds to that change in your mind, then a complete description of that change in your brain (in terms of neuron firing rates, neurotransmitter levels, and so on) is equivalent to “I formed an intention to go to the library,” but free of intentional language. If you believe that you can have a change in your mind independent of a change in your brain, then I would ask whether you believe mind exists independent of brain. If you believe a mind can exist independent of a brain, how do you explain the way the mind seems to cease to exist when the brain dies? If the mind does not cease to exist when the brain dies, what happens to it? Does it go to heaven? Once you separate the mind from the brain, you’re on your way to doing theology.
Regarding the “mereological fallacy” I would ask a similar set of questions. Can a person/agent form an intention to go to the library without some corresponding change in the state of his brain? Do person/agents exist independent of the bodies/brains in which they are incarnated? Can a person/agent persist after the body/brain in which he is incarnated ceases to exist? Can this disembodied person/agent scare kids at Halloween? Perhaps the reason why the person/agent always ceases to exist whenever the brain/body in which the person/agent is incarnated dies is because the person/agent and the brain/body are the same thing.
Regarding Socrates, one might reasonably argue that the logical relation exemplified by:
All men are mortal
Socrates is a man
Therefore Socrates is mortal
exists and is valid independent of your feelings about syllogisms. However, your belief in the validity of this syllogism and your ability to apply syllogistic logic to novel major and minor premises exist in your brain. Where else could they exist? You have no idea what a gibbledwang is and you have no idea how to bwandwark, but if you know that
All gibbledwangs can bwandwark
Maurice is a gibbledwang
you conclude that
Maurice can bwandwark.
The process by which you concluded that Maurice can bwandwark took place in your brain. Where else could it have taken place?
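The point being traded on here — that the syllogism’s validity depends on its form, not on what the terms mean — can be sketched mechanically. Below is a minimal, illustrative Python sketch (the function name `barbara`, after the traditional name for this syllogistic form, and the tuple encoding are my own choices, not anything from the discussion): the same rule that licenses the Socrates conclusion licenses the nonsense one.

```python
# Sketch: syllogistic validity depends on form, not content.
# "All A are B" is encoded as the pair (A, B); "x is an A" as (x, A).
# The inference goes through for any terms whatsoever, nonsense included.

def barbara(universal_premise, minor_premise):
    """Apply 'All A are B; x is an A; therefore x is a B'."""
    a, b = universal_premise      # ("A", "B") for "All A are B"
    x, kind = minor_premise       # ("x", "A") for "x is an A"
    if kind == a:                 # the form matches, so the conclusion follows
        return (x, b)             # "x is a B"
    return None                   # the form doesn't match: nothing follows

# Classic instance:
print(barbara(("man", "mortal"), ("Socrates", "man")))
# -> ('Socrates', 'mortal')

# Nonsense instance -- same form, same validity:
print(barbara(("gibbledwang", "can_bwandwark"), ("Maurice", "gibbledwang")))
# -> ('Maurice', 'can_bwandwark')
```

Nothing in the code knows what a gibbledwang is; the conclusion is driven entirely by the shape of the premises, which is the sense in which the process could run in a brain (or anywhere else) without the terms carrying any experiential content.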
Hi Michael, unfortunately it seems that you’ve missed the point of my two arguments. Your type of mind-to-brain reductionism is very crude, and nobody really defends it in quite those terms. Those types of simple reductions were rejected by both philosophers and psychologists early in the last century. If there’s anything going on in my brain when I intend something, it has nothing to do with the nature of intention as a cognitive state. People had been using psychological concepts for ages without any knowledge of the brain. Moreover, intentional states do not refer to inner experiences. There’s no consistent phenomenology of believing or intending. Whatever we feel inside is just an accompanying phenomenon with no intrinsic connection to the nature of the cognitive state. This argument was made famous by the later Wittgenstein in his Philosophical Investigations (1953). The same argument can be run with regard to brain states (as Peter Hacker and Hans-Johann Glock do). So maybe there are neurological accompaniments to mental states that have a distinctive phenomenology, but our concepts of intentions and beliefs are not concepts of specific inner experiences. All in all, as I said before, rather than running to the lab to perform misguided experiments, neuroscientists should just get clear about the concepts they work with, and maybe read a book on the history of the philosophy of mind. There’s still a lot of armchair work to be done.
If there’s anything going on in my brain when I intend something, it has nothing to do with the nature of intention as a cognitive state.
So how would you take it if someone had put a metal pin in your brain, and they say you have the intention to go to the library, and that this button will block that impulse. You see them press the button and… you don’t really feel like going to the library anymore after all. Then they say another press removes the impulse blocker, they press it, and… the library seems like a good idea again?
You’d say that’s all because you’d decided it and that pin didn’t matter, it was just affecting your brain and not the real you?
The closest real-world account I have of the above is someone who had had their corpus callosum severed (a horrific act, in my opinion, in which the connection between the two halves of the brain is cut). They fitted him with blinkers that stopped him from seeing both sides of the room at once. On one side a scientist held up a sign saying ‘please stand up and approach me’. He did. When they asked him why he did that, he said he wanted a coke. That is, he confabulated a personal volition on the matter.
This is in reply to Michael and Callan. I’m not saying that there are no important things biology and neuroscience can teach us about the mind. Just that we have to be clear about how psychological notions work in our language before we engage in any meaningful experimental or scientific work.
About the example Callan uses, of someone manipulating my intention to go to the library by controlling a pin in my brain: here again, the concept of intention is different from the concept of desire. An intention is something over and above a simple want. I may want to have a beer but form no intention to have one. Intention also involves the capacity to plan ahead and to use instrumental reasoning based on your beliefs. So even if I accept, for the sake of argument, that someone can control my appetites and wants, that still doesn’t give them control over what I intend. I can still intend to go to the library even if I don’t feel like it. I may have a strong sense of duty and a dislike of breaking rules.
Maybe it appears easier to map desires onto brain states because they have a more prominent experiential aspect. But as contentful cognitive states, they are still defined by their relations with intentions, beliefs, hopes, and so on. For instance, maybe you can trigger a state in my brain which leads me to open a can of beer, and decide that that’s the desire for beer. But later on, when you trigger the state again, I don’t get up to have a beer. Is it because I think I’m out of beer? Or was the desire you triggered just for one beer as opposed to many? Or maybe the desire is not strong enough to lead to action? Or maybe I get up to have a beer when another area of my brain lights up. So there’s a lot of guesswork and interpretation involved. And the observer has to have a framework of interpretation and a Davidsonian-type principle of charity which, in fact, involves, implicitly or explicitly, the whole background of folk psychology.
Axl,
So you’d assume the experiment won’t really get at you – it just turns desires up or down, but doesn’t get at the other thing – intent?
Could you give me an example of going to get or do something – a beer, the library, an epinephrine needle, etc. – that doesn’t just become ‘that was just a desire’?
I mean, if they all just become desires then that’s my point made anyway.
Right now it seems you will turn any example of an intent into an example of a desire so as to make your point. You could do that forever, but it doesn’t really address an example of an intent that was controlled by a needle and wire. If you don’t accept any of my or Michael’s examples as involving intent, can you give an example where you would say ‘ah, that’s not just desire involved, that’s intent, right there!’?
I did not tag my last remark as a reply to you and I should have, so just to ensure you were aware of it:
I agree, Axl. My sort of reductionism is crude and no longer popular (if it ever was), but leaving aside Wittgenstein, Hacker and Glock, what do you think? Do minds or person/agents exist independent of the bodies/brains in which they are incarnated? Can minds or person/agents do things or be things for which there are no neurological/biological correlates? Crude and unpopular does not necessarily equal wrong. I asked you a series of yes-or-no questions here and in my previous remarks. You are of course under no obligation (moral, philosophical or otherwise) to respond with yes-or-no answers but I have found that making stark choices often helps me to clarify my thinking.
“But, other than that, this fact is not philosophically illuminating, since we can easily imagine making attributions of mental states to agents who don’t in fact have brains.”
The problem isn’t that we can easily imagine attributing schematic models of attentional awareness and mental states to physical objects like trees, puppets and people. The problem is that folks also imagine attributing agency and mental states to disembodied minds which are themselves nothing more than imagined models of brainless, non-physical attribution. Then, they claim to introspect this attribution-of-attribution as the special nature of human reflexivity which can’t be naturalized like trees and puppets.
When asking if someone believes disembodied X exists, it helps to know whether they define existence as a condition or a property. Otherwise, an intentionalist (especially an equivocating theist) might have you chasing your own tail discussing the existence of existence without even realizing why you’ve gotten dizzy.
Aboutness: The something it is like to be about something.
It pains me that even Rorty died an apostate to normativity after selling out to Davidson & Ramberg’s transcendental vocabulary of intentional states. The only place I can find something to read that doesn’t end up being haunted by one sneaky ghost or another seems to be Three Pound Brain.