Dennett’s Black Boxes (Or, Meaning Naturalized)
by rsbakker
“Dennett’s basic insight is that there are under-explored possibilities implicit in contemporary scientific ideas about human nature that are, for various well understood reasons, difficult for brains like ours to grasp. However, there is a familiar remedy for this situation: as our species has done throughout its history when restrained by the cognitive limitations of the human brain, the solution is to engineer new cognitive tools that enable us to transcend these limitations. ”
—T. W. Zawidzki, “As close to the definitive Dennett as we’re going to get.”
So the challenge confronting cognitive science, as I see it, is to find some kind of theoretical lingua franca, a way to understand different research paradigms relative to one another. This is the function that Darwin’s theory of evolution plays in the biological sciences, that of a common star chart, a way for myriad disciplines to chart their courses vis-à-vis one another.
Taking a cognitive version of ‘modern synthesis’ as the challenge, you can read Dennett’s “Two Black Boxes: a Fable” as an argument against the need for such a synthesis. What I would like to show is the way his fable can be carved along different joints to reach a far different conclusion. Beguiled by his own simplifications, Dennett trips into the same cognitive ‘crash space’ that has trapped traditional speculation on the nature of cognition more generally, fooling him into asserting explanatory limits that are apparent only.
Dennett’s fable tells the story (originally found in Darwin’s Dangerous Idea, 412-27) of a group of researchers stranded with two black boxes, each containing a supercomputer with a database of ‘true facts’ about the world, one in English, the other in Swedish. One box has two buttons labeled alpha and beta, while the second box has three lights coloured yellow, red, and green. Unbeknownst to the researchers, the button box simply transmits a true statement from its supercomputer when the alpha button is pushed, which the other supercomputer acknowledges by lighting the red bulb for agreement, and a false statement when the beta button is pushed, which the bulb box acknowledges by lighting the green bulb for disagreement. The yellow bulb illuminates only when the bulb box can make no sense of the transmission, which is always the case when the researchers disconnect the boxes and, being entirely ignorant of any of these details, substitute signals of their own.
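The setup is mechanically trivial, which is part of the fable’s point. A toy reconstruction makes the manifest regularity explicit (the statement lists and function names here are my own illustration, not Dennett’s):

```python
import random

# Hypothetical stand-ins for the supercomputers' databases of 'true facts'.
TRUE_FACTS = ["snow is white", "grass is green"]
FALSE_CLAIMS = ["snow is green", "grass is white"]

def button_box(button):
    """Transmit a true statement for alpha, a false one for beta."""
    if button == "alpha":
        return random.choice(TRUE_FACTS)
    if button == "beta":
        return random.choice(FALSE_CLAIMS)
    raise ValueError("unknown button")

def bulb_box(signal):
    """Light red for agreement, green for disagreement, yellow for nonsense."""
    if signal in TRUE_FACTS:
        return "red"
    if signal in FALSE_CLAIMS:
        return "green"
    return "yellow"  # the researchers' substituted signals always land here

# The manifest regularity the researchers observe:
assert bulb_box(button_box("alpha")) == "red"
assert bulb_box(button_box("beta")) == "green"
assert bulb_box("random interference") == "yellow"
```

The sketch is transparent to us only because we hold both lists: the regularity is trivial once you share the boxes’ background. Inspecting the mechanism alone reveals the branching, but not why the statements partition the way they do.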
The intuitive power of the fable turns on the ignorance of the researchers, who begin by noting the manifest relations above, how pushing alpha illuminates red, pushing beta illuminates green, and how interfering with the signal between the boxes invariably illuminates yellow. Until the two hackers who built the supercomputers arrive, they have no way of explaining why the three actions—alpha pushing, beta pushing, and signal interfering—illuminate the lights they do. Even when they crack open the boxes and begin reverse engineering the supercomputers within, they find themselves no closer to solving the problem. This is what makes their ignorance so striking: not even the sustained, systematic application of mechanical cognition paradigmatic of science can solve the problem. Certainly a mechanical account of all the downstream consequences of pushing alpha or beta or interfering with the signal is possible, but this inevitably cumbersome account nevertheless fails to explain the significance of what is going on.
Dennett’s black boxes, in other words, are actually made of glass. They can be cracked open and mechanically understood. It’s their communication that remains inscrutable, the fact that no matter what resources the researchers throw at the problem, they have no way of knowing what is being communicated. The only way to do this, Dennett wants to argue, is to adopt the ‘intentional stance.’ This is exactly what Al and Bo, the two hackers responsible for designing and building the black boxes, provide when they finally let the researchers in on their game.
Now Dennett argues that the explanatory problem is the same whether or not the hackers simply hide themselves in the black boxes, Al in one and Bo in the other, but you don’t have to buy into the mythical distinction between derived and original intentionality to see that this simply cannot be the case. The fact that the hackers are required to resolve the research conundrum pretty clearly suggests they cannot simply be swapped out with their machines. As soon as the researchers crack open the boxes and find two human beings behind the communication, the whole nature of the research enterprise is radically transformed, much as it is when the hackers show up to explain their ‘philosophical toy.’
This underscores a crucial point: Only the fact that Al and Bo share a vast background of contingencies with the researchers allows for the ‘semantic demystification’ of the signals passing between the boxes. If anything, cognitive ecology is the real black box at work in this fable. If Al and Bo had been aliens, their appearance would have simply constituted an extension of the problem. As it is, they deliver a powerful, but ultimately heuristic, understanding of what the two boxes are doing. They provide, in other words, a black box understanding of the signals passing between our two glass boxes.
The key feature of heuristic cognition is evinced in the now widely cited gaze heuristic, the way fielders catch fly balls by running so as to keep the ball fixed in their visual field. The most economical way to catch pop flies isn’t to calculate angles and velocities but to simply ‘lock onto’ the target, orient locomotion to maintain its visual position, and let the ball guide you in. Heuristic cognition solves problems not via modelling systems, but via correlation, by comporting us to cues, features systematically correlated to the systems requiring solution. IIR heat-seeking missiles, for instance, need understand nothing of the targets they track and destroy. Heuristic cognition allows us to solve environmental systems (including ourselves) without the need to model those systems. It enables, in other words, the solution of environmental black boxes, systems possessing unknown causal structures, via known environmental regularities correlated to those structures.
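The gaze heuristic reduces to a one-line control rule: move so that the single cue, the ball’s bearing in the visual field, stops drifting. The following one-dimensional caricature (relative displacement stands in for visual angle; the gain and all numbers are illustrative, not drawn from any fielding model) shows how correcting the cue alone suffices, with no trajectory model anywhere:

```python
def gaze_step(fielder_pos, ball_pos, locked_bearing, gain=0.8):
    """One control step of a gaze-style heuristic.

    The fielder never models the ball's physics; it only moves to
    cancel drift in a single cue: the ball's position relative to
    the bearing 'locked' at the start of the run.
    """
    drift = (ball_pos - fielder_pos) - locked_bearing
    return fielder_pos + gain * drift  # step so the cue stops drifting

# Correcting the cue alone converges the fielder onto the ball:
pos, ball = 0.0, 10.0
for _ in range(20):
    pos = gaze_step(pos, ball, locked_bearing=2.0)
# pos is now ~8.0: the fielder holds the ball at the locked bearing
```

The design point is the one in the text: nothing in `gaze_step` represents velocity, gravity, or trajectory. The solver is comported to a correlate, and the correlate does the work.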
This is why Al and Bo’s revelation has the effect of mooting most all of the work the researchers had done thus far. The boxes might as well be black, given the heuristic nature of their explanation. The arrival of the hackers provides a black box (homuncular) ‘glassing’ of the communication between the two boxes, a way to understand what they are doing that cannot be mechanically decomposed. How? By identifying the relevant cues for the researchers, thereby plugging them into the wider cognitive ecology of which they and the machines are a part.
The communication between the boxes is opaque to the researchers, even when the boxes are transparent, because it is keyed to the hackers, who belong to the same cognitive ecology as the researchers—only unbeknownst to the researchers. As soon as they let the researchers in on their secret—clue (or ‘cue’) them in—the communication becomes entirely transparent. What the boxes are communicating becomes crystal clear because it turns out they were playing the same game with the same equipment in the same arena all along.
Now what Dennett would have you believe is that ‘understanding the communication’ is exhausted by taking the intentional stance, that the problem of what the machines are communicating is solved as far as it needs to be solved. Sure, there is a vast, microcausal story to be told (the glass box one), but it proves otiose. The artificiality of the fable facilitates this sense: the machines, after all, were designed to compare true or false claims. This generates the sense of some insuperable gulf segregating the two forms of cognition. One second the communication was utterly inscrutable, and the next, Presto! it’s transparent.
“The debate went on for years,” Dennett concludes, “but the mystery with which it began was solved” (84). This seems obvious, until one asks whether plugging the communication into our own intentional ecology answers our original question. If the question is, ‘What do the three lights mean?’ then of course the question is answered, as well it should be, given the question amounts to, ‘How do the three lights plug into the cognitive ecology of human meaning?’ If the question is, ‘What are the mechanics of the three lights, such that they mean?’ then the utility of intentional cognition simply provides more data. The mystery of the meaning of the communication is dissolved, sure, but the problem of relating this meaning to the machinery remains.
What Dennett is attempting to provide with this analogy is a version of ‘radical interpretation,’ an instance that strips away our preconceptions, and forces us to consider the problem of meaning from ‘conceptual scratch,’ you might say. To see the way his fable is loaded, you need only divorce the machines from the human cognitive ecology framing them. Make them alien black-cum-glass boxes and suddenly mechanical cognition is all our researchers have—all they can hope to have. If Dennett’s conclusions vis-à-vis our human black-cum-glass boxes are warranted, then our researchers might as well give up before they begin, “because there really is no substitute for semantic or intentional predicates when it comes to specifying the property in a compact, generative, explanatory way” (84). Since we don’t share the same cognitive ecology as the aliens, their cues will make no implicit or homuncular sense to us at all. Even if we could pick those cues out, we would have no way of plugging them into the requisite system of correlations, the cognitive ecology of human meaning. Absent homuncular purchase, what the alien machines are communicating would remain inscrutable—if Dennett is to be believed.
Dennett sees this thought experiment as a decisive rebuttal to those critics who think his position entails semantic epiphenomenalism, the notion that intentional posits are causally inert. Not only does he think the intentional stance answers the researchers’ primary question, he thinks it does so in a manner compatible (if not consilient) with causal explanation. Truthhood can cause things to happen:
“the main point of the example of the Two Black Boxes is to demonstrate the need for a concept of causation that is (1) cordial to higher-level causal understanding distinct from an understanding of the microcausal story, and (2) ordinary enough in any case, especially in scientific contexts.” “With a Little Help From my Friends,” Dennett’s Philosophy: A Comprehensive Assessment, 357
The moral of the fable, in other words, isn’t so much intentional as it is causal, to show how meaning-talk is indispensible to a certain crucial ‘high level’ kind of causal explanation. He continues:
“With regard to (1), let me reemphasize the key feature of the example: The scientists can explain each and every instance with no residual mystery at all; but there is a generalization of obviously causal import that they are utterly baffled by until they hit upon the right higher-level perspective.” 357
Everything, of course, depends on what ‘hitting upon the right higher level perspective’ means. The fact is, after all, causal cognition funds explanation across all ‘levels,’ and not simply those involving microstates. The issue, then, isn’t simply one of ‘levels.’ We shall return to this point below.
With regard to (2), the need for an ‘ordinary enough’ concept of cause, he points out the sciences are replete with examples of intentional posits figuring in otherwise causal explanations:
“it is only via … rationality considerations that one can identify or single out beliefs and desires, and this forces the theorist to adopt a higher level than the physical level of explanation on its own. This level crossing is not peculiar to the intentional stance. It is the life-blood of science. If a blush can be used as an embarrassment-detector, other effects can be monitored in a lie detector.” 358
Not only does the intentional stance provide a causally relevant result, it does so, he is convinced, in a way that science utilizes all the time. In fact, he thinks this hybrid intentional/causal level is forced on the theorist, something which need cause no concern because this is simply the cost of doing scientific business.
Again, the question comes down to what ‘higher level of causal understanding’ amounts to. Dennett has no way of tackling this question because he has no genuinely naturalistic theory of intentional cognition. His solution is homuncular—and self-consciously so. The problem is that homuncular solvers can only take us so far in certain circumstances. Once we take them on as explanatory primitives—the way he does with the intentional stance—we’re articulating a theory that inherits those same limits. If we confuse that theory for something more than a homuncular solver, the perennial temptation (given neglect) will be to confuse heuristic limits for general ones—to run afoul of the ‘only-game-in-town effect.’ In fact, I think Dennett is tripping over one of his own pet peeves here, confusing what amounts to a failure of imagination with necessity (Consciousness Explained, 401).
Heuristic cognition, as Dennett claims, is the ‘life-blood of science.’ But this radically understates the matter. Given the difficulties involved in the isolation of causes, we often settle for correlations, cues reliably linked to the systems requiring solution. In fact, correlations are the only source of information humans have, evolved and learned sensitivities to effects systematically correlated to those environmental systems (including ourselves) relevant to reproduction. Human beings, like all other living organisms, are shallow information consumers, sensory cherry pickers, bent on deriving as much behaviour from as little information as possible (and we are presently hellbent on creating tools that can do the same).
Humans are encircled, engulfed, by the inverse problem, the problem of isolating causes from effects. We only have access to so much, and we only have so much capacity to derive behaviour from that access (behaviour which in turn leverages capacity). Since the kinds of problems we face outrun access, and since those problems are wildly disparate, not all access is equal. ‘Isolating causes,’ it turns out, means different things for different kinds of problem solving.
Information access, in fact, divides cognition into two distinct families. On the one hand we have what might be called source sensitive cognition, where physical (high-dimensional) constraints can be identified, and on the other we have source insensitive cognition, where they cannot.
Since every cause is an effect, and every effect is a cause, explaining natural phenomena as effects always raises the question of further causes. Source sensitive cognition turns on access to the causal world, and to this extent, remains perpetually open to that world, and thus, to the prospect of more information. This is why it possesses such wide environmental applicability: there are always more sources to be investigated. These may not be immediately obvious to us—think of visible versus invisible light—but they exist nonetheless, which is why once the application of source sensitivity became scientifically institutionalized, hunting sources became a matter of overcoming our ancestral sensory bottlenecks.
Since every natural phenomenon has natural constraints, explaining natural phenomena in terms of something other than natural constraints entails neglect of natural constraints. Source insensitive cognition is always a form of heuristic cognition, a system adapted to the solution of systems absent access to what actually makes them tick. Source insensitive cognition exploits cues, accessible information invisibly yet sufficiently correlated to the systems requiring solution to reliably solve those systems. As the distillation of specific, high-impact ancestral problems, source insensitive cognition is domain-specific, a way to cope with systems that cannot be effectively cognized any other way.
(AI approaches turning on recurrent neural networks provide an excellent ex situ example of the indispensability, the efficacy, and the limitations of source insensitive (cue correlative) cognition (see, “On the Interpretation of Artificial Souls“). Andrei Cimpian, Klaus Fiedler, and the work of the Adaptive Behaviour and Cognition Research Group more generally are providing, I think, an evolving empirical picture of source insensitive cognition in humans, albeit, absent the global theoretical framework provided here.)
Now then, what Dennett is claiming is first, that instances of source insensitive cognition can serve source sensitive cognition, and second, that such instances fulfill our explanatory needs as far as they need to be fulfilled. What triggers the red light? The communication of a true claim from the other machine.
Can instances of source insensitive cognition serve source sensitive cognition (or vice versa)? Can there be such a thing as source insensitive/source sensitive hybrid cognition? It certainly seems that way, given how we cobble the two modes together both in science and everyday life. Narrative cognition, the human ability to cognize (and communicate) human action in context, is pretty clearly predicated on this hybridization. Dennett is clearly right to insist that certain forms of source insensitive cognition can serve certain forms of source sensitive cognition.
The devil is in the details. We know homuncular forms of source insensitive cognition, for instance, don’t serve the ‘hard’ sciences all that well. The reason for this is clear: source insensitive cognition is the mode we resort to when information regarding actual physical constraints isn’t available. Source insensitive idioms are components of wide correlative systems, cue-based cognition. The posits they employ cut no physical joints.
This means that physically speaking, truth causes nothing, because physically speaking, ‘truth’ does not so much refer to ‘real patterns’ in the natural world as participate in them. Truth is at best a metaphorical causer of things, a kind of fetish when thematized, a mere component of our communicative gear otherwise. This, of course, made no difference whatsoever to our ancestors, who scarce had any way of distinguishing source sensitive from source insensitive cognition. For them, a cause was a cause was a cause: the kinds of problems they faced required no distinction to be economically resolved. The cobble was at once manifest and mandatory. Metaphorical causes suited their needs no less than physical causes did. Since shallow information neglect entails ignorance of shallow information neglect—since insensitivity begets insensitivity to insensitivity—what we see becomes all there is. The lack of distinctions cues apparent identity (see, “On Alien Philosophy,” The Journal of Consciousness Studies (forthcoming)).
The crucial thing to keep in mind is that our ancestors, as shallow information consumers, required nothing more. The source sensitive/source insensitive cobble they possessed was the source sensitive/source insensitive cobble their ancestors required. Things only become problematic as more and more ancestrally unprecedented—or ‘deep’— information finds its way into our shallow information ambit. Novel information begets novel distinctions, and absolutely nothing guarantees the compatibility of those distinctions with intuitions adapted to shallow information ecologies.
In fact, we should expect any number of problems will arise once we cognize the distinction between source sensitive causes and source insensitive causes. Why should some causes so effortlessly double as effects, while other causes absolutely refuse? Since all our metacognitive capacities are (as a matter of computational necessity) source insensitive capacities, a suite of heuristic devices adapted to practical problem ecologies, it should come as no surprise that our ancestors found themselves baffled. How is source insensitive reflection on the distinction between source sensitive and source insensitive cognition supposed to uncover the source of the distinction? Obviously, it cannot, yet precisely because these tools are shallow information tools, our ancestors had no way of cognizing them as such. Given the power of source insensitive cognition and our unparalleled capacity for cognitive improvisation, it should come as no surprise that they eventually found ways to experimentally regiment that power, apparently guaranteeing the reality of various source insensitive posits. They found themselves in a classic cognitive crash space, duped into misapplying the same tools out of school over and over again simply because they had no way (short of exhaustion, perhaps) of cognizing the limits of those tools.
And here we stand with one foot in and one foot out of our ancestral shallow information ecologies. In countless ways both everyday and scientific we still rely upon the homuncular cobble, we still tell narratives. In numerous other ways, mostly scientific, we assiduously guard against inadvertently tripping back into the cobble, applying source insensitive cognition to a question of sources.
Dennett, ever the master of artful emphasis, focuses on the cobble, pumping the ancestral intuition of identity. He thinks the answer here is to simply shrug our shoulders. Because he takes stances as his explanatory primitives, his understanding of source sensitive and source insensitive modes of cognition remains an intentional (or source insensitive) one. And to this extent, he remains caught upon the bourne of traditional philosophical crash space, famously calling out homuncularism on the one side and ‘greedy reductionism’ on the other.
But as much as I applaud the former charge, I think the latter is clearly an artifact of confusing the limits of his theoretical approach with the way things are. The problem is that for Dennett, the difference between using meaning-talk and using cause-talk isn’t the difference between using a stance (the intentional stance) and using something other than a stance. Sometimes the intentional stance suits our needs, and sometimes the physical stance delivers. Given his reliance on source insensitive primitives—stances—to theorize source sensitive and source insensitive cognition, the question of their relation to each other also devolves upon source insensitive cognition. Confronted with a choice between two distinct homuncular modes of cognition, shrugging our shoulders is pretty much all that we can do, outside, that is, extolling their relative pragmatic virtues.
Source sensitive cognition, on Dennett’s account, is best understood via source insensitive cognition (the intentional stance) as a form of source insensitive cognition (the ‘physical stance’). As should be clear, this not only sets the explanatory bar too low, it confounds the attempt to understand the kinds of cognitive systems involved outright. We evolved intentional cognition as a means of solving systems absent information regarding their nature. The idea then—the idea that has animated philosophical discourse on the soul since the beginning—that we can use intentional cognition to solve the nature of cognition generally is plainly mistaken. In this sense, Intentional Systems Theory is an artifact of the very confusion that has plagued humanity’s attempt to understand itself all along: the undying assumption that source insensitive cognition can solve the nature of cognition.
What do Dennett’s two black boxes ultimately illuminate? When two machines functionally embedded within the wide correlative system anchoring human source insensitive cognition exhibit no cues to this effect, human source sensitive cognition has a devil of a time understanding even the simplest behaviours. It finds itself confronted by the very intractability that necessitated the evolution of source insensitive systems in the first place. As soon as those cues are provided, what was intractable for source sensitive cognition suddenly becomes effortless for source insensitive cognition. That shallow environmental understanding is ‘all we need’ if explaining the behaviour for shallow environmental purposes happens to be all we want. Typically, however, scientists want the ‘deepest’ or highest dimensional answers they can find, in which case, such a solution does nothing more than provide data.
Once again, consider how much the researchers would learn were they to glass the black boxes and find the two hackers inside of them. Finding them would immediately plug the communication into the wide correlative system underwriting human source insensitive cognition. The researchers would suddenly find themselves, their own source insensitive cognitive systems, potential components of the system under examination. Solving the signal would become an anthropological matter involving the identification of communicative cues. The signal’s morphology, which had baffled before, would now possess any number of suggestive features. The yellow light, for instance, could be quickly identified as signalling a miscommunication. The reason their interference invariably illuminated it would be instantly plain: they were impinging on signals belonging to some wide correlative system. Given the binary nature of the two lights and given the binary nature of truth and falsehood, the researchers, it seems safe to suppose, would have a fair chance of advancing the correct hypothesis, at least.
This is significant because source sensitive idioms do generalize to the intentional explanatory scale—the issue of free will wouldn’t be such a conceptual crash space otherwise! ‘Dispositions’ are the typical alternative offered in philosophy, but in fact, any medicalization of human behaviour exemplifies the effectiveness of biomechanical idioms at the intentional level of description (something Dennett recognizes at various points in his oeuvre (as in “Mechanism and Responsibility”) yet seems to ignore when making arguments like these). In fact, the very idiom deployed here demonstrates the degree to which these issues can be removed from the intentional domain.
The degree to which meaning can be genuinely naturalized.
We are bathed in consequences. Cognizing causes is more expensive than cognizing correlations, so we evolved the ability to cognize the causes that count, and to leave the rest to correlations. Outside the physics of our immediate surroundings, we dwell in a correlative fog, one that thins or deepens, sometimes radically, depending on the physical complexity of the systems engaged. Thus, what Gerd Gigerenzer calls the ‘adaptive toolbox,’ the wide array of heuristic devices solving via correlations alone. Dennett’s ‘intentional stance’ is far better understood as a collection of these tools, particularly those involving social cognition, our ability to solve for others or for ourselves. Rather than settling for any homuncular ‘attitude taking’ (or ‘rule following’), we can get to the business of isolating devices and identifying heuristics and their ‘application conditions,’ understanding both how they work, where they work, and the ways they go wrong.
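The entries in Gigerenzer’s toolbox are concrete algorithms with specifiable application conditions, not attitudes. His ‘take-the-best’ heuristic, for instance, decides a comparison on the single best cue that discriminates and simply ignores everything else. A minimal sketch (the cue names, their ordering, and the city data are invented for illustration):

```python
def take_the_best(obj_a, obj_b, cues):
    """Compare two objects on cues ordered by assumed validity,
    deciding on the first cue that discriminates and ignoring
    all remaining information."""
    for cue in cues:
        a, b = obj_a.get(cue), obj_b.get(cue)
        if a != b:
            return obj_a if a > b else obj_b
    return None  # no cue discriminates: guess or defer

# Which city is larger? Decide from correlated cues, not a causal model.
cues = ["has_team", "is_capital"]  # ordered by (assumed) validity
berlin = {"has_team": 1, "is_capital": 1}
bonn = {"has_team": 0, "is_capital": 0}
assert take_the_best(berlin, bonn, cues) is berlin
```

Nothing here models why cities grow; the heuristic succeeds or fails entirely on how well its cues happen to correlate with the target property in the ecology where it is applied, which is exactly the sense in which application conditions, not ‘attitude taking,’ do the explanatory work.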
Here’s where I get confused, and I want to apologize in advance if this question is naive because I haven’t read enough of your stuff. It seems there are two things you cannot say about a “source sensitive” account: you cannot say we adopt it because “it works,” because that is the norm of heuristic, source insensitive cognition. It’s not good enough because it’s too source insensitive.
But you also cannot say of your meta-account “This is true,” because the norm of “truth” just masks a process that is causal/mechanistic through and through, i.e. it is “source insensitive.” At least, that’s what I take you to be implying above.
So what exactly is the justification for the source sensitive account? What makes it better or truer or whatever third thing we actually CAN say?
I hope the brevity of the question contributes to its clarity rather than the opposite, if not I’ll try to say something more elaborate but I think you must get this question a lot so maybe I don’t have to.
The issue isn’t one of escaping source insensitive cognition outright, but of escaping the confounds arising from its misapplication. Why would saying it ‘works’ count as a misapplication? That’s a far different question, one possessing far more productive answers (though I admit that I could be running afoul of an only-game-in-town effect here).
So there’s no problem saying my account is true, so long as we understand the lay of the cognitive ecology, so to speak.
Granted this causes traditional eyes to roll, but only because they’re not thinking in terms of concrete application. So, to sharpen your apparent tu quoque, on my account it is entirely true to say there is no such thing as truth, because the application of truth talk (within a wider correlative economy) presupposes no ontology of truth. Physically speaking, semantic nihilism is true.
This makes eyes roll because reflection on posits seems to be reflexively ontological. If there is no such thing as truth, how can claims possess the property of being true? The thing to note, however, is how this conditional shifts between cognitive modalities, beginning with a question of what is (which is generally open to source sensitive cognition), then pivoting to an intentional question. It should come as no surprise that we are all but blind to this pivot. The distinction offered here, however, allows one to make a good deal of sense of it and some otherwise deeply perplexing things. In fact (as I argue in the JCS paper), it makes it difficult to understand how things could be any other way. We’re baffled by ourselves for this very reason: we can improvise applications absent any inkling of the limits of those applications.
“The issue isn’t one of escaping source insensitive cognition outright, but of escaping the confounds arising from its misapplication. Why would saying it ‘works’ count as a misapplication?”
I thought that you were contending that the critique of (source in/sensitive) reason cannot be conducted from the standpoint of source insensitive reason. But if I understand you correctly you’re saying that we can adopt the valuative standpoint of source insensitive reason without accepting its source insensitivity? So reason is there to do things that work is one point and what usually works is heuristic shortcuts is a separate point, and we can accept the former without extending the latter?
I’m saying that intentional cognition, as a source insensitive cognitive system, is misapplied to the question of cognition more generally. It just ain’t effective, as I think the history of philosophy amply attests! Is this second order application of intentional cognition to questions of ‘cognitive virtue’ also misapplied? Maybe. I don’t think so because these kinds of ‘What works best’ questions are the very kinds of questions that intentional cognition effectively solves.
It all comes down to the problem-ecology (as Gigerenzer calls them) of the source insensitive shortcut at issue. Practical problems fall within the intentional cognitive wheelhouse, but theoretical problems? Not so much.
Note that this doesn’t mean that ‘better’ is a mysterious ‘property’ of behaviours or things: as soon as you begin talking in these terms you’ve jumped the grammatical shark, and begun theorizing source insensitive components in source sensitive terms.
Think of it in these terms: everyone agrees that cognition (like, say, memory) is far more complicated than it appears upon reflection. All I’m saying is that this complication, as a biological product, is ecological. If it is ecological, then application has to be a primary issue, and misapplication a pressing problem. This is what I’m trying to map out.
The way I’d describe it is that unless there’s a concrete payoff (or perceived potential for one) stemming from a ‘truth’ (usually a resource acquisition), there is no truth. If there’s no payoff, it ceases being truth and becomes ‘truth’.
I think I’d call it ‘Source speculative’ rather than ‘Source sensitive’ – source sensitive is what you think you get, if I understand this post somewhat, when source insensitive thinking is applied against source insensitive thinking. The convincing sense of having plumbed a system to its utter depths, for using that very same system to measure it. I don’t really get the ‘consider how much the researchers would learn’ paragraph otherwise – why would they learn these things about the yellow light, for example? But if they speculate then they may propose the idea that what they are doing is affecting the devices, test that theory and, over a number of tests, find the same result time and again. There’s no sensitivity in that – just speculation, testing and maybe, under the same set of test circumstances, the same result coming up again and again. The real contrast is that without source speculation, they would be perpetually baffled by the yellow light – they’d never consider the idea of themselves as part of the light’s source. To me it’s not a matter of how much the researchers would learn, but how much they wouldn’t learn (ever) if they did not speculate.
Whether it helps communication to present it that way, I dunno, but I found this post a bit harder to follow than usual.
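The fable’s signalling protocol, as the researchers encounter it, can be sketched as a toy simulation (a minimal sketch only; the sample statements, sets, and function names are hypothetical stand-ins, not anything from Dennett’s text): alpha transmits a truth, beta a falsehood, and any transmission the receiving box cannot parse lights the yellow bulb, which is why the researchers’ substituted signals always come up yellow.

```python
# Toy model of Dennett's two-black-boxes fable. All statements and names
# here are hypothetical stand-ins for the fable's supercomputers.

KNOWN_TRUTHS = {"snow is white", "grass is green"}
KNOWN_FALSEHOODS = {"snow is green", "grass is white"}

def button_box(button):
    """Box A: transmit a true statement for alpha, a false one for beta."""
    return "snow is white" if button == "alpha" else "snow is green"

def bulb_box(signal):
    """Box B: red for agreement, green for disagreement, yellow for nonsense."""
    if signal in KNOWN_TRUTHS:
        return "red"     # B agrees: the transmitted claim is true
    if signal in KNOWN_FALSEHOODS:
        return "green"   # B disagrees: a recognized but false claim
    return "yellow"      # B cannot parse the transmission at all

print(bulb_box(button_box("alpha")))   # red
print(bulb_box(button_box("beta")))    # green
# The researchers' home-made signals never parse as statements:
print(bulb_box("0110 arbitrary researcher signal"))  # yellow
```

Nothing in `bulb_box`’s outward behaviour betrays its innards, which is the situation the researchers (and the thread’s disputants) are in: from outside, all they can do is speculate, test, and tally bulb statistics.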
“that we can use intentional cognition to solve the nature of cognition generally is plainly mistaken.”
This may be causing a problem – you might have no real idea about it being mistaken (as much as the other side might have no real idea about their meaning being some kind of thing that cannot be reduced). Have you done some science on it? If intentional cognition isn’t good for solving, how would it be good for identifying mistakes? Surely it’d be inapplicable toward each? And keep in mind that question might not be answerable because it involves an application of intentional cognition itself.
It might be a real problem calling things a mistake – it may just get the tar of one’s interlocutor upon oneself? How can you call them out for being smokers by blowing smoke rings in their face? But how to engage in discourse in light of this rather than be muted in the human communication channels – yeah, fair enough, dunno just yet.
If you weren’t so irretrievably wedded to your idiosyncratic views on explanation and bog-standard physicalist views on questions of objectivity and realism you’d have an easier time of this. You won’t accept any conclusion but the most radical of skepticism, and since this is your point of departure rather than a conclusion you’ll keep spinning your wheels trying to make psychological cake out of “naturalistic” flour.
Related: accusing Dennett of a “homuncular” theory of cognition suggests you should read “Real Patterns” again. This isn’t a mistake he makes; it falls out of your idiosyncratic views on explanation and causation, not to mention the question of realism, but it isn’t a commitment Dennett actually holds.
I’d like to know what’s ‘standard’ about my physicalist views! If this is how you’re reading me then you’ve missed something important.
IST is homuncular by design: you do realize this. Stances provide a (faulty, I’m arguing here) way to map homuncularisms, good and bad. I’ve been told to go back and read RP many, many times now. No one who has suggested I do this has been able to tell me why I need to do this–what it is I’m apparently missing. I’m sure I am. But so far no one has obliged me.
RSB, check yer email
“I’d like to know what’s ‘standard’ about my physicalist views! If this is how you’re reading me then you’ve missed something important.”
Perhaps I have, but your own remarks in this very post state that there is a level of higher-dimensional reality that cognition (as you understand it) cannot access. You take this as grounds to assert that what cognition understands is somehow less-real or less-genuine than the physical realm.
I suppose you take your ecological theory to distinguish you from mainstream physicalism, which it does to a point, but in this explanatory sense it’s no better than the bog-standard dualism of material noumena and psychological phenomena.
“IST is homuncular by design: you do realize this.”
You don’t get to make this claim by bare stipulation. Unfortunately Dennett does not say this. Dennett takes great care to make sure he doesn’t say this.
The choice you pose here is between what Dennett has actually written and what you stipulate that he does. You’ve made no argument for this point. Do you see why this is not convincing?
“I’ve been told to go back and read RP many, many times now. No one who has suggested I do this has been able to tell me why I need to do this–what it is I’m apparently missing.”
You’ve apparently come away from “Real Patterns” with the view that Dennett is postulating inner states belonging to a cohesive ego. That’s a strong hint that you missed something crucial.
Another important point in that paper is his view on what a realism might consist in, which departs from your own bog-standard views on physicalistic explanation and questions of realism.
If you have some other sense of “homuncular” in mind, by all means elaborate. But Dennett is quite clear that the intentional stance is adopted “from the outside”, in the spirit of a Quinean and Davidsonian theorist of behavior. There are no states posited “within”, only explanations of behavior. Explanations which, as Dennett points out, are quite useful even within a naturalistic attitude — and here again we are back at the limits of your own physicalism and idiosyncratic views on explanation.
(Also, Dennett elaborates on the very point you tried to make against him in this blog post. Have a look at “Three kinds of intentional psychology” in TIS, especially the section on subpersonal cognitive processing and its relation to psychology. On pp.64-65 he addresses this exact issue.)
“I suppose you take your ecological theory to distinguish you from mainstream physicalism, which it does to a point, but in this explanatory sense it’s no better than the bog-standard dualism of material noumena and psychological phenomena.”
The big temptation is to feed my approach into the very misapplications it warns against, and this is the one I regard as the most fundamental (to the philosophical tradition at least). Think about it: we have no direct way of sourcing the information broadcast for (potential) conscious report, so we require some indirect way, a mode that systematically neglects the actual mechanics of perception, ideation, and communication. ‘About-talk’ accomplishes this, ‘quasi-sourcing’ percepts, thoughts, and claims in a source insensitive manner. If you confuse the heuristic system underwriting about-talk as ontologically basic (which metacognition seems to do reflexively) then you end up with dichotomous confounds like the ‘subject-object paradigm.’ For all its practical utility, ‘about talk’ is theoretically disastrous, simply because it’s a tool for handling our natural continuity while remaining oblivious to it.
So to be clear, there is no subject/object dichotomy on my account, only nature, inside and out.
“You don’t get to make this claim by bare stipulation. Unfortunately Dennett does not say this. Dennett takes great care to make sure he doesn’t say this.”
My guess, then, is that you’re assuming his critique of the Cartesian theatre is a critique against homuncularism more generally, because I really don’t see this as all that controversial. I guess what I’m saying could be construed along the lines of Bennett and Hacker’s critique of Dennett… But I suspect the problem is simply terminological: On my view, effective homuncular explanations are simply source insensitive explanations, which is to say, explanations that leverage our position within a system. This is as good a coarse-grained way to understand the intentional stance as any, I think.
“You’ve apparently come away from “Real Patterns” with the view that Dennett is postulating inner states belonging to a cohesive ego. That’s a strong hint that you missed something crucial.”
As my first reply makes clear (hopefully), nothing could be further from the case! I see Dennett, like Davidson, standing outside the traditional realism/anti-realism debate. And I see that you do indeed have a particular notion of homuncularism in mind–a mentalistic one, no less. Small wonder you found my claim confusing! But precisely because IST has so little in the way of theoretical resources, I think he has no means of naturalistically accounting for heuristic cognition, and so no real handle on the applications/misapplications of ‘real-talk.’
I’ve read “Three Kinds,” but I’ll definitely take another look.
“Think about it: we have no direct way of sourcing the information broadcast for (potential) conscious report, so we require some indirect way, a mode that systematically neglects the actual mechanics of perception, ideation, and communication.”
This is the exact level of the challenge I’m posing for you when I say your view is bog-standard physicalism. I don’t just mean this as an ontological claim. It’s primarily a methodological issue that stems right from your idiosyncratic views on explanation itself.
You’ve introduced a distinction between genuine material reality and the error-ridden apparatus of perception and cognition at the level of the method. That’s the concern here, not any metaphysical detritus per se.
The problem here is whether you’re entitled to make that move, and there’s little reason to think that you are. The main reason is that you have no choice but to appeal to your own explanatory framework to justify your methodology. Alright; but the difficulty you keep butting heads against is just why should this convince anyone else that the austerity you want to impose matters.
“On my view, effective homuncular explanations are simply source insensitive explanations, which is to say, explanations that leverage our position within a system. This is as good a coarse-grained way to understand the intentional stance as any, I think. “
In this post you define “source insensitive explanations” as “where physical (high-dimensional) constraints can [not] be identified”. The entire case against Dennett, then, comes down to charging him with what amounts to a variety of mysticism. If you are right, he is positing entities constructed inappropriately from the flux of information, and thus lacking any “real” traction with how things are. The constraints are inaccessible vis-a-vis the explanation.
Setting aside for the moment that this only reinforces the dualism at the level of explanation (per above), the trouble is that Dennett does not do even this. This is why I encouraged you to read “Real Patterns” again. The interplay between microphysical subpersonal cognitive processing and then the level of human behavior to be explained avoids exactly this difficulty because i) mentalistic entities are not entities at all and ii) Dennett’s ontology (such as it is) expressly allows abstract entities with the proviso that they figure into scientific explanations.
You’re charging him with making an “intentionalist” move, but this only makes sense with your physicalist and explanatory constraints in place. Dennett doesn’t take it that microphysical events exhaust the criteria for “real”; nor does he take it that explanations depend on the exclusive provision of microphysical hypotheses.
What’s strange about your move is that you grant there is a “higher dimensional” ontology at work which the sciences discover. This is fine! What goes awry is your temptation to focus on the microphysical mechanism as exclusive and exhaustive. But a truly naturalistic stance cannot do this; sciences are sciences, and the attempt to enforce some sort of unity of explanation or to extrapolate into metaphysical claims about what does this or that is to go beyond the science into both methodological and metaphysical excess.
“But precisely because IST has so little in the way of theoretical resources, I think he has no means of naturalistically accounting for heuristic cognition, and so no real handle on the applications/misapplications of ‘real-talk.’”
I think quite the opposite: Dennett’s account is if anything richer in explanatory resources. It can appeal not only to cognitive and neurophysiological mechanism, but also the level of psychological explanation, resources in computer science and AI, and still more beside. At this point even Ryle and Quine were happy to allow history and to some extent literature into the range of explanatory resources — and if we’re being good pragmatists, why not?
To put it another way: there’s now an explanatory burden to discharge at the level of method and of explanation itself. It takes philosophy to eliminate a good source of cognitive resources by imposing artificial speculative contrivances about The World. What moves us to adopt the austere account, the one with the most unwieldy predictions? That’s not good science.
“The problem here is whether you’re entitled to make that move, and there’s little reason to think that you are. The main reason is that you have no choice but to appeal to your own explanatory framework to justify your methodology. Alright; but the difficulty you keep butting heads against is just why should this convince anyone else that the austerity you want to impose matters.”
Ultimately the justification has to be abductive, doesn’t it? If it can explain more with less in ways more consilient with the biological sciences, then given our present ignorance and disarray, it’s worth a serious, open-minded look. The problem you raise here is the problem any general theory of cognition faces.
Let’s pan back before things metastasize: I never said IST didn’t have flexibility or myriad naturalistic inputs. I’ve always loved it for that very reason. I said that taking stances as explanatory primitives limits the explanatory power of the theory. There’s no reason I can fathom why we should bother with them if we have ways to go beyond them, let alone understand the liability they represent. Brandom exemplifies the ease with which they can be put to speculative mischief. Stances give a low resolution, mechanically incompatible way of conceptualizing heuristic application in a manner amenable to the shape of cognitive experience–the ‘psychological.’ Heuristic neglect gives a somewhat higher resolution, mechanically compatible version of the same. Instead of seeing an intentional system as any system that can be predicted via the intentional stance, we see an intentional system as a system containing a system that can be predicted via intentional cognition.
If you disagree with this, then you owe me examples. Show me how IST gives a more austere, more nuanced theoretical account of the two black boxes.
“If you disagree with this, then you owe me examples. Show me how IST gives a more austere, more nuanced theoretical account of the two black boxes.”
If you’ll pan back up to the unfortunately format-mangled comment previous to this one, you’ll see that I moved in that direction.
But to reiterate the problem, the question isn’t about austerity, at least not outside of your own view of explanation. That’s a philosophical commitment, one you’ve evidently inherited from the old Hempel model of hypothetico-deductive explanation and a corresponding unity-of-science methodology.
The question itself is, as you rightly note, about explanatory power: so on what abductive grounds do we accept this formulation of the problem:
“Stances give a low resolution, mechanically incompatible way of conceptualizing heuristic application in a manner amenable to the shape of cognitive experience–the ‘psychological.’ Heuristic neglect gives a somewhat higher resolution, mechanically compatible version of the same. Instead of seeing an intentional system as any system that can be predicted via the intentional stance, we see an intentional system as a system containing a system that can be predicted via intentional cognition.”
To even put the problem in this way assumes your account of explanation and the methodologies which follow from it. But where is this stratification coming from? The view that Dennett argues for is pluralistic in explanatory scope. He isn’t denying the microphysical level has pragmatic usefulness; he’s just denying it the sort of explanatory primacy you want to accord it.
The level of psychological explanation figures in here because it allows us to explain more. If we follow you into philosophical reductionism in both the theoretical and ontological scope, what abductive power is gained? You’re asking us to deliberately shrink our cognitive and explanatory resources; Dennett’s telling us we get all of it, and more besides.
That’s the burden you have to meet here.
To be honest, I’m not even really clear what you’re arguing for anymore, CA. I don’t give a damn about ‘reductionism,’ myself, apart from whatever it is scientists do when they reverse-engineer natural processes. Appeals to methodological pluralism are well and fine, but I really don’t understand how they are even relevant to inquiry into deeper accounts. I don’t see how my view shuts down appealing to any of the heuristics effectively used in science–which is to say, isn’t every bit as ‘pluralistic’ as IST. It simply provides a way to understand what these heuristics are, how they function, and why they trip us up the way they do. Your counter-argument, as far as I can tell, is that we’re better off… not knowing this? Isn’t there always a deeper story, always something we’re missing? If it turns out that functional analyses in psychology, for instance, can be understood as neither mentalistic nor mere ‘mechanism sketches,’ but as experimentally regimented applications of source insensitive cognition, then understanding as much will allow us to refine these applications, minimize confounds, and retask our theoretical resources, will it not?
Either way, the compatibility of IST with pluralism does not redound to IST’s abductive favour–that I know for sure. You may prefer the county fair, but that doesn’t change the fact that big tents, like evolutionary theory, bring home the receipts. If IST has more explanatory resources (as opposed to compatibilities) then show us with the black boxes. I’ve already made my demonstration.
“Your counter-argument, as far as I can tell, is that we’re better off… not knowing this?”
My point is that we’re better off knowing the systems-theoretical explanation, the neurophysiological explanation, the cognitive-theoretical explanation, the psychological, the sociological, the historical, and on down the list.
Pluralism means pluralism. Explanatory power is enriched when psychology, history, sociology, &c. are part of the story alongside the systems-theoretical account.
You’ve set up the whole problem so that you can’t let go of “heuristic cognition” and a systems-theoretical explanation of such as basic features of the story. Psychology can’t be anything but thin and toothless speculations of hopelessly confused organisms. But this creates more troubles than it solves at the conceptual level; and as an explanation it has no advantages.
Why? Because you’ve got to rig the game from the first move to even get started, both with the “heuristics” explanans and the interpretation of psychology (“cognition”) that you take as the explanandum. Both of these moves already rule out any actual pluralism that isn’t locked in the microphysical basement.
Dennett on the other hand is quite capable of addressing everything you have to say about subpersonal cognitive processes and microphysical mechanisms. But he can do it without sliding into eliminativist incoherence and the methodological solipsism that stems from trying to explain mental life via properties of individual beings. If explanatory power is our standard, Dennett’s got not only the bigger tent — which is really a misdirection anyway; parsimony is the deal-maker only if we take your variety of pluralism which is really not pluralistic — but the more useful story.
Yeah… I’m sorry, but I see these kinds of gestures as institutional rearguard rationalizations, the attempt to transform what is pretty clearly theoretical confusion into theoretical virtue–an argument for ignorance, like ‘irreducibility’ and so on and so on (what’s wrong with admitting we just don’t know, as opposed to asserting certain domains unknowable?). Even within these plural enclaves theorists squabble over their formulations, let alone their explanations! Maybe this is the way it HAS to be, but then maybe not. I just don’t know how one goes about using the speculative presumption of the former to warrant shutting down theoretical explorations of the latter. I just don’t get it. Are you suggesting we dismantle the modern evolutionary synthesis? If not, if you recognize the value of that synthesis, then how can you argue against any other potential synthesis?
I’m not sure how the ‘locking us in the microphysical basement’ notion isn’t a canard: mechanical explanations generalize across levels. It’s part of what makes them so powerful, as well as the reason Dennett makes them his concern in “Mechanism and Responsibility” (which I see as his most prescient paper).
I also don’t get the “Dennett is quite capable” bit: The argument is about IST’s theoretical resources, not Dennett’s more generally.
“You’ve set up the whole problem so that you can’t let go of “heuristic cognition” and a systems-theoretical explanation of such as basic features of the story. Psychology can’t be anything but thin and toothless speculations of hopelessly confused organisms. But this creates more troubles than it solves at the conceptual level; and as an explanation it has no advantages.”
I set this aside because I was wondering if you could expand on the assertions here. Given the conceptual mire of psychology (let alone its ongoing empirical crisis), how would heuristic neglect theory ‘create more trouble than it solves’? In what way does it fail to offer explanatory advantages? It certainly provides an elegant way to explain Dennett’s black boxes (as well as much, much more). How does my explanation, for instance, fall short of Dennett’s IST explanation?
“Yeah… I’m sorry, but I see these kinds of gestures as institutional rearguard rationalizations, the attempt to transform what is pretty clearly theoretical confusion into theoretical virtue–an argument for ignorance, like ‘irreducibility’ and so on and so on (what’s wrong with admitting we just don’t know, as opposed to asserting certain domains unknowable?).”
Two things:
1. Where do you get “an argument for ignorance, like ‘irreducibility’ and so on and so on” out of an express pluralism of method?
2. Your psychological scruples (“sorry, but I see…”) aren’t anyone’s concerns but your own. If you’re going to rest your case on “well I don’t like it”, I expect you’ll have located the source of much of the resistance you claim to encounter.
To wit:
“Even within these plural enclaves theorists squabble over their formulations, let alone their explanations! Maybe this is the way it HAS to be, but then maybe not. I just don’t know how one goes about using the speculative presumption of the former to warrant shutting down theoretical explorations of the latter. I just don’t get it. Are you suggesting we dismantle the modern evolutionary synthesis? If not, if you recognize the value of that synthesis, then how can you argue against any other potential synthesis?”
You say you’re fine with methodological pluralism and aren’t endorsing any variety of physicalistic explanation; and yet here you are rejecting the former and upholding the latter!
Regarding the modern synthesis: as I asked above, how do you get to this — “Are you suggesting we dismantle the modern evolutionary synthesis?” — from the claim that scientific tools and research programs are themselves pluralistic? I don’t have to “dismantle the modern synthesis” to point out that theoretical physics is also useful in explaining other phenomena. Pluralism means pluralism. I’ve made no call to shut down any area of inquiry, rather to place them alongside one another, according to their areas of inquiry, pragmatic usefulness, and perhaps more besides. This is because a good account of explanation itself isn’t shackled with your — as it now appears, only implicit and unrecognized — need to find The Right Level(s) of explanatory unity.
Secondly, as no doubt you’re aware, the modern synthesis itself is (perhaps) undergoing a revision of its own in light of further discoveries in biology and subsequent methodological refinements and the production of new cognitive-explanatory tools. That’s what science does! It progresses, fragments, and pluralizes. You seem to get this on one level, only to retreat into The Right Level thinking. Only you do it inconsistently; basic physics is fine (isn’t it?), but trading up the chain to psychology is a no-no. But no similar hesitations about astrophysics? What about quantitative economics? Perceptual psychology? Theories of learning and child development? Linguistics?
“I set this aside because I was wondering if you could expand on the assertions here. Given the conceptual mire of psychology (let alone its ongoing empirical crisis), how would heuristic neglect theory ‘create more trouble than it solves’? In what way does it fail to offer explanatory advantages? It certainly provides an elegant way to explain Dennett’s black boxes (as well as much, much more). How does my explanation, for instance, fall short of Dennett’s IST explanation?”
By “psychology” I mean the term in its broadest sense, not as constrained to the field of empirical inquiry as such. When I say “creates more trouble than it solves” I mean at the level of utter implausibility by way of carrying on human lives. If psychological “posits” (which is already the wrong way to look at mentalistic language) are empty speculations lacking theoretical power, then it’s one huge mystery as to how human beings communicate and reach agreement on anything.
I don’t disagree with your general point that the intellect and rationality are very much oversold, but the reaction into materialism and the second coming of structuralism via systems theory is very much an over-reaction. That we don’t often, or even usually, reason, think, perceive veridically, etc. etc. is not itself an immediate argument against the fact that we can.
The explanatory advantage of Dennett’s IST is that it keeps psychology in the picture by explaining it in mechanistic terms (as you rightly note), but without eliminating psychology as useful categories. The varieties of pattern that he identifies with psychological talk explain why this talk is useful, even if it doesn’t accurately track the deeper mechanisms at work. That by itself widens the explanatory tent and captures something that your view cleaves off as a fundamental error.
This is all in keeping with Zawidzki’s passage quoted in the OP:
“as our species has done throughout its history when restrained by the cognitive limitations of the human brain, the solution is to engineer new cognitive tools that enable us to transcend these limitations.”
Dennett’s toolbox has a much wider kit of these cognitive tools; you seem quite happy to leave out a range of potent, not to mention important, cognitive tools because they’re not new and shiny, and because they don’t meet your explanatory standards. But the question, as I’ve pressed through this entire exchange, is just what is so important about your explanatory standards? You seem to vacillate between eliminating anything that has so much as a whiff of non-physicalism while nodding your head at the prospects of methodological pluralism. My point is that these are inconsistent, if not quite incommensurable positions on explanation itself.
I’m far from alone in my appraisal of psychology, as I’m sure you well know. The inability to agree on formulations, let alone explanations, is generally a good sign of collective inquiry gone wrong. I don’t know what to say, except that all institutions are prone to recast their vulnerabilities as strengths.
I think psychology labours–rather obviously so–under some kind of profound misapprehension. Now I think it’s fair to say this is the primary claim at issue between us. You believe that the disarray is largely an artifact of the diversity and complexity of our tools and our domains: it’s just the way it’s gotta be because that’s just how it is. This conserves the status quo, provides cover for the disarray and so preserves the interests of the status quo.
And it entails that humanity will never mechanically cognize the plurality of cognitive modes.
I always get suspicious when philosophy kicks shut empirical doors. I don’t know what else to say. You’re very articulate, CA, but I think your case is clearly an apologetic one. Engage me on the details: tell me why my account misses aspects of the two black boxes that IST does not. If you’re right about this high-altitude stuff, then my reading has to go wrong somehow. Show me that.
“When I say “creates more trouble than it solves” I mean at the level of utter implausibility by way of carrying on human lives. If psychological “posits” (which is already the wrong way to look at mentalistic language) are empty speculations lacking theoretical power, then it’s one huge mystery as to how human beings communicate and reach agreement on anything.”
Call it ‘mind talk’ if ‘posits’ make you uncomfortable. I admit, “then it’s one huge mystery as to how human beings communicate and reach agreement on anything” perplexes me simply because it IS one huge mystery as to how human beings communicate and reach agreement on anything. Why would you think otherwise?
The important thing is that seeing mind talk as a component of a larger heuristic ecology the way I do provides a way of understanding mind talk without resorting to mental-talk. I still find that threatening, so I’m not surprised that you do as well. Moving past mind talk is a scary prospect–horrifying even.
The thing to realize is this: the fact that mind talk is a mandatory feature of a wide variety of social reports implies very little about the nature of mind talk. All things being equal, we should expect that we evolved source insensitive modes of cognizing others and ourselves incompatible with deep source sensitive accounts of nature. As components of a source insensitive system, we should expect mind talk to become increasingly unreliable as more source sensitive information saturates our cognitive ecologies.
“The inability to agree on formulations, let alone explanations, is generally a good sign of collective inquiry gone wrong.”
It can be, but as any account of explanation must acknowledge, explanations come to an end somewhere. Past a certain point, explanation becomes inseparable from brute description. You seem to have much more confidence in the power of human enquiry to reach a consensus-based settling point, and one far removed from the “ordinary world” of talk and perception, than I.
I find that more than a little puzzling given your entirely justified skepticism about the prospects for cognition to achieve much of anything beyond the service of pragmatic interests. The motivation for my disagreeing with you at all is exactly because I agree with much of what you say about “the mind”, but the implications of that skepticism also strongly tell against anything like a meaningful “consensus” unless we bring in metaphysical and explanatory-cum-metaphysical premises — exactly what I want to reject!
“Now I think it’s fair to say this is the primary claim at issue between us. You believe that the disarray is largely an artifact of the diversity and complexity of our tools and our domains: it’s just the way it’s gotta be because that’s just how it is. This conserves the status quo, provides cover for the disarray and so preserves the interests of the status quo.”
To a point. I’m certainly conservative in some ways, but only to a point. I don’t see any plausible way to jettison certain basic features of the story, like “there are human beings” and “human beings operate in an environment”. My commitments here are modest: we can get things right with regards to our orientation in “the world”. The latter includes the physical and the social environment.
Obviously we’re keyed into certain features in virtue of our physical and cognitive constitution, but that only reinforces the point: without appealing to prior metaphysical and explanatory premises, what is it that makes the salient features with which we engage less real? The antirealist consequences don’t fall out of any strictly abductive argument. Why can I say this? Because causal-mechanistic explanation has its limits, and is not exhaustively applicable to any and every science. If we take seriously the diversity of the sciences, then this explanatory paradigm isn’t sufficient for the work you’re asking it to do.
“I always get suspicious when philosophy kicks shut empirical doors.”
As I’ve tried to stress, I’ve done just the opposite. I’ve tried to underscore that the empirical doors need to be widened, as far as we can pry them open. You seem to take this as antagonistic to your position; my only point is that we need to admit more, rather than falling into the traps of reductionism and foundationalism.
My point is that we can’t put a priori limitations on explanation, as the deductive-nomological and causal-mechanistic accounts attempt (and largely fail) to do. Explanation is a much wider, more pragmatic concept, and one which can aim at a multiplicity of cognitive goals.
I’m hostile to empiricism, but given what you have to say elsewhere you should be, too.
“Engage me on the details: tell me why my account misses aspects of the two black boxes that IST does not. If you’re right about this high-altitude stuff, then my reading has to go wrong somehow. Show me that.”
The critical point is that it mistakes what Dennett is up to. You’re assimilating him to your view of explanation, but (as I’ve labored to point out) he’s much more aligned with a pragmatist account. He’s happy to grant that there are subpersonal cognitive processes at work in the brain, as with the alien-brand black box pair. But since he’s not pushing a narrow reductionism about the nature and role of mental-talk, it’s entirely consistent to say that these remain meaningful patterns of behavior (“behavior” here wide enough to include verbal utterances, other forms of symbol manipulation and information-exchange). There’s a logical space within the physical order in which these are intelligible to appropriately constructed beings.
Now, is it a surprise that there are conceptual schemes belonging to other kinds of being which we can’t grasp (in your terminology, different cognitive ecologies)? No, not at all. What’s surprising, from an evolutionary perspective, about the fact that different beings “bump into” the world in systematically distinctive ways? Why should we be surprised in the least that intelligent extraterrestrial organisms may well have radically different perspectives? Or, for that matter, any kind of intelligent agent we might happen to build? This isn’t quite a trivial point, but it does come quite easily once we jettison the pretensions of classical epistemology.
Dennett’s view here doesn’t fall into the “intentionalism” you accuse him of because of these points. There’s nothing of the “intentional” in the mechanistic level of explanation. The intentional in the IST is entirely dependent on patterns of activity which characterize intelligent beings. There’s no magical level of mentalistic phenomena here.
The positive point is that your view has no room for this variety of explanation over and above the microphysical level of mechanism. You’ve got an account of heuristics and their function in an organism’s environment. Good; that’s interesting stuff. The trouble is that you can’t account for the understanding and action of the beings at the macro-level, because that account depends on bridging the heuristic cognitive process with the microphysical information-crunching in the brain. This does not fall out of the account of heuristics alone. Yet you take the heuristic account to impugn the reality of both the personal-level description and any output of personal-level beings. If that’s going to stick to Dennett’s IST, it requires some extra work: that there is a coherent homunculus, as you put it in the OP; that this homunculus is “intentional” in the way you attack; and that personal-level outputs are thereby systematically and constitutionally erroneous.
These are not Dennett’s premises. The “homunculus” is a non-starter; there are no mental states, or even a mental point of view, in IST to stand as the object of that charge. Dennett’s view has more in common with Ryle’s behaviorism than any variety of internalist cognitivism, so this charge is simply wrong. The point about systematic error at the personal level is likewise overwrought. Yes, human beings can be misled by all manner of systematic distortions, but error only makes sense within a wider context of correctness. One doesn’t need an introspectively transparent mind to know that “I’m hungry”; one doesn’t need infallible sense-data to know that “the cup on my desk is empty”, or “the hazy shape on the horizon looked like a bird, but I found out it was really a cat”. These are products of organisms bumping into the world; nothing about this becomes mysterious by adding the word “heuristic”. We go wrong in a variety of ways, and more so the farther we get from cases of immediate perception and judgement. But this is only a case for expanding our cognitive tools, not replacing them on the grounds that they’re irretrievably flawed.
The black boxes, even the alien manufactured models, can be explained in terms like this. Maybe they won’t share anything like folk psychology, or if there’s any analog, maybe it will be radically different from ours. We can allow a wide range of divergences without sliding into an unrecoverable global skepticism. There are no “mental states”, sure. Truth? Well, if you mean an inflated metaphysical notion of correspondence, sure, ditch it. But we aren’t talking about that anymore; we’re talking about the basic fact that “here’s a thing which does stuff” and “here’s how it bumps into its environment”. Some transparent, deflationary notion of correctness is at work.
You get some of that, but unfortunately the stuff you leave out — the stuff you write off as a delusional function of “neglect” — is exactly what we need to make sense of the story. Otherwise: what’s significant about the fact that two opaque systems can engage in reciprocal communication? Why even talk about that rather than the patterns of heat exchange or acoustic vibration? The descriptive scheme can’t be entirely divorced from the “person” level, even if there’s lots of neat stuff to learn about the subpersonal. How can we even so much as describe any of this as more than a delusion? The answer is we can’t; and we see this because you routinely have to sneak “truth”, even the trivial deflationary sense, in through the back door.
(I know, you’re going to kick and scream about this, but it’s one of those things you just have to do: either your theory is right and there is some sense of “correctness” in play, or else there is nothing “correct” because everything is interpretation, and in that case who cares? But you clearly think there are regress-stoppers, so why play the game?)
“The important thing is that seeing mind talk as a component of a larger heuristic ecology the way I do provides a way of understanding mind talk without resorting to mental-talk. I still find that threatening, so I’m not surprised that you do as well. Moving past mind talk is a scary prospect–horrifying even.
The thing to realize is this: the fact that mind talk is a mandatory feature of a wide variety of social reports implies very little about the nature of mind talk. All things being equal, we should expect that we evolved source insensitive modes of cognizing others and ourselves incompatible with deep source sensitive accounts of nature. As components of a source insensitive system, we should expect mind talk to become increasingly unreliable as more source sensitive information saturates our cognitive ecologies.”
I think the source of disagreement here is that you’re committed to seeing mind-talk as something like the “theory theory”, whereas (to the extent I agree with Dennett) I see the speculative and problem-solving features of mind-talk as either applied to something else entirely (e.g., there is a “real” level of social explanation constituted by “real patterns” that doesn’t boil down to causal or law-governed mechanisms) or (more radically, and probably unpalatable to you) not concerned with problem solving at all. Either way, the point I’m pressing is that just because we aren’t getting things 100% right by the lights of the microphysical story doesn’t imply or even stand as evidence for the conclusion that we are utterly wrong.
But again this turns on deeper points of disagreement about the nature of explanation, both in terms of how to do the explaining and what we’re explaining. I think Dennett has the leg up here; you clearly see things otherwise.
“I think the source of disagreement here is that you’re committed to seeing mind-talk as something like the “theory theory”, whereas (to the extent I agree with Dennett) I see the speculative and problem-solving features of mind-talk as either applied to something else entirely (e.g., there is a “real” level of social explanation constituted by “real patterns” that doesn’t boil down to causal or law-governed mechanisms) or (more radically, and probably unpalatable to you) not concerned with problem solving at all. Either way, the point I’m pressing is that just because we aren’t getting things 100% right by the lights of the microphysical story doesn’t imply or even stand as evidence for the conclusion that we are utterly wrong.”
You’re still misunderstanding me at a pretty basic level if you think I see ‘mind talk’ in ‘theory theory’ terms, CA. But your misreading strikes me as a natural one for those who’ve yet to grasp the gestalt of my position–it’s probably worthwhile posting on “Real Patterns” simply to block extraneous exits. The funny thing is, you’re actually misreading me in the manner in which intentionalists misread Dennett (or interpretivists more generally). For me, the key to understanding novel positions lies in setting aside critique until one can reliably guess what a position would say regarding a matter; otherwise you find yourself in the Fodorian trap of critiquing Toyotas because their parts can’t make your Dodge run.
Sorry for the opportunistic nature of my responses, CA. It’s been a busy weekend!
“It can be, but as any account of explanation must acknowledge, explanations come to an end somewhere. Past a certain point, explanation becomes inseparable from brute description. You seem to have much more confidence in the power of human enquiry to reach a consensus-based settling point, and one far removed from the “ordinary world” of talk and perception, than I.”
Yes and no. For me, the big driver is simply a pessimistic induction (contra the pessimistic induction in philosophy of science) regarding source insensitive cognition. We once understood almost everything save local physics in source insensitive terms. Science has since progressively scrubbed intentional explanations from the world, but, given the complexities of cognition, found itself crashing at behaviour. Meaning remained the only game in theory town. Cognitive science is a game changer–as pretty much everyone but continentalists seems to realize. The question is how. All things being equal, we should assume our traditional discourses of the soul will fare about as well as traditional discourses of the world: traditional theories of meaning are doomed. The intentionalist, of course, assumes things are not equal, presumably because we have some kind of privileged access to our own functions. (Check out “Back to Square One”.)
More below…
“Obviously we’re keyed into certain features in virtue of our physical and cognitive constitution, but that only reinforces the point: without appealing to prior metaphysical and explanatory premises, what is it that makes the salient features with which we engage less real? The antirealist consequences don’t fall out of any strictly abductive argument. Why can I say this? Because causal-mechanistic explanation has its limits, and is not exhaustively applicable to any and every science. If we take seriously the diversity of the sciences, then this explanatory paradigm isn’t sufficient for the work you’re asking it to do.”
I agree entirely that causal explanation has its limits. I actually think there’s a good chance that source insensitive cognition will progressively displace source sensitive cognition as we farm out more and more ‘pattern detection’ to AI. And this brings us to an important point which I considered working into the post but elided, fearing it would simply muddy the waters: There’s countless ways to utilize correlations, and not all of them are equal, and only a sliver of them fall within the bailiwick of human psychology. For me, Dennett’s argument (and by extension, yours) amounts to asserting the perpetual primacy of this sliver. I think it’s clearly the case this sliver is already dwindling in empirical significance and will continue to do so…
Rereading this, I’m actually impressed by how clear it sounds, but it bumps across what I see as a big pothole in my view: the need to provide a fully fleshed out account of the differences between source insensitive modes (between, say, statistical and intentional cognition) and source sensitive cognition. The reason I don’t think this counts against my approach is simply that I think this question is one no other theory can even pose.
“Dennett’s view here doesn’t fall into the “intentionalism” you accuse him of because of these points. There’s nothing of the “intentional” in the mechanistic level of explanation. The intentional in the IST is entirely dependent on patterns of activity which characterize intelligent beings. There’s no magical level of mentalistic phenomena here.”
But this isn’t my criticism. My criticism isn’t that Dennett runs afoul of mentalism, but that he ultimately relies on intentional cognition to theorize intentional cognition, and as a result, lacks the theoretical resources required to make naturalistic sense of intentional cognition. Recall my earlier response: “Instead of seeing an intentional system as any system that can be predicted via the intentional stance, we see an intentional system as a system containing a system that can be predicted via intentional cognition.” I believe this is what Dennett has been aiming at all along, but that the nature of the institution, which forces theorists to perpetually defend their positions, led him to become prematurely entrenched. This critique of mine is brand spanking new, as far as I can tell. But I’m not a professional philosopher, and I’m far more interested in exploring this crazy theoretical gestalt I’ve discovered while writing my novels than working things up for publication.
Dennett indeed says exactly that (pg. 74 of The I.S.) when he claims that the intentional stance *is homuncular insofar as the design stance is homuncular, with the former being a special limit case (under optimality) of the latter*.
I knew it was somewhere! Thanks Void. You Kobo or something?
“You’re still misunderstanding me at a pretty basic level if you think I see ‘mind talk’ in ‘theory theory’ terms, CA.”
Call it whatever you like; I’ve no attachments to the “theory theory” labeling. The point is that your entire conception of the problem takes place within a physicalistic framework. For you, mental talk doesn’t track what’s “really” happening in that milieu because it’s merely heuristic. What I’m questioning is both the “merely” and the antirealist inference you take to follow naturally from that.
There are no conditions on mental vocabularies that they pick out real things in the world. I think you agree with this, and it’s certainly what Dennett takes to be the case. It’s the implications of this that are at stake.
“Science has since progressively scrubbed intentional explanations from the world, but, given the complexities of cognition, found itself crashing at behaviour.”
I’ve tried to stress that this is FAR too broad a characterization of science and of scientific explanation. Physics, okay. Mechanistic biology, sure.
But I’ll ask again: without introducing physicalistic and reductionist premises, where exactly does fundamental physics get authority over linguistics, perceptual psychology, child development, sociology….?
You seem to appeal to a Peircean method of enquiry, or at least something bound up in achieving consensus. Where do you find the grounds for your argument here, when there’s no such consensus to be found among scientists (keeping in mind that this category is far broader than the mechanistic view of neurophysiology you’re working with)?
“Cognitive science is a game changer…All things being equal, we should assume our traditional discourses of the soul will fare about as well as traditional discourses of the world: traditional theories of meaning are doomed.”
This follows if we stick to your simplistic dichotomy between ruthless physicalism and meaning-centric interpretation. But again: whence the explanatory power of sticking to these contours? Even Carnap gave up on this.
If you take your argument as a reductio against classical epistemological inquiry, sure. I’m all for that. But you are generalizing this to consequences that just aren’t there, outside your own idiosyncratic take on the whole scope of the problem-space containing human behaviors in the broadest sense.
As I’ve tried to reiterate, the best accounts of both scientific methodology and explanation itself do not track your scruples. There’s nothing to be said for what amounts to your metaphysical claim that “our traditional discourses of the world” are doomed. What exactly are these? What excludes them from a robust and pluralistic explanatory account? You’ve got nothing here but sweeping generalizations about what should be swept away and what is being lost!
“And this brings us to an important point which I considered working into the post but elided, fearing it would simply muddy the waters: There’s countless ways to utilize correlations, and not all of them are equal, and only a sliver of them fall within the bailiwick of human psychology. For me, Dennett’s argument (and by extension, yours) amounts to asserting the perpetual primacy of this sliver. I think it’s clearly the case this sliver is already dwindling in empirical significance and will continue to do so…”
Dennett’s not making this point. Allowing a place for psychological explanation as a useful tool in a cognitive tool-kit is not identical to the claim of explanatory primacy. You’re reading the latter into the former.
“…what I see as a big pothole in my view: the need to provide a fully fleshed out account of the differences between source insensitive modes (between, say, statistical and intentional cognition) and source sensitive cognition. The reason I don’t think this counts against my approach is simply that I think this question is one no other theory can even pose.”
Let’s take this apart into two claims:
1. No other theory poses this question
2. The need to provide such an account
To handle (2) first, where does this need come from if not from your physicalism and the account of explanation you don’t seem to acknowledge is even in play here? It isn’t coming from scientific methodology, and it isn’t coming from a (good) abductive argument. What generates this demand?
Returning to (1), we can ask here i) why does any theory at all need to address this question? You can get around this one pretty easily, but what poses the real trouble is ii) the fact that you’re wrong: in fact this very question was asked by the author of the Critique of Pure Reason.
“Source sensitive cognition” as you’ve defined it is “where physical (high-dimensional) constraints can be identified”, and the negation holds for the source insensitive variant. The former is a transcendental limit on conception itself; the latter is what cannot be otherwise. High-dimensional physical constraints act as de facto noumena as against the mere phenomena of “source insensitive” cognition.
Pushing a scientific and materialistic version of neo-Kantianism with a little Luhmann and Varela & Maturana is novel in that nobody’s put them together quite that way, but you’re wrong to assert that the theoretical dimensions of the account are novel.
“My criticism isn’t that Dennett runs afoul of mentalism, but that he ultimately relies on intentional cognition to theorize intentional cognition, and as a result, lacks the theoretical resources required to make naturalistic sense of intentional cognition.”
Dennett isn’t doing this. The “homunculus” you find in his view is a pragmatic placeholder pending future engineering efforts. Don’t take my word for it, here’s a passage from TIS:
One can view the intentional stance as a limiting case of the design stance: one predicts by taking on just one assumption about the design of the system in question: whatever the design is, it is optimal. This assumption can be seen at work whenever, in the midst of the design stance proper, a designer or design investigator inserts a frank homunculus (an intentional system as subsystem) in order to bridge a gap of ignorance. The theorist says, in effect, “I don’t know how to design this subsystem yet, but I know what it’s supposed to do, so let’s just pretend there is a demon there who wants nothing more than to do that task and knows just how to do it.” One can then go on to design the surrounding system with the simplifying assumption that this component is “perfect.” One asks oneself how the rest of the system must work, given that this component will do its duty.
The problem is a matter for reverse-engineering: you take an intentional system as a subsystem and work out what it does — in the special case of a human agent, you figure out the behavioral inputs and outputs — on the provisional, working, pragmatic, assumption that there’s a homunculus doing the work.
But here the homunculus isn’t doing the explanatory work you’ve placed on it: there’s nothing explanatorily basic in the function it serves. The most we could make of your attack on Dennett is that he’s using intentional cognition to explain why an intentional placeholder is put to use in a larger cognitive-theoretical account. Okay. What you haven’t said is exactly why this is a problem, given that i) the cognitive engineers don’t have the foggiest idea how to build and thereby specify the operations of the “black box”; ii) but we do have a fairly good idea of how the thing operates in that context, via a bewildering range of psychological, sociological, literary, poetic, historical (etc. etc.) understandings; and iii) the placeholder is going to give way, eventually, to a non-intentionalistic understanding.
In other words: so what? Even an eliminativist has to strain credulity to make this charge stick. And now we have to ask: What’s the explanatory advantage of this move? What’s gained by throwing out a useful tool, one which has application in the sciences no less?
And at this point why shouldn’t a skeptic look at all of this as motivated by your scruples about mentalism rather than anything stemming from scientific findings or scientific methodologies?
I’m not sure what to say for most of this, CA, except that in your rush to defend Dennett, you keep reading old foes’ positions into mine–but this is how it goes, crying ‘Reduction!’ while reducing. Viz: “Pushing a scientific and materialistic version of neo-Kantianism with a little Luhmann and Varela & Maturana is novel in that nobody’s put them together quite that way, but you’re wrong to assert that the theoretical dimensions of the account are novel.” If this is what you think, I fear you’re still reading with blinders on. You don’t get it, but I am pulling together a piece on “Real Patterns,” which I hope you’ll read with less an eye for reducing me to your pre-existing knowledge base, and more for understanding… It gets exhausting, batting away straw!
This is interesting though:
“But here the homunculus isn’t doing the explanatory work you’ve placed on it: there’s nothing explanatorily basic in the function it serves. The most we could make of your attack on Dennett is that he’s using intentional cognition to explain why an intentional placeholder is put to use in a larger cognitive-theoretical account. Okay. What you haven’t said is exactly why this is a problem, given that i) the cognitive engineers don’t have the foggiest idea how to build and thereby specify the operations of the “black box”; ii) but we do have a fairly good idea of how the thing operates in that context, via a bewildering range of psychological, sociological, literary, poetic, historical (etc. etc.) understandings; and iii) the placeholder is going to give way, eventually, to a non-intentionalistic understanding.”
I don’t get it. The claim in the post is that IST lacks the resources to do what HNT does vis a vis the black boxes. Here you cite the explanation HNT gives for IST’s incapacity, suggesting that it misconstrues the ‘placeholder nature’ of IST. What could be wrong with applying intentional cognition provisionally the way Dennett does, you ask. Well, for one, it generates misconstruals of the black boxes. For another, it lacks the explanatory power of HNT. Those who do regard it as provisional should be happy that we can now move on.
This is why I keep asking you to actually engage the argument in the post, rather than continually leap to ‘the very idea’ level as you have in all your responses. If you’re right, then IST should do a better job explaining the Black Boxes than my scheme. Once again, CA, show us.
“It gets exhausting, batting away straw!”
I imagine this is a hazard when one’s position is so powerful that it changes according to the critic!
“This is why I keep asking you to actually engage the argument in the post, rather than continually leap to ‘the very idea’ level as you have in all your responses. If you’re right, then IST should do a better job explaining the Black Boxes than my scheme. Once again, CA, show us.”
Oh but we’ve been down this road! Recall that I asked you, just in the previous comment, how you could so much as discriminate black-box communication from simple vibration or heat transfer without some sense of an agent at work; a question you ignored because you can’t answer it.
You’ll tut your finger again and wave in the general direction of “heuristic neglect”, but this doesn’t handle this problem. It says why some things register as meaningful phenomena for black boxes operating in an otherwise inaccessible noumena. But you have no way to say what makes a description of a physical process into a description of information processing or communication between black boxes while holding on to your radical anti-realism.
If you keep the former via a story about heuristics, then you’re granting a minimal sketch of an agent and giving up on the “world is ending” nihilism. If you keep the anti-realism, then you concede the explanatory leg up that you think you have over Dennett: you will have no way to explain why intentions work as explanations and predictions of observed behaviors. You don’t get both.
For that matter, I’ve been pressing you about just these issues in explanation and abductive reasoning since my first comment. What is it about IST that “misconstrues” black boxes, in the absence of good answers to these questions? You’ll understand that your say-so isn’t near enough. Yet you’ve ignored those questions, too!
If you want to “engage the argument”, why haven’t you gone first instead of trying — and as I hope is clear, failing — to kick the ball over to me?
“I imagine this is a hazard when one’s position is so powerful that it changes according to the critic!”
I apologize if I’ve offended you. I don’t get this.
“Oh but we’ve been down this road! Recall that I asked you, just in the previous comment, how you could so much as discriminate black-box communication from simple vibration or heat transfer without some sense of an agent at work; a question you ignored because you can’t answer it.”
I apologize for missing this, but again you have me scratching my head for reasons other than you seem to presume: random signals in black box communication simply could not be distinguished short of plugging into some shared cognitive ecology with some shared set of heuristic systems. I’m not clear how this engages any of my local claims in the piece…
“You’ll tut your finger again and wave in the general direction of “heuristic neglect”, but this doesn’t handle this problem. It says why some things register as meaningful phenomena for black boxes operating in an otherwise inaccessible noumena. But you have no way to say what makes a description of a physical process into a description of information processing or communication between black boxes while holding on to your radical anti-realism.”
You’ll need to unpack this: it strikes me that you’re finally pushing in the tu quoque direction… arguing that I have to adopt the intentional stance. Which would be unfortunate. Applications of intentional cognition require applications of intentional cognition to communicate and troubleshoot–of course. So? How does that put me on the hook for the intentional stance? In other words, how does using meaning-talk commit me to Dennett’s view rather than my own? Look, the fact that I need to resort to meaning-talk to solve certain kinds of problems simply shows that meaning-talk is a component of the systems involved in solving those kinds of problems. What you need to explain is what the ‘intentional stance’ adds to this picture I’m peddling. Why do we need the theoretical application of intentional cognition to understand applications of intentional cognition? I’m saying we don’t. I’m saying we have good reason to think using intentional cognition to solve theoretical problems is going to cause problems, not the least of which is preventing the possibility of cognitive scientific synthesis. The how and why of this is laid out pretty clearly in the piece…
The bottom line is you haven’t gone back to any local claim I make in the piece (despite my repeated requests). You’re not arguing I’m wrong on the source-sensitive versus source-insensitive distinction. You’re not arguing I’m wrong in insisting on always keeping the cognitive ecological dimension in view. You’re not arguing against my characterization of heuristic cognition. You’re on the deck of the boat stomping your boot insisting the cargo is all wrong. It’s time to jump off the carousel and take me on a roller coaster ride. (How’s that for mixing metaphors! ;))
Give me something genuinely critical, something with genuine bite. Or even honest questions. In the meantime, the “real patterns” post is almost done…
“random signals in black box communication simply could not be distinguished short of plugging into some shared cognitive ecology with some shared set of heuristic systems.”
I think you’ve misunderstood: my concern in that remark is the bare fact that there are signals, produced by box A and interpreted by box B, and that this triad of “speaker”, “listener”, and act of communication is so much as intelligible.
You seem to think you can spell this out in purely physical (or at least “non-intentional”) terms. But it isn’t at all clear how you can do this without the sleight-of-hand I’m warning against. You say that (i) it’s all mechanism, with cognition local to some frame (“ecology”) but (ii) we can still distinguish a speaker-speech-listener triad yet (iii) this implies intentional cognition is (a) incapable of “problem solving” while still being (b) a theoretically useful posit *as you interpret it*.
What I’m pointing out is that pulling off (ii) while holding both arms of (iii) just isn’t a plausible *explanation* compared to Dennett’s intentional stance. You seem to be conflating the fact that an intentional stance works with the theoretical posit of intentionality, as if Dennett were somehow sneaking “intention” back into the physical picture.
But he isn’t doing this. All Dennett is saying is that ascriptions of intentional states are useful in explaining and predicting observations. That’s it. So (iii)(a) is false; intentional cognition is capable of problem solving, albeit in a modest and limited sense. This isn’t a function of any theory: it’s an empirical observation. Now you get part of this, by relativizing the “works” to an ecological perspective, hence (iii)(b).
The problem is, as I’ve repeated, this is no argument against the explanatory power of the claim, nor for the falsehood of the intentional claim, unless further, tacit, philosophical premises are affirmed. What are these? As I’ve suggested, you’re quite keen to seal off the individual qua brain in a “phenomenal” world of local-ecological perspective, sealed off from an extra-perspectival “noumena” described by mechanistic systems-theory.
But we don’t get traction on that description from within “science” (whatever that’s meant to be; and in even putting it that way we find part of the difficulty) — in fact it requires at least three philosophical assertions: (a) there exists an agent-independent reality; (b) agent-independent reality is cut off from or inaccessible to things as they appear to the agent; and (c) agent-independent reality has explanatory primacy over the agent-dependent reality. If you don’t like the word “agent” here, substitute your terms for cognitive system and the source-(in)sensitive distinction; the structure still holds.
How does that put me on the hook for the intentional stance? In other words, how does using meaning-talk commit me to Dennett’s view rather than my own?
Look, the fact that I need to resort to meaning-talk to solve certain kinds of problems simply shows that meaning-talk is a component of the systems involved in solving those kinds of problems.
Why do we need the theoretical application of intentional cognition to understand applications of intentional cognition?
Let’s run this two ways.
The first is because it works, and it has explanatory power because it works. You’re acting as if it were some grave metaphysical sin to point out that the mere fact that you and I can communicate in a common language, make sense of beliefs and motivations, and so on (and that we can do so independent of any speculative-cum-scientific theorizing) is somehow a theoretical excess.
You refuse to comment on why you’ve adopted this explanatory noose, but it’s a bizarre and needless constraint.
The second is that we aren’t, strictly speaking, doing anything like how you’ve put it here. It honestly reads as if you think the word “intentional” in “intentional stance” commits Dennett to actual theoretical posits of intentional states. There’s no other charitable way to read “theoretical application of intentional cognition to understand applications of intentional cognition”. But this is a non-starter of an argument; here you’ve already assumed (i) what intentional cognition is and (ii) that it cannot tell us anything interesting.
But (ii) is clearly false; it does work, which is the whole point in studying it to find out why. If mind-talk didn’t work, what would be the point of investigating it? Back to (i), you’re going to fire back that you’re entitled to your interpretation, which is true as it goes, but you aren’t entitled to it for just any reason whatsoever.
You’ve got a reason for that interpretation, and here we’re back to the issue that you won’t touch with a 10-foot pole: what are the abductive grounds that generate this hypothesis as the best hypothesis, given that Dennett’s view can give us some headway in explaining ordinary mind-talk within a cognitive-systems perspective *and* retain what is valuable in it, whereas your obsession with ruthless austerity explains mind-talk at the cost of what is valuable in it?
Again, note that no one, least of all Dennett, is disputing that mind-talk is ultimately explicable in terms of cognitive and neurophysiological mechanisms. The divergence here is at the level of explanatory primacy and the implications of that explanatory primacy for mind-talk as a useful and valuable tool in scientific inquiry. This is why you attract “reductionist” charges, if you are curious. Explanatory pluralism means taking seriously the differences between research programs and the variety of explanatory accounts — neither of which you do. What you need is to spend more time with Cartwright, Hacking, Dupre, Salmon, Achinstein, Stich, and Wimsatt.
You’re on the deck of the boat stomping your boot insisting the cargo is all wrong.
If you say so. There’s a whole lot of unanswered questions that you’re trying very hard to avoid.
You keep forgetting the thesis of the post, which is that IST doesn’t possess the theoretical resources required to naturalistically understand meaning. I fear I need to make it a condition of your next reply that you try to be more charitable, that you request clarification before attributing silly views/claims/assumptions to me.
Viz., nowhere do I see Dennett committed to original intentionality. “There’s no other charitable way to read ‘theoretical application of intentional cognition to understand applications of intentional cognition’” shouts the fact that you don’t understand (by a long shot!), which at this point tells me you were never interested in understanding in the first place. How, pray tell, does referencing Dennett’s application of heuristic cognition to heuristic cognition commit me ‘on any charitable view’ to attributing original intentionality to Dennett? Oi.
My recommendation at this point is that you do what those critics who don’t understand Dennett are inclined to do: blame Dennett! Or in this case, me.
And there’s nothing in my post or in my position that amounts to a failure to “take seriously the differences between research programs and the variety of explanatory accounts.” You pretty clearly want to essentialize pluralism, to suggest the only way to be ‘serious’ is to… I’m guessing not even hint at the possibility of synthesis (which is not reduction). Does buying into evolution amount to failing to take the differences between genetics and ethology seriously? I’m sorry, CA, but like I’ve said in response to all the earlier incarnations of this tack, I just don’t get it. I don’t see how it’s anything other than rhetorical, unless you are committed to metaphysical pluralism. Hard to argue religion!
You really don’t have any charitable way of disputing my position do you? If not, you’re just wasting both of our time. Sorry dude.
It’s a shame to see that Vox Day was right about everything he said about you. You are really just a slippery intellectual snake of a mid-wit academic wannabe.
Good luck with your blog!
Oof. Well, if he was right about me then maybe he’s right about rolling back women’s right to vote, as well. Nice of you to identify your tribe, CA. Explains quite a bit, actually.
But this creates more troubles than it solves at the conceptual level;
I’d think that’d be a feature – it’s not so much causing troubles as wrecking prior conceptual models.
It’ll just seem like a new concept must be compatible with established concepts rather than run roughshod over them by the dozen.
But why argue what is true – why not just hypothetically take it that the new concepts just wipe out dozens of established concepts. Don’t have to believe it’s the case, just a matter of grasping the model and engaging the model as part of regular discussion.
and as an explanation it has no advantages.
Probably not in the course of regular human interaction, for what it’s worth I’d agree.
https://mitpress.mit.edu/blog/five-minutes-olaf-sporns
Reading, it felt like he was building me a nest… definitely a journal I will watch.
http://link.springer.com/article/10.1007%2Fs11229-016-1269-8
Active inference, enactivism and the hermeneutics of social cognition
This looks downright scrumptious! Thanks Dirk
Are you aware of this Donald Hoffman fellow’s argument? He seems to take your premises and maybe jack them up a notch and winds up in idealism:
I’ve been following Hoffman for years now, and I see him as a kind of inverted Graziano, an example of how intentionality ties philosophically naïve theorists (though this applies to Graziano less and less) into philosophically naïve knots (as opposed to the way it ties philosophically savvy theorists into philosophically savvy knots). I have a piece on him around here somewhere…
Yes, he is definitely philosophically naive. I’m not sure how he doesn’t wind up kicking out the support of his own theory (evolution) when he winds up at the Berkeleyian conclusion that all there is, is minds…
Medial Neglect: “We evolved intentional cognition as a means of solving systems absent information regarding their nature.” That is the kernel of your diagnosis.
Why is it so difficult for so many to realize we learn through ignorance by trial and error, with brains that for the most part don’t have the answers, and furthermore don’t actually need all these thousand and one explanatory databanks of accumulated information to carry on what it was originally programmed to do: survive and reproduce. All else is the excess of human ingenuity and invention – a need to have more, to know more than we need. Isn’t this it? Are we just not satisfied with the basic premise of life: to eat and fuck? Instead we make up a million lies to war over; all human troubles begin and end in this fictive inability to accept the natural in ourselves. We want to be the unnatural creature who demands an answer, a meaning to it all. And when one isn’t brought forward by the Universe we fall back on our ignorance and invent out of it the Pandora’s Box of idiocy we see around us.
Sad. Bakker’s message is so simple, though couched in a prism of a myriad of vocabularies and essays. Essentially, like all great philosophers, he is the progenitor of one central insight that he delivers in a thousand flowers, blooming. Yet, people will argue till doomsday because they would rather have a much more complex story told.
But the thing is, intentional cognition makes us better at survival and reproduction. How else can we explain the fact that humans have seized most of the world’s biological resources for themselves and are fairly quickly exterminating the species that competed with us for them? One of the comments to a previous post noted that agriculture may have started as a way to keep a labor force available for temple building, so even the beliefs that appear not to be true can conduce to eating and fucking success.
In fairness, this success may be about to blow up in our faces, but you have to admit, when it comes to eating and fucking, humanity has done more of both in the last ten to twenty thousand years than any other large vertebrate species.
Man, you guys really reduce it to a pair of primary colours!
Hahah… hard to tell whether you’re venting on Bakker, disgusted with humanity; or, both. 🙂
I do think a good number of people will come around eventually: BBT is no panacea, but it gets an awful lot of mileage out of a few drips of ontology. What you said a few weeks back about pulling together a monograph is probably on the money, tho…
Excellent! Can’t wait for that one…
“Even when they crack open the boxes and begin reverse engineering the supercomputers within, they find themselves no closer to solving the problem. This is what makes their ignorance so striking: not even the sustained, systematic application of mechanical cognition paradigmatic of science can solve the problem.”
Does Dennett mean to say that these researchers can reverse-engineer supercomputers but can’t decode the transmissions between them? That peculiar combination of technical skill and incompetence doesn’t strike me as at all plausible. The fable seems to me to be rigged. Most, if not all, machine-to-machine communication has a syntax, that is to say, rules governing how the sending and receiving machines will parse the bit streams being sent between them. This shared syntax allows a sending machine to construct a bit stream from a photograph and a receiving machine to reconstruct the photograph from the bit stream. Dennett seems to be saying that person-to-person communication requires something more, semantics in addition to syntax, and the researchers require a semantic assist from the two hackers in order to make sense of the communications between the computers. I think the seeming need for semantics is merely foisted on the researchers by the way Dennett insists on them being able to apply mechanical cognition to the computers but unable to apply mechanical cognition to the signals being sent between them.
I suppose Dennett might reply that even if the researchers could decode the signals, that is to say recreate the true or false statements in English from the bit streams, they would still need a semantic understanding of the statements in order to evaluate their truth or falsehood, in order to determine whether the signal from the sending computer will cause the red, green or yellow light to flash on the receiving computer. This implies that the receiving computer has a semantic understanding of the signals being sent by the sending computer. Of course, if the computers simply have lists of true and false statements on their hard drives and the receiving computer compares the signal it received to its lists, the receiving computer has no need for semantics. It can simply compare received bit stream to stored bit stream without any need to reconstruct natural language statements from the bit streams at all. If that’s all the computers are doing, then the researchers can, in the course of reverse engineering the computers, find the lists of stored bit streams on each computer and verify how the receiving computer responds to each bit stream sent from the sending computer. In this case neither the receiving computer nor the researchers need to do any bit-stream-to-natural-language translation to determine the relationship between the sent bit streams and which bulbs light. Therefore nothing semantic is needed.
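For what it’s worth, the lookup-only receiving box described here can be sketched in a few lines. This is a hypothetical illustration, not anything from Dennett’s fable: the bit streams and the function name are made-up stand-ins.

```python
# Hypothetical sketch: a receiving box that "agrees" or "disagrees" by
# pure bit-stream comparison against stored lists, with no semantics at all.
# The byte strings below are arbitrary stand-ins for stored statements.

TRUE_STREAMS = {b"01100001", b"01100010"}   # streams stored as "true"
FALSE_STREAMS = {b"01110000", b"01110001"}  # streams stored as "false"

def bulb_for(signal: bytes) -> str:
    """Light a bulb by raw lookup: no natural-language reconstruction needed."""
    if signal in TRUE_STREAMS:
        return "red"     # agreement
    if signal in FALSE_STREAMS:
        return "green"   # disagreement
    return "yellow"      # unrecognized, e.g. the researchers' own signals

print(bulb_for(b"01100001"))  # red
print(bulb_for(b"11111111"))  # yellow
```

The point of the sketch is just that set membership on raw bytes reproduces the red/green/yellow behavior without the box (or the researchers) translating anything into English or Swedish.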
If the sending computer has the ability to formulate original statements rather than simply transmit prerecorded statements from its database, and the receiving computer has the ability to evaluate those statements to determine their truth or falsehood, the computers are sentient beings and the researchers have no right to be reverse engineering them at all. Whether it’s possible for computers to formulate original statements is an argument for another day. Whether it’s possible for human beings to formulate original statements is an argument for the day after that.
It can simply compare received bit stream to stored bit stream without any need to reconstruct natural language statements from the bit streams at all.
Oh yeah. Maybe the idea is they can only attain a sort of Chinese room understanding – they know this area of magnetically stored differing states is transmitted wirelessly and received by the other computer and compared against another area of magnetically stored states and if they match (after a translation process), then a green light is shown.
It’s probably the broadness of enactable information that makes it seem like the hackers are needed for meaning. For example, if the computers’ information were about where on a small section of beach gold ingots are buried, then it wouldn’t take long to figure out how the magnetic states correlate with gold ingot placement. But broaden the enactable information base involved and it seems like you need those hackers, like some kind of meaning Rosetta stone.
“one in English, the other in Swedish”: and there is the reason they need the semantics. Given this statement, it is safe to assume that the other computer is translating the information as it comes in to compare the statements. There are no identical bit streams. They’d see a sent bit stream matched to a different stored bit stream, hence the need for a syntax element as well.
“one in English…” Fair enough. If the researchers can copy the various true and false statements from the sending computer and send them to the receiving computer, they can still determine which bit streams from the sending computer light which lights in the receiving computer without needing to translate them into either natural language. If the goal of the researchers is to determine that the sending computer sends true and/or false statements, and that the receiving computer responds with a red and/or green light and responds with a yellow light when it receives a signal which is neither a true nor a false statement, then the task is more difficult. If the researchers speak either Swedish or English they will be able to determine the nature of the information stored on one of the computers and solve the problem in a straightforward way. If they speak neither language they have the additional task of translating the information stored on one or both of the computers into a natural language. The success that scholars have had translating Egyptian hieroglyphics, cuneiform, etc. suggests this component of the problem is not intractable. The main point I tried to make in my comment is that Dennett’s thought experiment is rigged. Even if the researchers do not know English or Swedish, if they have the sort of skill they need to reverse engineer supercomputers and they have access to the signals being transmitted from the sending computer to the receiving computer, it seems unreasonable to believe they will not be able to determine what’s going on without the help of Trurl and Klapaucius. A rigged philosophical thought experiment shows the same intellectual bad faith as a rigged chemistry experiment.
This makes no difference – the ‘translation’ is just a series of processing steps applied to states of the information received. This is no different from encrypting information then decoding it – the change between encrypting and decoding is just a bunch of steps.
If instead of going from one language to another we were taking each letter of a word and changing the letter to the next letter of the alphabet (so ‘next’ would become ‘ofyu’), the streams of data are essentially still the same, just with a translation process to run in between. Language translation is more complex yet really no different from that. No syntax element is needed – they’d soon enough determine that one machine’s ‘next’ is another machine’s ‘ofyu’ and that these are the same bit streams, essentially. You don’t need to understand syntax to figure out the system.
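A minimal sketch of that letter-shift “translation”, assuming lowercase ASCII input (the function name is mine): the ‘next’/‘ofyu’ relation is nothing but a deterministic sequence of processing steps, which is the commenter’s point about translation in general.

```python
def shift_one(word: str) -> str:
    """'Translate' by advancing each lowercase letter one place (z wraps to a)."""
    return "".join(chr((ord(c) - ord('a') + 1) % 26 + ord('a')) for c in word)

print(shift_one("next"))  # ofyu
```

Running the same mapping in reverse (subtract one instead of adding) recovers the original word, which is all “decoding” amounts to here: more steps, no semantics.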
http://phys.org/news/2016-12-brain-machine-learning-spontaneously-aspects-human.html