Skyhook Theory: Intentional Systems versus Blind Brains
So I recently finished Michael Graziano’s Consciousness and the Social Brain, and I’m hoping to provide a short review in the near future. Although he comes nowhere near espousing anything resembling the Blind Brain Theory, he does take some steps in its direction – but most of them turn out to be muddled versions of the same steps taken by Daniel Dennett decades ago. Since Dennett’s position remains the closest to my own, I thought it might be worthwhile to show how BBT picks up where Dennett’s Intentional Systems Theory (IST) ends.
Dennett, of course, espouses what might be called ‘evolutionary externalism’: meaning not only outruns any individual brain, it outruns any community of brains as well, insofar as both are the products of the vast ‘design space’ of evolution. As he writes:
“There is no way to capture the semantic properties of things (word tokens, diagrams, nerve impulses, brain states) by a micro-reduction. Semantic properties are not just relational but, you might say, super-relational, for the relation a particular vehicle of content, or token, must bear in order to have content is not just a relation it bears to other similar things (e.g., other tokens, or parts of tokens, or sets of tokens, or causes of tokens) but a relation between the token and the whole life–and counterfactual life–of the organism it ‘serves’ and that organism’s requirements for survival and its evolutionary ancestry.” The Intentional Stance, 65
Now on Dennett’s evolutionary account we always already find ourselves caught up in this super-relational totality: we attribute mind or meaning as a way to manage the intractable complexities of our natural circumstance. The patterns corresponding to ‘semantic properties’ are perspectival, patterns that can only be detected from positions embedded within the superordinate system of systems tracking systems. As Don Ross excellently puts it:
“A stance is a foregrounding of some (real) systematically related aspects of a system or process against a compensating backgrounding of other aspects. It is both possible and useful to pick out these sets of aspects because (as a matter of fact) the boundaries of patterns very frequently do not correspond to the boundaries of the naive realist’s objects. If they always did correspond, the design and intentional stances would be worthless, though there would have been no selection pressure to design a community in which this could be thought; and if they never corresponded, the physical stance, which puts essential constraints on reasonable design- and intentional-stance accounts, would be inaccessible. Because physical objects are stable patterns, there is a reliable logical basis for further order, but because many patterns are not coextensive with physical objects (in any but a trivial sense of ‘‘physical object’’), a sophisticated informavore must be designed to, or designed to learn to, track them. To be a tracker of patterns under more than one aspectualization is to be a taker of stances.” Dennett’s Philosophy, 20-21.
Dennett, in other words, is no mere instrumentalist. Attributions of mind, he wants to argue, are real enough given the reality of the patterns they track – the reality that renders the behaviour of certain systems predictable. The fact that some patterns can only be picked out perspectivally in no way impugns the reality of those patterns. But then the burning question becomes how Dennett’s intentional stance manages to do this. As Robert Cummins writes, “I have doubts, however, about Dennett’s ‘intentional systems theory’ that would have us indulge in such characterization without worrying about how the intentional characterizations in question relate to characterization based on explicit representation” (The World in the Head, 87). How can a system ‘take the intentional stance’ in the first place? What is it about systems that renders them explicable in intentional terms? How does the relation between these capacities, ‘to take as’ and ‘to be taken as’, leverage successful prediction? These would seem to be both obvious and pressing questions. And yet Dennett replies,
“I propose we simply postpone the worrisome question of what really has a mind, about what the proper domain of the intentional stance is. Whatever the right answer to this question is – if it has a right answer – this will not jeopardize the plain fact that the intentional stance works remarkably well as a prediction method in these other areas, almost as well as it works in our daily lives as folk psychologists dealing with other people. This move of mine annoys and frustrates some philosophers, who want to blow the whistle and insist on properly settling the issue of what a mind, a belief, a desire is before taking another step. Define your terms, sir! No, I won’t. That would be premature. I want to explore first the power and extent of application of this good trick, the intentional stance.” Intuition Pumps, 79
Dennett goes on to describe this strategy as one of ‘nibbling,’ but he never really explains what makes it a good strategy aside from murky suggestions that sinking our teeth into the problem is somehow ‘premature.’ To me this sounds suspiciously like trying to make a virtue out of ignorance. I used to wonder why it is that Dennett has consistently refused to push his imagination past the bourne of IST, why he would cling to intentional stances as his great unexplained explainer. Has he tried, only to retreat baffled, time and again? Or does he really believe his ‘prematurity thesis,’ that the time, for some unknown reason, is not ripe to press forward with a more detailed analysis of what makes intentional stances tick? Or has he planted his flag on this particular boundary and forsworn all further imperial ambitions for more political reasons (such as blunting the charge of eliminativism)?
Perhaps there’s an element of truth to all three of these scenarios, but more and more, I’m beginning to think that Dennett has simply run afoul of the very kind of deceptive intuition he is so prone to pull out of the work of others. Since this gaffe is a gaffe, it prevents him from pushing IST much further than he already has. But since this gaffe allows him to largely avoid the eliminativist consequences his view would otherwise have, he really has no incentive to challenge his own thinking. If you’re going to get stuck, better the top of some hill.
BBT, for its part, agrees with the bulk of the foregoing. Since the mechanical complexities of brains so outrun the cognitive capacities of brains, managing brains (others’ or our own) requires a toolbox of very specialized tools, ‘fast and frugal’ heuristics that enable us to predict/explain/manipulate brains absent information regarding their mechanical complexities. What Dennett calls ‘taking the intentional stance’ occurs whenever conditions trigger the application of these heuristics to some system in our environment.
But because heuristics systematically neglect information, they find themselves bound to specific sets of problems. The tool analogy is quite apt: it’s awfully hard to hammer nails with a screwdriver. Thus the issue of the ‘proper domain’ that Dennett mentions in the above quote: to say that intentional problem-solving is heuristic is to say that it possesses a specific problem-ecology, a set of environments possessing the information structure that a given heuristic is adapted to solve. And this is where Dennett’s problems begin.
Dennett agrees that when we adopt the intentional stance we’re “finessing our ignorance of the details of the processes going on in each other’s skulls (and in our own!)” (Intuition Pumps, 83), but he fails to consider this ‘ignorance of the details’ in any detail. He seems to assume, rather, that these heuristics are merely adapted to the management of complexity. Perhaps this is why he never rigorously interrogates the question of whether the ‘intentional stance’ itself belongs to the set of problems the intentional stance can effectively solve. For Dennett, the fact that the intentional stance picks out ‘real patterns’ is warrant enough to take what he calls the ‘stance stance,’ or the intentional characterization of intentional characterization. He doesn’t think that ‘finessing our ignorance’ poses any particular problem in this respect.
As we saw above, BBT characterizes the intentional stance in mechanical terms, as the environmentally triggered application of heuristic devices adapted to solving social problem-ecologies. From the standpoint of IST, this amounts to taking another kind of stance, the ‘physical stance.’ From the standpoint of BBT, this amounts to the application of heuristic devices adapted to solving causal problem-ecologies, or in other words, the devices underwriting the mechanical paradigm of the natural sciences.
So what’s the difference? The first thing to note is that both ways of looking at things, heuristic application versus stance taking, involve neglect or the ‘finessing of ignorance.’ Heuristic application as BBT has it counts as what Carl Craver would call a ‘mechanism sketch,’ a mere outline of what is an astronomically more complicated picture. In fact, one of the things that make causal cognition so powerful is the way it can be reliably deployed across multiple ‘levels of description’ without any mystery regarding how we get from one level to the next. Mechanical thinking, in other words, allows us to ignore as much or as little information as we want (a point Dennett never considers to my knowledge).
This means the issue between IST and BBT doesn’t so much turn on the amount of information neglected as the kinds of information neglected. And this is where the superiority of the latter, mechanical characterization leaps into view. Intentional cognition, as we saw, turns on heuristics adapted to solving problems in the absence of causal information. This means that taking the ‘stance stance,’ as Dennett does, effectively shuts out the possibility of mechanically understanding IST.
IST thus provides Dennett with a kind of default ‘skyhook,’ a way to tacitly install intentionality into ‘presupposition space.’ This way, he can always argue with the intentionalist that the eliminativist necessarily begs the very intentionality they want to eliminate. If BBT argues that heuristic application is what is really going on (because, well, it is – and by Dennett’s own lights no less!), IST can argue that this is simply one more ‘stance,’ and so yank the issue back onto intentional ground (a fact that Brandom, for instance, exploits to bootstrap his inferentialism).
But as should be clear, it becomes difficult at this point to understand precisely what IST is even a theory about. On BBT, no one ever has or ever will ‘take an intentional stance.’ What we do is rely on certain heuristic systems adapted to certain problem-ecologies. The ‘intentional stance,’ on this account, is what this heuristic reliance looks like when we rely on those self-same heuristics to solve it. Doubtless there are a variety of informal problem-ecologies that can be effectively solved by taking the IST approach (such as blunting charges of eliminativism). But for better or worse, the natural scientific question of what is going on when we solve via intentionality is not one of them. Obviously so, one would think.
Consciousness Explained and Intuition Pumps are the two titles that pop up continuously around me. What are some other primers for Dennett, and is he really the only philosopher you’ve encountered who comes close to positing a mechanistic and “ignocentric” philosophical theory, if we can call BBT such (philosophers aside from yourself, I mean)?
For instance, I’ve seen Mysterianism mentioned once or twice here but I don’t remember any explicit TPB references to it.
Have you had any more thoughts on identifying problem-ecologies and adaptation-specific families of heuristics?
I’ve been thinking about rough structural specificity and how the densities of certain neural groupings (say, in the auditory cortex in general, or something like the fusiform gyrus specifically) might imply asymptotic limits, in the sense of their neuron-specific complexity being constrained by their connections. As an aside, it seems to offer some insight into the dysfunction or degeneration of connections like those between Broca’s and Wernicke’s areas.
Also, I haven’t thought of an example yet to bind the two, but these thoughts led me to wonder about the way in which scientific experimentation has defined heuristics and biases so far, and how that might change when framed contextually by BBH. Take the representativeness heuristic and the availability heuristic – subtly different but similar enough to imply constraint by structural specificity (I just briefly did a couple of literature searches, but I can’t find any of the necessary mundane stuff that would distinguish any neural correlates). If BBH can assume the strong adaptation of heuristics to problem-ecologies of structural specificity and the weak adaptation of heuristics to the problem-ecologies that might arise between structures, I’m inclined to hazard that traditional heuristics would come to be defined by different familial distinctions. For instance, heuristics which are employed by the BB based on the communicative lag between different structures?
Anyhow, very much fodder for TPB’s blackboard. Hope all is well with the Bakkers and with TUC.
Naw man, there’s also Metzinger, although I’ve never read his works in depth, unlike Dennett’s. I think Scott has directly addressed Being No One on his blog before.
Truth. Among others. I feel like, the way it’s expressed here on TPB, Dennett remains the only foil Bakker’s found worthy of mowing BBT’s grass at this point. Am I wrong?
Metzinger’s notion of autoepistemic closure (most thoroughly espoused in his awesome Being No One) is as close as he gets to theorizing the potential role of neglect a la BBT. Though this concept comes close, Thomas (unlike Dennett) remains a committed representationalist. So long as this is the case, there’s only so much that he can do with it. One of the things I find most compelling about BBT is the austere way it can explain ‘selfhood’ without having to resort to things like Metzinger’s ‘phenomenal self model’ or PSM. On BBT, there’s not only not any ‘self’ as traditionally conceived, there’s no representational model of one either! It’s entirely a metacognitive artifact, which is precisely why the ancients seemed to have such a baffling ‘self-understanding.’
Someone I recently stumbled into who seems to be coming close is Tad Zawidzki. But the bottom line is either I’m stark raving mad or I really have managed to do an end run around contemporary thought on these subjects. I suspect the former, but prefer the latter…
“For example, ‘Take the Best’ is a well-known fast and frugal heuristic. It requires that one recall criteria previously used to distinguish between alternatives in some domain, determine which criterion distinguished best, and use that criterion on one’s current decision. For example, when asked which of two German cities is larger, one might recall that, previously, having a professional soccer team distinguished best between larger and smaller cities, and so one asks which, if either, has a professional soccer team. If neither or both do, one then proceeds to the next best criterion. In order to avoid intractable search, “Take the Best” has a “stopping rule” that suspends search if it cannot arrive at an answer after some small, finite number of iterations (Gigerenzer et al. 1999; Carruthers 2006)” (Zawidzki, Theory of Mind, Computational Tractability, and Mind Shaping, 2009)
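The procedure Zawidzki describes is simple enough to sketch in code. The following is a toy illustration only: the cue names, city data, and iteration budget are hypothetical stand-ins for the structure Gigerenzer describes, not his actual materials.

```python
# Toy sketch of the "Take the Best" fast and frugal heuristic:
# check cues in order of past validity; the first cue that
# discriminates between the alternatives decides the answer.
# A "stopping rule" caps how many cues are consulted.
# All cue names and city data below are hypothetical examples.

# Cues ordered best-first by how well they previously distinguished
# larger from smaller cities. Each maps a city to True/False.
CUES = [
    ("has_professional_soccer_team", {"Munich": True, "Herne": False}),
    ("is_state_capital", {"Munich": True, "Herne": False}),
]

MAX_ITERATIONS = 2  # stopping rule: suspend search after this many cues

def take_the_best(city_a, city_b):
    """Guess which of two cities is larger using the best
    discriminating cue; return None if no cue decides in time."""
    for i, (name, values) in enumerate(CUES):
        if i >= MAX_ITERATIONS:
            break  # stopping rule triggered: give up
        a, b = values.get(city_a), values.get(city_b)
        if a and not b:
            return city_a
        if b and not a:
            return city_b
        # cue fails to discriminate (neither or both): try next-best cue
    return None

print(take_the_best("Munich", "Herne"))  # prints "Munich"
```

Note how the design trades accuracy for tractability: rather than weighing all cues, the heuristic commits to the single best discriminating one, which is exactly the kind of information neglect at issue in the post above.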
I read the first paper that popped up in my search. Sketching the same boundaries as BBT, maybe, but claims (especially philosophical ones) like this lack some rigor. This essentially seems a glossed amalgamation of heuristics like those I mention below, and it otherwise seems to miss aspects like retrieval-induced forgetting, which in a sense homogenizes our memory of specific representative exemplars.
I should probably pick up Being No One.
The Intentional Stance remains the essential Dennett, but the consistency of his view over the years is quite amazing. I still can’t believe that he manages to reference Fodor in all of his talks after all of these years. But he remains the only thinker I know of to come so close to thematizing the kind of explanatory work neglect is capable of doing. But he missed it, as the great sage Maxwell Smart would say, by that much. It’s funny you should mention the Mysterians since I have a post coming up on the way BBT can be seen as a radicalization of ‘cognitive closure.’ Otherwise, I just don’t find McGinn all that convincing.
There are at least two factors I can see confounding attempts to answer the kinds of questions you raise vis-a-vis specific heuristic structures in the brain and the kinds of problem-ecologies they might be adapted to: One is the problem of isolating any heuristic as a discrete functional unit in the madness of the brain. The other has to do with the intrinsically speculative nature of the evolutionary psychology required to delimit problem-ecologies. Someone like Eric Schwitzgebel, for instance, thinks these kinds of difficulties warrant phenomenalism, a retreat from ‘mechanism talk’ (where we posit unobservables) to ‘disposition talk.’ He thinks there’s a good chance these problems are insoluble, and he could be right. For me, it’s more a matter of when, not if, we’ll be able to isolate discrete mechanisms in the brain. I find the kind of work involved in vision and memory, for instance, to be gobsmacking, particularly given the primitive instrumentation available. I also think experimental paradigms will be developed to handle the evopsych stuff as well – not to say that the speculation will disappear, only that it will be increasingly informed by hard and fast data regarding what kinds of tasks various components of the brain are good at.
Personally I think the creative misapplication of heuristics was one of the evolutionary drivers of human consciousness – that experience is literally a kind of ‘exaptation machine,’ allowing our organism to continually expand the suite of problem-ecologies it can solve by trying out old ‘good tricks’ (as Dennett calls them) on new kinds of problems. If you think of the profound role ‘happy side-effects’ play in evolution more generally, the selection for morphologies prone to generate such side-effects almost seems inevitable. Consciousness, you could say, is like a flower in this sense.
Christmas does seem the time for extracurricular reading. Is mentioning Fodor bad form (Zawidzki references him in Theory of Mind, Computational Tractability, and Mind Shaping)?
Looking forward to that ‘cognitive closure’ post.
There’s at least 2 factors I can see confounding attempts to answer the kinds of questions you raise vis a vis specific heuristic structures in the brain and the kinds of problem-ecologies they might be adapted to:
The one is the problem of isolating any heuristic as a discrete functional unit in the madness of the brain
I was mostly riffing with the initial post (brainstorming), and I’m not entirely convinced that structural specificity is even an effective strategy for discerning problem-ecologies. But I do think the research is building up quickly for the rote identification of at least some structure/heuristic correlations – rough patterns of activation during heuristic testing, whether their heuristic properties result from structures acting in concert or in isolation. For instance, the way musical, mathematical, and language abilities depend on similar aspects of working memory, and so are subject to the types of heuristics affecting working memory (like articulatory suppression or irrelevant sound effects affecting our ability to verify simple equations, detect small changes in a musical bar, or recall strings of letters – all of which seem to depend on consolidation through working memory).
The other has to do with the intrinsically speculative nature of the evolutionary psychology required to delimit problem-ecologies
Good call. That’s a toughie.
that experience is literally a kind of ‘exaptation machine,’ allowing our organism to continually expand the suite of problem-ecologies it can solve by trying out old ‘good tricks’ (as Dennett calls them) on new kinds of problems.
Cool thoughts. I hope these thoughts are getting the speculative love TUC’s rewrite is getting. If conceptions like BBT are inevitable, I’d prefer if more individuals like yourself helped digest the issues for humankind… some reactionary scientific pursuits are bound to be intense.
Not at all. I just take it to mean he’s still looking back to the old debates rather than forward to what should be the new.
And just think how complicated things will get if it turns out that field effects play a substantial role in neural processing!
Kleinliness is next to godliness? Heh… (I think a post got sent early – e-mail notifications…)
RSB wrote: “If you’re going to get stuck, better the top of some hill.”
Interesting metaphor alongside “skyhooks,” because our motor neurons emanate from the top of our cerebral cortex, so it’s not wrong to say that we stand on our heads, or that taking stances is at the center of our brain’s function.
Maybe Dennett’s point is that, like hearing, vision, and smell, the entire brain itself is a sense.
Excellent presentation of IST and BBT. I think this is the first time I feel like I’ve come close to comprehending either one.
[…] myself drawn back into the ongoing kerfuffle surrounding novelist-blogger Scott Bakker’s Blind Brain Theory. The core premise — that humans are unable through introspection to understand their own […]
[…] domain. So Brandom, for instance, takes Dennett’s interpretation of Charity in the form of the Intentional Stance as the foundation of his grand normative metaphysics (See, Making It Explicit, 55-62). What makes […]