Skyhook Theory: Intentional Systems versus Blind Brains

by rsbakker

So I recently finished Michael Graziano’s Consciousness and the Social Brain, and I’m hoping to provide a short review in the near future. Although he comes nowhere near espousing anything resembling the Blind Brain Theory, he does take some steps in its direction – but most of them turn out to be muddled versions of the same steps taken by Daniel Dennett decades ago. Since Dennett’s position remains the closest to my own, I thought it might be worthwhile to show how BBT picks up where Dennett’s Intentional Systems Theory (IST) ends.

Dennett, of course, espouses what might be called ‘evolutionary externalism’: meaning not only outruns any individual brain, it outruns any community of brains as well, insofar as both are the products of the vast ‘design space’ of evolution. As he writes:

“There is no way to capture the semantic properties of things (word tokens, diagrams, nerve impulses, brain states) by a micro-reduction. Semantic properties are not just relational but, you might say, super-relational, for the relation a particular vehicle of content, or token, must bear in order to have content is not just a relation it bears to other similar things (e.g., other tokens, or parts of tokens, or sets of tokens, or causes of tokens) but a relation between the token and the whole life–and counterfactual life–of the organism it ‘serves’ and that organism’s requirements for survival and its evolutionary ancestry.” The Intentional Stance, 65

Now on Dennett’s evolutionary account we always already find ourselves caught up in this super-relational totality: we attribute mind or meaning as a way to manage the intractable complexities of our natural circumstance. The patterns corresponding to ‘semantic properties’ are perspectival, patterns that can only be detected from positions embedded within the superordinate system of systems tracking systems. As Don Ross aptly puts it:

“A stance is a foregrounding of some (real) systematically related aspects of a system or process against a compensating backgrounding of other aspects. It is both possible and useful to pick out these sets of aspects because (as a matter of fact) the boundaries of patterns very frequently do not correspond to the boundaries of the naive realist’s objects. If they always did correspond, the design and intentional stances would be worthless, though there would have been no selection pressure to design a community in which this could be thought; and if they never corresponded, the physical stance, which puts essential constraints on reasonable design- and intentional-stance accounts, would be inaccessible. Because physical objects are stable patterns, there is a reliable logical basis for further order, but because many patterns are not coextensive with physical objects (in any but a trivial sense of ‘physical object’), a sophisticated informavore must be designed to, or designed to learn to, track them. To be a tracker of patterns under more than one aspectualization is to be a taker of stances.” Dennett’s Philosophy, 20-21.

Dennett, in other words, is no mere instrumentalist. Attributions of mind, he wants to argue, are real enough given the reality of the patterns they track – the reality that renders the behaviour of certain systems predictable. The fact that some patterns can only be picked out perspectivally in no way impugns the reality of those patterns. But the burning question then becomes how Dennett’s intentional stance manages to do this. As Robert Cummins writes, “I have doubts, however, about Dennett’s ‘intentional systems theory’ that would have us indulge in such characterization without worrying about how the intentional characterizations in question relate to characterization based on explicit representation” (The World in the Head, 87). How can a system ‘take the intentional stance’ in the first place? What is it about systems that renders them explicable in intentional terms? How does the relation between these capacities, ‘to take as’ and ‘to be taken as,’ leverage successful prediction? These would seem to be both obvious and pressing questions. And yet Dennett replies,

“I propose we simply postpone the worrisome question of what really has a mind, about what the proper domain of the intentional stance is. Whatever the right answer to this question is – if it has a right answer – this will not jeopardize the plain fact that the intentional stance works remarkably well as a prediction method in these other areas, almost as well as it works in our daily lives as folk psychologists dealing with other people. This move of mine annoys and frustrates some philosophers, who want to blow the whistle and insist on properly settling the issue of what a mind, a belief, a desire is before taking another step. Define your terms, sir! No, I won’t. That would be premature. I want to explore first the power and extent of application of this good trick, the intentional stance.” Intuition Pumps, 79

Dennett goes on to describe this strategy as one of ‘nibbling,’ but he never really explains what makes it a good strategy aside from murky suggestions that sinking our teeth into the problem is somehow ‘premature.’ To me this sounds suspiciously like trying to make a virtue out of ignorance. I used to wonder why Dennett has so consistently refused to push his imagination past the bourne of IST, why he would cling to intentional stances as his great unexplained explainer. Has he tried, only to retreat baffled, time and again? Or does he really believe his ‘prematurity thesis,’ that the time, for some unknown reason, is not ripe to press forward with a more detailed analysis of what makes intentional stances tick? Or has he planted his flag on this particular boundary and forsworn all further imperial ambitions for more political reasons (such as blunting the charge of eliminativism)?

Perhaps there’s an element of truth to all three of these scenarios, but more and more, I’m beginning to think that Dennett has simply run afoul of the very kind of deceptive intuition he is so prone to root out in the work of others. Since this gaffe is a genuine gaffe, it prevents him from pushing IST much further than he already has. But since it also allows him to largely avoid the eliminativist consequences his view would otherwise have, he really has no incentive to challenge his own thinking. If you’re going to get stuck, better at the top of some hill.

BBT, for its part, agrees with the bulk of the foregoing. Since the mechanical complexities of brains so outrun the cognitive capacities of brains, managing brains (others’ or our own) requires a toolbox of very specialized tools, ‘fast and frugal’ heuristics that enable us to predict/explain/manipulate brains absent information regarding their mechanical complexities. What Dennett calls ‘taking the intentional stance’ occurs whenever conditions trigger the application of these heuristics to some system in our environment.
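To make the picture concrete, here is a deliberately crude Python sketch. Nothing in it is anything BBT formally specifies (the cue, the heuristics, and the dispatch rule are all toy inventions of mine), but it shows the shape of the claim: stance-taking as cue-triggered heuristic application, with no model of the target system’s innards anywhere in sight:

```python
# A crude toy (mine, not anything BBT specifies): cognition as a toolbox of
# heuristics, each triggered by cheap environmental cues rather than by any
# model of the target system's internal mechanics.

def intentional_heuristic(system):
    # Predict via attributed goals; no mechanical detail consulted.
    return f"{system['name']} will act to satisfy its goal: {system['goal']}"

def physical_heuristic(system):
    # Predict via coarse causal regularity; no goals attributed.
    return f"{system['name']} will keep doing what its mechanism does"

def apply_stance(system):
    # 'Taking the intentional stance' = cue-triggered dispatch, on this picture.
    cue = system.get('self_moving') and system.get('responds_to_environment')
    return intentional_heuristic(system) if cue else physical_heuristic(system)

print(apply_stance({'name': 'cat', 'goal': 'food', 'self_moving': True,
                    'responds_to_environment': True}))
print(apply_stance({'name': 'rock', 'self_moving': False}))
```

The point of the sketch is the dispatch: on this picture, ‘taking a stance’ is nothing over and above which tool the cues happen to trigger.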

But because heuristics systematically neglect information, they find themselves bound to specific sets of problems. The tool analogy is quite apt: it’s awfully hard to hammer nails with a screwdriver. Thus the issue of the ‘proper domain’ that Dennett mentions in the above quote: to say that intentional problem-solving is heuristic is to say that it possesses a specific problem-ecology, a set of environments possessing the information structure that a given heuristic is adapted to exploit. And this is where Dennett’s problems begin.
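The cognitive science literature offers real examples of this ecology-boundedness. Gigerenzer and Goldstein’s ‘recognition heuristic’ infers that a recognized city is larger than an unrecognized one, and it pays off only in environments where recognition actually correlates with size. A minimal sketch, with a made-up knowledge state standing in for what someone happens to have heard of:

```python
# The recognition heuristic (Gigerenzer & Goldstein), sketched with toy data.
# It neglects everything about cities except whether we have heard of them,
# which is precisely why it is fast, frugal, and ecology-bound.

RECOGNIZED = {"Berlin", "Munich", "Hamburg"}  # hypothetical knowledge state

def recognition_heuristic(city_a, city_b):
    """Guess which of two cities is larger, using recognition alone."""
    a_known = city_a in RECOGNIZED
    b_known = city_b in RECOGNIZED
    if a_known and not b_known:
        return city_a
    if b_known and not a_known:
        return city_b
    return None  # both or neither recognized: nothing for the cue to exploit

print(recognition_heuristic("Berlin", "Leverkusen"))  # 'Berlin' (a good bet)
print(recognition_heuristic("Essen", "Duisburg"))     # None: outside its ecology
```

Apply the same trick where recognition and size come apart and the heuristic fails silently. The neglect that makes it cheap is the very neglect that binds it to its problem-ecology.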

Dennett agrees that when we adopt the intentional stance we’re “finessing our ignorance of the details of the processes going on in each other’s skulls (and in our own!)” (Intuition Pumps, 83), but he fails to consider this ‘ignorance of the details’ in any detail. He seems to assume, rather, that these heuristics are merely adapted to the management of complexity. Perhaps this is why he never rigorously interrogates the question of whether the ‘intentional stance’ itself belongs to the set of problems the intentional stance can effectively solve. For Dennett, the fact that the intentional stance picks out ‘real patterns’ is warrant enough to take what he calls the ‘stance stance,’ or the intentional characterization of intentional characterization. He doesn’t think that ‘finessing our ignorance’ poses any particular problem in this respect.

As we saw above, BBT characterizes the intentional stance in mechanical terms, as the environmentally triggered application of heuristic devices adapted to solving social problem-ecologies. From the standpoint of IST, this amounts to taking another kind of stance, the ‘physical stance.’ From the standpoint of BBT, this amounts to the application of heuristic devices adapted to solving causal problem-ecologies, or in other words, the devices underwriting the mechanical paradigm of the natural sciences.

So what’s the difference? The first thing to note is that both ways of looking at things, heuristic application versus stance taking, involve neglect or the ‘finessing of ignorance.’ Heuristic application, as BBT has it, counts as what Carl Craver would call a ‘mechanism sketch,’ a mere outline of what is an astronomically more complicated picture. In fact, one of the things that make causal cognition so powerful is the way it can be reliably deployed across multiple ‘levels of description’ without any mystery regarding how we get from one level to the next. Mechanical thinking, in other words, allows us to ignore as much or as little information as we want (a point Dennett, to my knowledge, never considers).
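A toy example of what ‘no mystery between levels’ means in practice (the system and the numbers are mine, chosen only for simplicity): the same deterministic process described at a component level and at an aggregate level, with the coarse description provably tracking the fine one:

```python
# One system, two 'levels of description'. The fine-grained model tracks each
# part; the coarse-grained model neglects the parts and tracks only the total.
# Because the dynamics are linear, the two descriptions agree, and moving
# between them involves nothing mysterious.

def fine_grained(parts, rate, steps):
    """Component level: grow every subpopulation separately."""
    pops = list(parts)
    for _ in range(steps):
        pops = [p * (1 + rate) for p in pops]
    return sum(pops)

def coarse_grained(total, rate, steps):
    """Aggregate level: one variable, all componential detail neglected."""
    return total * (1 + rate) ** steps

parts = [120.0, 340.0, 75.0]
assert abs(fine_grained(parts, 0.02, 10)
           - coarse_grained(sum(parts), 0.02, 10)) < 1e-9
```

Causal cognition, in other words, can dial its neglect up or down at will; the intentional toolbox cannot.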

This means the issue between IST and BBT doesn’t so much turn on the amount of information neglected as on the kinds of information neglected. And this is where the superiority of the mechanical characterization leaps into view. Intentional cognition, as we saw, turns on heuristics adapted to solving problems in the absence of causal information. This means that taking the ‘stance stance,’ as Dennett does, effectively shuts out the possibility of mechanically understanding IST: it redeploys the very heuristics that neglect causal information on a problem that is causal through and through.

IST thus provides Dennett with a kind of default ‘skyhook,’ a way to tacitly install intentionality into ‘presupposition space.’ This way, he can always join the intentionalist in arguing that the eliminativist necessarily begs the very intentionality they want to eliminate. If BBT argues that heuristic application is what is really going on (because, well, it is – and by Dennett’s own lights no less!), IST can argue that this is simply one more ‘stance,’ and so yank the issue back onto intentional ground (a fact that Brandom, for instance, exploits to bootstrap his inferentialism).

But as should be clear, it becomes difficult at this point to understand precisely what IST is even a theory about. On BBT, no one ever has or ever will ‘take an intentional stance.’ What we do is rely on certain heuristic systems adapted to certain problem-ecologies. The ‘intentional stance,’ on this account, is what this heuristic reliance looks like when we rely on those self-same heuristics to solve it. Doubtless there are a variety of informal problem-ecologies that can be effectively solved by taking the IST approach (such as blunting charges of eliminativism). But for better or worse, the natural scientific question of what is going on when we solve problems via intentional cognition is not one of them. Obviously so, one would think.