Reengineering Dennett: Intentionality and the ‘Curse of Dimensionality’
Aphorism of the Day: A headache is one of those rare and precious things that is both in your head and in your head.
In a few weeks’ time, Three Pound Brain will be featuring an interview with Alex Rosenberg, who has become one of the world’s foremost advocates of Eliminativism. If you’re so inclined, now would be a good time to pick up his Atheist’s Guide to Reality, which will be the focus of much of the interview.
The primary reason I’m mentioning this has to do with a comment of Alex’s regarding Dennett’s project in our back and forth, how he “has long sought an account of intentionality that constructs it out of nonintentional resources in the brain.” This made me think of a paper of Dennett’s entitled “A Route to Intelligence: Oversimplify and Self-Monitor” that is only available on his website, and which he has cryptically labelled, ‘NEVER-TO-APPEAR PAPERS BY DANIEL DENNETT.’ Now maybe it’s simply a conceit on my part, given that pretty much everything I’ve written falls under the category of ‘never-to-appear,’ but this quixotic piece has been my favourite Dennett article ever since I first stumbled upon it. In the note that Dennett appends to the beginning, he explains the provenance of the paper, how it was written for a volume that never coalesced, but he leaves its ‘never-to-be-published’ fate to the reader’s imagination. (If I had to guess, I would say it has to do with the way the piece converges on what is now a dated consideration of the frame problem.)
Now in this paper, Dennett does what he often does (most recently, in this talk), which is to tell a ‘design process’ story that begins with the natural/subpersonal and ends with the intentional/personal. The thing I find so fascinating about this particular design process narrative is the way it outlines, albeit in a murky form, what I think actually is an account of how intentionality arises ‘out of the nonintentional resources of the brain,’ or the Blind Brain Theory. What I want to do is simply provide a close reading of the piece (the first of its kind, given that no one I know of has referenced this piece apart from Dennett himself), suggesting, once again, that Dennett was very nearly on the right track, but that he simply failed to grasp the explanatory opportunities his account affords in the proper way. “A Route to Intelligence” fairly bowled me over when I first read it a few months ago, given the striking way it touches on so many of the themes I’ve been developing here. So what follows, then, begins with a consideration of the way BBT itself follows from certain staple observations and arguments belonging to Dennett’s incredible oeuvre. More indirectly, it will provide a glimpse of how the mere act of conceptualizing a given dynamic can enable theoretical innovation.
Dennett begins with the theme of avoidance. He asks us to imagine that scientists discover an asteroid on a collision course with Earth. We’re helpless to stop it, so the most we can do is prepare for our doom. Then, out of nowhere, a second asteroid appears, striking the first in the most felicitous way possible, saving the entire world. It seems like a miracle, but of course the second asteroid was always out there, always hurtling on its auspicious course. What Dennett wants us to consider is the way ‘averting’ or ‘preventing’ is actually a kind of perspectival artifact. We only assumed the initial asteroid was going to destroy Earth because of our ignorance of the second: “It seems appropriate to speak of an averted or prevented catastrophe because we compare an anticipated history with the way things turned out and we locate an event which was the “pivotal” event relative to the divergence between that anticipation and the actual course of events, and we call this the “act” of preventing or avoiding” (“A Route to Intelligence,” 3).
In BBT terms, the upshot of this fable is quite clear: Ignorance–or better, the absence of information–has a profound, positive role to play in the way we conceive events. Now coming out of the ‘Continental’ tradition this is no great shakes: one only need think of Derrida’s ‘trace structure’ or Adorno’s ‘constellations.’ But as Dennett has found, this mindset is thoroughly foreign to most ‘Analytic’ thinkers. In a sense, Dennett is providing a peculiar kind of explanation by subtraction, bidding us to understand avoidance as the product of informatic inaccessibility. Here it’s worth calling attention to what I’ve been calling the ‘only game in town effect,’ or sufficiency. Avoidance may be the artifact of information scarcity, but we never perceive it as such. Avoidance, rather, is simply avoidance. It’s not as if we catch ourselves after the fact and say, ‘Well, it only seemed like a close call.’
Academics spend so much time attempting to overcome the freshman catechism, ‘It-is-what-it-is!’ that they almost universally fail to consider how out-and-out peculiar it is, even as it remains the ‘most natural thing in the world.’ How could ignorance, of all things, generate such a profound and ubiquitous illusion of epistemic sufficiency? Why does the appreciation of contextual relativity, the myriad ways our interpretations are informatically constrained, count as a kind of intellectual achievement?
Sufficiency can be seen as a generalization of what Daniel Kahneman refers to as WYSIATI (‘What You See Is All There Is’), the way we’re prone to confuse the information we have for all the information required. Lacking information regarding the insufficiency of the information we have, such as the existence of a second ‘saviour’ asteroid, we assume sufficiency, that we are doomed. Sufficiency is the assumptive default, which is why undergrads, who have yet to be exposed to information regarding the insufficiency of the information they have, assume things like ‘It-is-what-it-is.’
The concept of sufficiency (and its flip-side, asymptosis) is of paramount importance. It explains why, for instance, experience is something that can be explained via subtraction. Dennett’s asteroid fable is a perfect case in point: catastrophe was ‘averted’ because we had no information regarding the second asteroid. If you think about it, we regularly explain one another’s experiences, actions, and beliefs by reference to missing information: any time, in fact, we say something of the form, ‘So-and-so didn’t x (realize, see, etc.) such-and-such.’ Implicit in all this talk is the presumption of sufficiency, the ‘It-is-what-it-is! assumption,’ as well as the understanding that missing information can make no difference–precisely what we should expect of a biomechanical brain. I’ll come back to all this in due course, but the important thing to note, at this juncture at least, is that Dennett is arguing (though he would likely dispute this) that avoidance is a kind of perspectival illusion.
Dennett’s point is that the avoidance world-view is the world-view of the rational deliberator, one where prediction, the ability to anticipate environmental changes, is king. Given this, he asks:
Suppose then that one wants to design a robot that will live in the real world and be capable of making decisions so that it can further its interests–whatever interests we artificially endow it with. We want in other words to design a foresightful planner. How must one structure the capacities–the representational and inferential or computational capacities–of such a being? 4
The first design problem that confronts us, he suggests, involves the relationship between response-time, reliability, and environmental complexity.
No matter how much information one has about an issue, there is always more that one could have, and one can often know that there is more that one could have if only one were to take the time to gather it. There is always more deliberation possible, so the trick is to design the creature so that it makes reliable but not foolproof decisions within the deadlines naturally imposed by the events in its world that matter to it. 4
Our design has to perform a computational balancing act: Since the well of information has no bottom, and the time constraints are exacting, our robot has to be able to cherry-pick only the information it needs to make rough and reliable determinations: “one must be designed from the outset to economize, to pass over most of the available information” (5). This is the problem now motivating work in the field of ecological rationality, which looks at human cognition as a ‘toolbox’ filled with a variety of heuristics, devices adapted to solve specific problems in specific circumstances–‘ecologies’–via the strategic neglect of various kinds of information. On the BBT account, the brain itself is such a heuristic device, a mechanism structurally adapted to walk the computational high-wire between behavioural efficiency and environmental complexity.
And this indeed is what Dennett supposes:
How then does one partition the task of the robot so that it is apt to make reliable real time decisions? One thing one can do is declare that some things in the world of the creature are to be considered fixed; no effort will be expended trying to track them, to gather more information on them. The state of these features is going to be set down in axioms, in effect, but these are built into the system at no representational cost. One simply designs the system in such a way that it works well provided the world is as one supposes it always will be, and makes no provision for the system to work well (“properly”) under other conditions. The system as a whole operates as if the world were always going to be one way, so that whether the world really is that way is not an issue that can come up for determination. 5
So, for instance, the structural fact that the brain is a predictive system simply reflects the fundamental fact that our environments not only change in predictable ways, but allow for systematic interventions given prediction. The most fundamental environmental facts, in other words, will be structurally implicit in our robot, and so will not require modelling. Others, meanwhile, will “be declared as beneath notice even though they might in principle be noticeable were there any payoff to be gained thereby” (5). As he explains:
The “grain” of our own perception could be different; the resolution of detail is a function of our own calculus of wellbeing, given our needs and other capacities. In our design, as in the design of other creatures, there is a trade-off in the expenditure of cognitive effort and the development of effectors of various sorts. Thus the insectivorous bird has a trade-off between flicker fusion rate and the size of its bill. If it has a wider bill it can harvest from a larger volume in a single pass, and hence has a greater tolerance for error in calculating the location of its individual prey. 6
Since I’ve been arguing for quite some time that we need to understand the appearance of consciousness as a kind of ‘flicker fusion writ large,’ I can tell you my eyebrows fairly popped off my forehead reading this particular passage. Dennett is isolating two classes of information that our robot will have no cause to model: environmental information so basic that it’s written into the structural blueprint or ‘fixed’, and environmental information so irrelevant that it is ignored outright or ‘beneath notice.’ What remains is to consider the information our robot will have cause to model:
If then some of the things in the world are considered fixed, and others are considered beneath notice, and hence are just averaged over, this leaves the things that are changing and worth caring about. These things fall roughly into two divisions: the trackable and the chaotic. The chaotic things are those things that we cannot routinely track, and for our deliberative purposes we must treat them as random, not in the quantum mechanical sense, and not even in the mathematical sense (e.g., as informationally incompressible), but just in the sense of pseudo-random. These are features of the world which, given the expenditure of cognitive effort the creature is prepared to make, are untrackable; their future state is unpredictable. 6-7
Signal and noise. If we were to design our robot along, say, the lines of a predictive processing account of the brain, its primary problem would be one of deriving the causal structure of its environment on the basis of sensory effects. As it turns out, this problem (the ‘inverse problem’) is no easy one to solve. We evolved sets of specialized cognitive tools, heuristics with finite applications, for precisely this reason. The ‘signal to noise ratio’ for any given feature of the world will depend on the utility of the signal versus the computational expense of isolating it.
So far so good. Dennett has provided four explicitly informatic categories–fixed, beneath notice, trackable, and chaotic–‘design decisions’ that will enable our robot to successfully cope with the complexities confronting it. This is where Dennett advances a far more controversial claim: that the ‘manifest image’ belonging to any species is itself an artifact of these decisions.
Now in a certain sense this claim is unworkable (and Dennett realizes as much) given the conceptual interdependence of the manifest image and the mental. The task, recall, was to build a robot that could tackle environmental complexity, not become self-aware. But his insight here stands tantalizingly close to BBT, which explains our blinkered metacognitive sense of ‘consciousness’ and ‘intentionality’ in the self-same terms of informatic access.
And things get even more interesting, first with his consideration of how the scientific image might be related to the manifest image thus construed:
The principles of design that create a manifest image in the first place also create the loose ends that can lead to its unraveling. Some of the engineering shortcuts that are dictated if we are to avoid combinatorial explosion take the form of ignoring – treating as if non-existent – small changes in the world. They are analogous to “round-off error” in computer number-crunching. And like round-off error, their locally harmless oversimplifications can accumulate under certain conditions to create large errors. Then if the system can notice the large error, and diagnose it (at least roughly), it can begin to construct the scientific image. 8
And then with his consideration of the constraints facing our robot’s ability to track and predict itself:
One of the pre-eminent varieties of epistemically possible events is the category of the agent’s own actions. These are systematically unpredictable by it. It can attempt to track and thereby render predictions about the decisions and actions of other agents, but (for fairly obvious and well-known logical reasons, familiar in the Halting Problem in computer science, for instance) it cannot make fine-grained predictions of its own actions, since it is threatened by infinite regress of self-monitoring and analysis. Notice that this does not mean that our creature cannot make some boundary-condition predictions of its own decisions and actions. 9
Because our robot possesses finite computational resources in an informatically bottomless environment, it must neglect information, and so must be heuristic through and through. Given that heuristics possess limited applicability in addition to limited computational power, it will perforce continually bump into problems it cannot solve. This will be especially the case when it comes to the problem of itself–for the very reasons that Dennett adduces in the above quote. Some of these insoluble problems, we might imagine, it will be unable to see as problems, at least initially. Once it becomes aware of its informatic and cognitive limitations, however, it could begin seeking supplementary information and techniques, ways around its limits, allowing the creation of a more ‘scientific’ image.
Now Dennett is simply brainstorming here–a fact that likely played some role in his failure to pursue its publication. But “A Route to Intelligence” stuck with him as well, enough for him to reference it on a number of occasions, and to ultimately give it a small internet venue all of its own. I would like to think this is because he senses (or at least once sensed) the potential of this general line of thinking.
What makes this paper so extraordinary, for me, is the way he explicitly begins the work of systematically thinking through the informatic and cognitive constraints facing the human brain, both with respect to its attempts to cognize its environment and itself. For his part, Dennett never pursues this line of speculative inquiry in anything other than a piecemeal and desultory way. He never thinks through the specifics of the informatic privation he discusses, and so, despite many near encounters, never finds his way to BBT. And it is this failure, I want to argue, that makes his pragmatic recovery of intentionality, the ‘intentional stance,’ seem feasible.
As it so happens, the import and feasibility of Dennett’s ‘intentional stance’ have taken a twist of late, thanks to some of his more recent claims. In “The Normal Well-tempered Mind,” for instance, he claims that he was (somewhat) mistaken in thinking that “the way to understand the mind is to take it apart into simpler minds and then take those apart into still simpler minds until you get down to minds that can be replaced by a machine,” the problem being that “each neuron, far from being a simple switch, is a little agent with an agenda, and they are much more autonomous and much more interesting than any switch.” For all his critiques of original intentionality in the heyday of computationalism, Dennett’s intentional apologetics have become increasingly strident and far-reaching. In what follows I will argue that his account of the intentional stance, and the ever-expanding range of interpretative applicability that he accords it, actually depend on his failure to think through the informatic straits of the human brain. If he had, I want to suggest, he would have seen that intentionality, like avoidance, is best explained in terms of missing information, which is to say, as a kind of perspectival illusion.
Now of course all this betrays more than a little theoretical vanity on my part, the assumption that Dennett has to be peering, stumped, at some fragmentary apparition of my particular inferential architecture. But this presumption stands high among my motives for writing this post. Why? Because for the life of me I can’t see any way around those inferences–and I distrust this ‘only game in town’ feeling I have.
But I’ll be damned if I can find a way out. As I hope to show, as soon as you begin asking what cognitive systems are accessing what information, any number of dismal conclusions seem to directly follow. We literally have no bloody clue what we’re talking about when we begin theorizing ‘mind.’
To see this, it serves to diagram the different levels of information privation Dennett considers:
The evolutionary engineering problem, recall, is one of finding some kind of ‘golden informatic mean,’ extracting only the information required to maximize fitness given the material and structural resources available and nothing else. This structurally constrained select-and-neglect strategy is what governs the uptake of information from the sum of all information available for cognition and thence to the information available for metacognition. The Blind Brain Theory is simply an attempt to think this privation through in a principled and exhaustive way, to theorize what information is available to what cognitive systems, and the kinds of losses and distortions that might result.
Information is missing. No one I know of disputes this. Each of these ‘pools’ is the result of drastic reductions in dimensionality (number of variables). Neuroscientists commonly refer to something called the ‘Curse of Dimensionality,’ the way the difficulty of finding statistical patterns in data increases exponentially as the data’s dimensionality increases. Imagine searching for a ring on a 100m length of string, which is to say, in one dimension. No problem. Now imagine searching for that ring in two dimensions, a 100m by 100m square. More difficult, but doable. Now imagine trying to find that ring in three dimensions, in a 100m by 100m by 100m cube. The greater the dimensionality, the greater the volume, the more difficult it becomes extracting statistical relationships, whether you happen to be a neuroscientist trying to decipher relations between high-dimensional patterns of stimuli and neural activation, or a brain attempting to forge adaptive environmental relations.
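The string-square-cube example can be put in code. A minimal sketch (the helper function and its name are my own illustration, not anything from Dennett or the neuroscience literature): holding the search resolution fixed, the number of cells the searcher has to examine grows exponentially with the number of dimensions.

```python
# Toy illustration of the Curse of Dimensionality via the ring search:
# at a fixed resolution (checking one 1 m cell at a time), the number of
# cells to examine grows exponentially with the number of dimensions.
def cells_to_search(extent_m: int, resolution_m: int, dims: int) -> int:
    """Number of resolution-sized cells in a cube with sides of extent_m."""
    return (extent_m // resolution_m) ** dims

for dims in (1, 2, 3):
    print(dims, cells_to_search(100, 1, dims))  # 100, then 10000, then 1000000
```

The same exponential blow-up is what makes high-dimensional statistical pattern-finding so expensive, whether for the neuroscientist or for the brain itself.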
For example, ‘semantic pointers,’ Eliasmith’s primary innovation in creating SPAUN (the recent artificial brain simulation that made headlines around the world) are devices that maximize computational efficiency by collapsing or inflating dimensionality according to the needs of the system. As he and his team write:
Compression is functionally important because low-dimensional representations can be more efficiently manipulated for a variety of neural computations. Consequently, learning or defining different compression/decompression operations provides a means of generating neural representations that are well suited to a variety of neural computations. “A Large-Scale Model of the Functioning Brain,” 1202
The human brain is rife with bottlenecks, which is why Eliasmith’s semantic pointers represent the signature contribution they do, a model for how the brain potentially balances its computational resources against the computational demands facing it. You could say that the brain is an evolutionary product of the Curse, since it is in the business of deriving behaviourally effective ‘representations’ from the near bottomless dimensionality of its environment.
Although Dennett doesn’t reference the Curse explicitly, it’s implicit in his combinatoric characterization of our engineering problem, the way our robot has to suss out adaptive patterns in the “combinatorial explosion,” as he puts it, of environmental variables. Each of the information pools he touches on, in other words, can be construed as solutions to the Curse of Dimensionality. So when Dennett famously writes:
I claim that the intentional stance provides a vantage point for discerning similarly useful patterns. These patterns are objective–they are there to be detected–but from our point-of-view they are not out there entirely independent of us, since they are patterns composed partly of our own “subjective” reactions to what is out there; they are the patterns made to order for our narcissistic concerns. The Intentional Stance, “Real Patterns, Deeper Facts, and Empty Questions,” 39
Dennett is discussing a problem solved. He recognizes that the solution is parochial, or ‘narcissistic,’ but it remains, he will want to insist, a solution all the same, a powerful way for us (or our robot) to predict, explain, and manipulate our natural and social environments as well as ourselves. Given this efficacy, and given that the patterns themselves are real, even if geared to our concerns, he sees no reason to give up on intentionality.
On BBT, however, the appeal of this argument is largely an artifact of its granularity. Though Dennett is careful to reference the parochialism of intentionality, he does not do it justice. In “The Last Magic Show,” I turned to the metaphor of shadows at several turns, trying to capture something of the information loss involved in consciousness, unaware that researchers, trying to understand how systems preserve functionality despite massive reductions of dimensionality, had devised mathematical tools, ‘random projections,’ that take the metaphor quite seriously:
To understand the central concept of a random projection (RP), it is useful to think of the shadow of a wire-frame object in three-dimensional space projected onto a two dimensional screen by shining a light beam on the object. For poorly chosen angles of light, the shadow may lose important information about the wire-frame object. For example, if the axis of light is aligned with any segment of wire, that entire length of wire will have a single point as its shadow. However, if the axis of light is chosen randomly, it is highly unlikely that the same degenerate situation will occur; instead, every length of wire will have a corresponding nonzero length of shadow. Thus the shadow, obtained by this RP, generically retains much information about the wire-frame object. (Ganguli and Sompolinsky, “Sparsity and Dimensionality,” 487)
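The wire-frame intuition is easy to verify numerically. Below is a hedged sketch (stdlib only; the particular sizes and names are arbitrary choices of mine, not from Ganguli and Sompolinsky): points in a 400-dimensional space are projected through a randomly oriented linear map onto a 100-dimensional ‘screen,’ and the pairwise distances between the resulting ‘shadows’ stay close to the originals.

```python
import math
import random

random.seed(0)
N_POINTS, DIM, K = 40, 400, 100  # 40 points, 400-d space, 100-d shadow

# The 'wire-frame object': random points in a high-dimensional space.
points = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_POINTS)]

# The random projection: K randomly oriented directions of 'light'.
# Scaling by 1/sqrt(K) keeps expected lengths roughly unchanged.
proj = [[random.gauss(0, 1) / math.sqrt(K) for _ in range(K)]
        for _ in range(DIM)]

def shadow(p):
    """Project a DIM-dimensional point onto the K-dimensional 'screen'."""
    return [sum(p[i] * proj[i][j] for i in range(DIM)) for j in range(K)]

shadows = [shadow(p) for p in points]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Ratio of shadow distance to original distance, for every pair of points.
ratios = [dist(shadows[i], shadows[j]) / dist(points[i], points[j])
          for i in range(N_POINTS) for j in range(i + 1, N_POINTS)]
print(min(ratios), max(ratios))  # both typically land close to 1.0
```

Because the directions are chosen randomly, no ‘degenerate’ alignment collapses a whole length of wire to a point: the geometry survives a fourfold reduction in dimensionality, which is the formal content of the shadow metaphor.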
On the BBT account, mind is what the Curse of Dimensionality looks like from the inside. Consciousness and intentionality, as they appear to metacognition, can be understood as concatenations of idiosyncratic low-dimensional ‘projections.’ Why idiosyncratic? Because when it comes to ‘compression,’ evolution isn’t so much interested in ‘veridical conservation’ as in scavenging effective information. And what counts as ‘effective information’? Whatever facilitates genetic replication–period. In terms of the wire-frame analogy, the angle may be poorly chosen, the projection partial, the light exceedingly dim, etc., and none of this would matter so long as the information projected discharged some function that increased fitness. One might suppose that only compression will serve in some instances, but to assume that only compression will serve in all instances is simply to misunderstand evolution. Think of ‘lust’ and the biological need to reproduce, or ‘love’ and the biological need to pair-bond. Evolution is opportunistic: all things being equal, the solutions it hits upon will be ‘quick and dirty,’ and utterly indifferent to what we intuitively assume (let alone want) to be the case.
Take memory research as a case in point. In the Theaetetus, Plato famously characterized memory as an aviary, a general store from which different birds, memories, could be correctly or incorrectly retrieved. It wasn’t until the late 19th century, when Hermann Ebbinghaus began tracking his own recall over time in various conditions, that memory became the object of scientific investigation. From there the story is one of greater and greater complication. William James, of course, distinguished between short and long term memory. Skill memory was distinguished from long term memory, which Endel Tulving famously decomposed into episodic and semantic memory. Skill memory, meanwhile, was recognized as one of several forms of nondeclarative or implicit memory, including classical conditioning, non-associative learning, and priming, which would itself be decomposed into perceptual and conceptual forms. As Plato’s grand aviary found itself progressively more subdivided, researchers began to question whether memory was actually a discrete system or rather part and parcel of some larger cognitive network, and thus not the distinct mental activity assumed by the tradition. Other researchers, meanwhile, took aim at the ‘retrieval assumption,’ the notion that memory is primarily veridical, adducing evidence that declarative memory is often constructive, more an attempt to convincingly answer a memory query than to reconstruct ‘what actually happened.’
The moral of this story is as simple as it should be sobering: the ‘memory’ arising out of casual introspection (monolithic and veridical) and the memory arising out of scientific research (fractionate and confabulatory) are at drastic odds, to the point where some researchers suggest the term ‘memory’ is itself deceptive. Memory, like so many other cognitive capacities, seems to be a complex of specialized capacities arising out of non-epistemic and epistemic evolutionary pressures. But if this is the case, one might reasonably wonder how Plato could have gotten things so wrong. Well, obviously the information available to metacognition (in its ancient Greek incarnation) falls far short of the information required to accurately model memory. But why would this be? Well, apparently forming accurate metacognitive models of memory was not something our ancestors needed to survive and reproduce.
We have enough metacognitive access to isolate memory as a vague capacity belonging to our brains and nothing more. The patterns accessed, in other words, are real patterns, but it seems more than a little hinky to take the next step and say they are “made to order for our narcissistic concerns.” For one, whatever those ‘concerns’ happen to be, they certainly don’t seem to involve any concern with self-knowledge, particularly when the ‘concerns’ at issue are almost certainly not the conscious sort–which is to say, concerns that could be said to be ‘ours’ in any straightforward way. The concerns, in fact, are evolutionary: Metacognition, for reasons Dennett touched on above and that I have considered at length elsewhere, is a computational nightmare, more than enough to necessitate the drastic informatic compromises that underwrite Plato’s Aviary.
And as memory goes, I want to suggest, so goes intentionality. The fact is, intentional patterns are not “made to order for our narcissistic concerns.” This is a claim that, while appearing modest, characterizes intentionality as an instrument of our agency, and so ‘narcissistic’ in a personal sense. Intentional patterns, rather, are ad hoc evolutionary solutions to various social or natural environmental problems, some perhaps obvious, others obscure. And this simply refers to the ‘patterns’ accessed by the brain. There is the further question of metacognitive access, and the degree to which the intentionality we all seem to think we have might not be better explained as a kind of metacognitive illusion pertaining to neglect.
Asymptotic. Bottomless. Rules hanging with their interpretations.
All the low-dimensional projections bridging pool to pool are evolutionary artifacts of various functional requirements, ‘fixes,’ multitudes of them, to some obscure network of ancestral environmental problems. They are parochial, not to our ‘concerns’ as ‘persons,’ but to the circumstances that saw them selected to the exclusion of other possible fixes. To return to Dennett’s categories, the information ‘beneath notice,’ or neglected, may be out-and-out crucial for understanding a given capacity, such as ‘memory’ or ‘agency’ or what have you, even though metacognitive access to this information was irrelevant to our ancestors’ survival. Likewise, what is ‘trackable’ may be idiosyncratic, information suited to some specific, practical cognitive function, and therefore entirely incompatible with and so refractory to theoretical cognition–philosophy as the skeptics have known it.
Why do we find the notion of a fractionate, non-veridical memory surprising? Because we assume otherwise, namely, that memory is whole and veridical. Why do we assume otherwise? Because informatic neglect leads us to mistake the complex for the simple, the special purpose for the general purpose, and the tertiary for the primary. Our metacognitive intuitions are not reliable; what we think we do or undergo and what the sciences of the brain reveal need only be loosely connected. Why does it seem so natural to assume that intentional patterns are “made to order for our narcissistic concerns”? Well, for the same reason it seems so natural to assume that memory is monolithic and veridical: in the absence of information to the contrary, our metacognitive intuitions carry the day. Intentionality becomes a personal tool, as opposed to a low-dimensional projection accessed via metacognitive deliberation (for metacognition), or a heuristic device possessing a definite evolutionary history and a limited range of applications (for cognition more generally).
So to return to our diagram of ‘information pools’:
we can clearly see how the ‘Curse of Dimensionality’ is compounded when it comes to theoretical metacognition. Thus the ‘blind brain’ moniker. BBT argues that the apparent perplexities of consciousness and intentionality that have bedevilled philosophy for millennia are artifacts of cognitive and metacognitive neglect. It agrees with Dennett that the relationship between all these levels is an adaptive one, that low-dimensional projections must earn their keep, but it blocks the assumption that we are the keepers, seeing this intuition as the result of metacognitive neglect (sufficiency, to be precise). It’s no coincidence, it argues, that all intentional concepts and phenomena seem ‘acausal,’ both in the sense of seeming causeless, and in the sense of resisting causal explanation. Metacognition has no access whatsoever to the neurofunctional context of any information broadcast or integrated in consciousness, and so finds itself ‘encapsulated,’ stranded with a profusion of low-dimensional projections that it cannot cognize as such, since doing so would require metacognitive access to the very neurofunctional contexts that are occluded. Our metacognitive sense of intentionality, in other words, depends upon making a number of clear mistakes–much as in the case of memory.
The relations between ‘pools,’ it should be noted, are not ‘vehicles’ in the sense of carrying ‘information about.’ All the functioning components in the system would have to count as ‘vehicles’ if that were the case, insofar as the whole is required to produce the information that does find itself broadcast or integrated. The ‘information about’ part is simply an artifact of what BBT calls medial neglect, the aggregate blindness of the system to its ongoing operations. Since metacognition can only neglect the neural functions that make a given conscious experience possible–since it is itself invisible to itself–it confuses an astronomically complex systematic effect for a property belonging to that experience.
The very reason theorists like Dretske or Fodor insist on semantic interpretations of information is the same reason those interpretations will perpetually resist naturalistic explanation: they are attempting to explain a kind of ‘perspectival illusion,’ the way the information broadcast or integrated exhausts the information available for deliberative cognition, so generating the ‘only-game-in-town-effect’ (or sufficiency). ‘Thoughts’ (or the low-dimensional projections we confuse for them) must refer to (rather than reliably covary with) something in the world because metacognition neglects all the neurofunctional and environmental machinery of that covariance, leaving only Brentano’s famous posit, intentionality, as the ‘obvious’ explanandum–one rendered all the more ‘obvious’ by thousands of largely fruitless years of intentional conceptual toil.
Aboutness is magic, in the sense that it requires the neglect of information to be ‘seen.’ It is an illusion of introspection, a kind of neural camera obscura effect, ‘obvious’ only because metacognition is a captive of the information it receives. This is why our information pool diagram can be so easily retooled to depict the prevailing paradigm in the cognitive sciences today:
The vertical arrows represent medial functions (sound, light, neural activity) that are occluded and so are construed acausally. The ‘mind’ (or the network of low-dimensional projections we confuse as such) is thought to be ‘emergent from’ or ‘functionally irreducible to’ the brain, which possesses both conscious and nonconscious ‘representations of’ or ‘intentional relations to’ the world. No one ever pauses to ask what kind of cognitive resources the brain could bring to bear upon itself, what it would take to reliably model the most complicated machinery known from within that machinery using only cognitive systems adapted to modelling external environments. The truth of the brain, they blithely assume, is available to the brain in the form of the mind.
But this is little more than wishful ‘thinking,’ as the opaque, even occult, nature of the intentional concepts used might suggest. Whatever emergence the brain affords, why should metacognition possess the capacity to model it, let alone be it? Whatever function the broadcasting or integration of a given low-dimensional projection provides, why should metacognition, which is out-and-out blind to neurofunctionality, possess the capacity to reliably model it, as opposed to doing what cognition always does when confronted with insufficient information it cannot flag as insufficient–namely, leap to erroneous conclusions?
All of this is to say that the picture is both clearer and less sunny than Dennett’s ultimately abortive interrogation of information privation would lead us to believe. Certainly in an everyday sense it’s obvious that we take perspectives, views, angles, standpoints, and stances vis-à-vis various things. Likewise, it seems obvious that we have two broad ways in which to explain things, either by reference to what causes an event, or by virtue of what rationalizes an event. As a result, it seems natural to talk of two basic explanatory perspectives or stances, one pertaining to the causes of things, the other pertaining to the reasons for things.
The question is one of how far we can trust our speculations regarding the latter beyond this platitudinous observation. One might ask, for instance, if intentionality is a heuristic, which is to say, a specialized problem solver, then what are its conditions of applicability? The mere fact that this is an open question means that things like the philosophical question of knowledge, to give just one example, should be divided into intentional and mechanical incarnations–at the very least. Otherwise, given the ‘narcissistic idiosyncrasy’ of the former, we need to consider whether the kinds of conundrums that have plagued epistemology across the ages are precisely what we should expect. Chained to the informatic bottleneck of metacognition, epistemology has been trading in low-dimensional projections all along, attempting time and again to wring universality out of what amount to metacognitive glimpses of parochial cognitive heuristics. There’s a very real chance the whole endeavour has been little more than a fool’s errand.
The real question is one of why, as philosophers, we should bother entertaining the intentional stance. If the aim of philosophy really is, as Sellars has it, “to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term,” if explanatory scope is our goal, then understanding intentionality amounts to understanding it in functional terms, which is to say, as something that can only be understood in terms of the information it neglects. What is the adaptive explanatory ecology of any given intentional concept? What was it selected for? And if it is ‘specialized,’ would that not suggest incompatibility with different (i.e., theoretical) cognitive contexts? Given what little information we have, what arbitrates our various metacognitive glimpses, our perpetually underdetermined interpretations, allowing us to discriminate between the stages on the continuum running from the reliable to the farcical?
Short of answers to these questions, we cannot even claim to be engaging in educated as opposed to mere guesswork. So to return to “The Normal Well-tempered Mind,” what does Dennett mean when he says that neurons are best seen as agents? Does he mean that cellular machinery is complicated machinery, and so ill-served when conceptualized as a ‘mere switch’? Or does he mean they really are like little people, organized in little tribes, battling over little hopes and little crimes? I take it as obvious that he means the former, and that his insistence on the latter is more the ersatz product of a commitment he made long ago, one he has invested far too much effort in to relinquish.
‘Feral neurons’ are a metaphoric conceit, an interesting way to provoke original thought, perhaps, a convenient façon de parler in certain explanatory contexts, but more an attempt to make good on an old and questionable argument than anything, one that would have made a younger Dennett, the one who wrote “Mechanism and Responsibility,” smile and scowl as he paused to conjure some canny and critical witticism. Intentionality, as the history of philosophy should make clear, is an invitation to second-order controversy and confusion. Perhaps what we have here is a potential empirical basis for the infamous Wittgensteinian injunction against philosophical language games. Attributing intentionality in first-order contexts is not only well and fine, it’s unavoidable. But as soon as we make second-order claims on the basis of metacognitive deliberation, say things like, ‘Knowledge is justified, true belief,’ we might as well be playing Monopoly using the pieces of Risk, ‘deriving’ theoretical syntaxes constrained–at that point–by nothing ‘out there.’
On BBT, ‘knowledge’ simply is what it has to be if we agree that the life science paradigm cuts reality as close to the joints as anything we have ever known: a system of mechanical bets, a swarm of secondary asteroids following algorithmic trajectories, ‘miraculously’ averting disaster time and again.