‘V’ is for Defeat: The Total and Utter Annihilation of Representational Theories of Mind

by rsbakker

Aphorism of the Day: The mere fact of cartoons shouts the environmental orientation of our cognitive heuristics. A handful of lines is all the brain needs to create a world. South Park, of all things, likely means we have no idea what we’re talking about when we purport to explain ‘consciousness.’

.

Some kind of pervasive and elusive incompatibility haunts the relation between our intuitive self-understanding, what Wilfrid Sellars famously referred to as the ‘Manifest Image,’ and our ever-deepening natural self-understanding, the ‘Scientific Image.’ The question is really quite simple: How do we make intentionality consistent with causality? How do we make the intentional logic of the subject fit with the causal logic of the object? Most philosophers are what might be called semantic Hawks, thinkers bent on finding ways of overcoming this incompatibility, hoping against hope that the resolution will leap out of the conceptual or empirical details. Some are semantic Diplomats, thinkers who have thrown their hands up, arguing the cognitive autonomy of the two domains. And still others, the semantic Profiteers, simply want to translate the causal into an expression of the intentional, to make science one particularly powerful ‘language game’ among others.

I’m what you might call a semantic Defeatist, someone convinced the only real solution is to explain the whole thing away. I think the Hawks are fighting a battle they’ve literally evolved to lose, that the Diplomats, despite their best intentions, are negotiating with ghosts, and that the Profiteers have simply found a way to load the horse and whip the cart. Defeatists, of course, rarely prevail, but they do persist. And so the madness of arguing for the profound and troubling structural role blindness plays in human consciousness and cognition continues. Existence understood as the tissue of neglect. Yee. Hah.

Today, I want to discuss the semantic Hawks, provide a historical and conceptual cartoon of what makes them so warlike, and then sketch out, as best as I can, why I think they are doomed to lose their war.

Like their political counterparts, semantic Hawks are motivated by conviction, particularly regarding the nature of meaning, representation, and truth. Given the millennial philosophical miasma surrounding these concepts, one might wonder how anyone could muster any conviction of any kind regarding their ‘nature.’ I know back in my continental philosophical days it was one of those ‘other guy’ head-scratchers, the preposterous commitment that made so much so-called ‘analytic thought’ sound more like religion than philosophy. But that was bigotry on my part, plain and simple. The Hawks constitute the semantic majority for damn good reasons. They are eminently sensible, which, as we shall see, is precisely the problem.

Historically, you have the influence of Frege and Russell at the beginning of the 20th century. A hundred and fifty years earlier, Hume’s examinations of human nature had dramatically disclosed the limits that subjectivity placed on our attempts to think objective truth. Toward the end of the 18th century, Kant thought he had seen a way through: if we could deduce the categorical nature of that subjectivity, then we could, at the very least, grasp the true-for-us. But this just led to Hegel and the delicious-but-not-so-nutritious absurdity of reducing everything to ‘objective subjectivity.’ What Frege and Russell offered was nothing less than a way to pop the suffocating bubble of subjectivity, theories of meaning that seemed to put language, and therefore language users, in clear contact with the in-itself.

Practically speaking, the development of formal semantics was like cracking open caulked-shut windows. Given a handful of rules, you could formalize what seem to be the truth-preserving features of natural languages. Of course, it only captured a limited set of linguistic features, and even within this domain it was plagued with puzzles and explanatory conundrums. But it was extraordinarily powerful nonetheless, so much so that it seemed natural to assume that with a little ingenious conceptual work all those pesky wrinkles could be ironed out, and we could jam with a perfectly-pressed Frock of Ages.

The theories of meaning arising out of these considerations in the philosophy of language also seemed–and still seem–to nicely dovetail with parallel questions in the philosophy of mind. Like language, conscious experience clearly seems to put us in logical contact with the world. Experiences, like claims, can be true or false. Phenomenology, like phonology, seems to vanish in the presentation of something else. And this drops us square in the lap of representationalism’s power as an explanatory paradigm: intentionality, meaning, and normativity are not simply central to human cognition, they are the very things that must be explained.

Conscious experience is representational: the reason we see through experience is the same as the reason we see through paintings or television screens. What is presented–qualia or paint or pixelated light–re-presents something else from the world, the representational content. What could be more obvious?

With the development of computers toward the middle of the 20th century, theorists in philosophy and psychology suddenly found themselves with a conspicuously mechanistic model of how it might all work. Human cognition, both personal and subpersonal, could be understood in terms of computations performed on representations. The relation of the mental to the neural, on this account, was no more mysterious than the relation between software and hardware (which, as it turns out, is every bit as mysterious!). And so, given this combination of intuitive appeal and continuity with other ‘hard’ research programs, representational theories of mind proved well nigh irresistible, not only to Anglo-American philosophy, but to a psychological establishment keen to go to rehab after a prolonged bout of behaviourism.

The real problem, aside from deciding the best way to characterize the theoretical details of the representational picture, is one of ironing out the causal details. The brain, after all, is biomechanical, an object belonging to the domain of the life sciences more generally. If you want to avoid the hubristic and (from a scientific perspective) preposterous enterprise of positing supra-natural entities, you need to explain how all this representation business, well, actually works. Thus the decades-long project of theorizing causal accounts of content.

The big problem, it turns out, is one of providing a natural account of content determination that simultaneously makes sense of misrepresentation. Jerry Fodor famously frames the difficulty in terms of the ‘disjunction problem’: you can say that your representation ‘dog’ is causally triggered by sensing a dog in your environment, which seems all well and good. The problem is that your representation ‘dog’ is sometimes causally triggered by sensing a fox in your environment (perhaps in less than ideal observational conditions). So the question becomes what, causally, makes your representation ‘dog’ a representation of a dog as opposed to a representation of a dog or fox. What, in other words, causally explains the way representations can be wrong? This may seem innocuous at first glance, but the very intelligibility of the representational account depends on it. Without some natural way of sorting content-determining causes (dogs) from non-content-determining causes (foxes or anything else) you quite simply have no causal account of content.
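You can see the bind in miniature. Here is a toy sketch (mine, not Fodor’s, with invented features and thresholds) of a purely causal ‘dog detector.’ Every causal fact about it is on the table, and still those facts cannot tell us whether a fox-triggered firing is an error about dogs or a correct response to dogs-or-foxes:

```python
# A toy 'dog detector' with invented, purely illustrative criteria.
# It fires on whatever causes happen to clear its crude thresholds.

def dog_detector(stimulus):
    """Tokens DOG whenever the stimulus clears these thresholds."""
    return (stimulus["four_legged"] and stimulus["furry"]
            and stimulus["size"] > 0.3)

dog_in_daylight = {"four_legged": True, "furry": True, "size": 0.6}
fox_at_dusk = {"four_legged": True, "furry": True, "size": 0.4}

# Both causes reliably token DOG:
assert dog_detector(dog_in_daylight)
assert dog_detector(fox_at_dusk)

# Two readings fit the causal facts equally well:
#   (a) the content is DOG, so the fox-tokening is a misrepresentation;
#   (b) the content is DOG-OR-FOX, so nothing has gone wrong at all.
# Nothing in the mechanism's causal profile favours (a) over (b).
```

The point of the toy is simply that causal covariance, taken alone, underdetermines content: any ‘error’ can always be redescribed as the correct detection of a more disjunctive property.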

After decades of devious ingenuity, critics (most notably Fodor himself) have always been able to show how purported solutions run afoul of some variant of this problem. So why not strike your colours and move on, as a Defeatist like me advocates? The thing to remember is that there are at least two explanatory devils in this particular philosophical room: for many, conscious experience, short of representational theories, seems so baffling that the difficulties pertaining to causal content determination are a bargain in comparison. And this is one big reason why anti-representational accounts have made only modest headway over the intervening years: they literally seem to throw the baby out with the bathwater.

For the Hawk, intentionality is a primary explanandum. Recall the power of formal semantics I alluded to above: not only do logic and mathematics work, not only do they make science itself possible, they seem to be intentional through and through (though BBT disputes even this!). Given that intentionality is every bit as ‘real’ as causality, the question becomes one of how they come together in our heads. The responsible thing, it would seem, is to chalk up their track record of theoretical failure to mere factual ignorance, to simply continue taking runs at the problem armed with more and more neuroscientific knowledge.

As a Defeatist, however, I think the problem is thoroughly paradigmatic. I don’t worry about throwing out the baby with the bathwater simply because I’m not convinced the baby ever existed (unlike the Profiteers, for instance, who think the baby was switched in the hospital). For the Hawk, however, this means I have nothing short of an extraordinary explanatory and argumentative burden to discharge: not only do I need to explain why there’s no intentional baby, I need to explain why so many are so convinced that there is. Even worse, it would seem that I need to also explain away formal semantics itself, or at least account for its myriad and quite dazzling achievements. Worst of all, I probably need to explain Truth on top of everything.

The Blind Brain Theory (BBT) has crazy things to say about all of this. But I lack the space to do much more than wedge my foot in the door here. None of these burdens will be discharged in what follows. If I manage to convince a soul or two that their ingenuity is better wasted elsewhere, so much the better. But all I really want to show is that BBT is worth the time and effort required to understand it on its own terms. And I hope to do this by using it to formulate two interrelated questions that I think are so straightforward and so obviously destructive of the representationalist paradigm, they might actually merit the hyperbole of this post’s title.

The first point I want to make has to do with heuristics, particularly as they are conceived by the growing number of researchers studying what is called ‘ecological rationality.’ Any strategy that solves problems by ignoring available information is heuristic. ‘Rules of thumb’ work by means of granularity and neglect, by ignoring complexities or entire domains if need be. As a result, they are problem-specific: they only work when applied to a limited set of specifically structured challenges. As Todd and Gigerenzer write,

“The concept of ecological rationality–of specific decision-making tools fit to particular environments–is intimately linked to that of the adaptive toolbox. Traditional theories of rationality that instead assume one single decision mechanism do not even ask when this universal tool works better or worse than any other, because it is the only one thought to exist. Yet the empirical evidence looks clear: Humans and other animals rely on multiple cognitive tools. And cognition in an uncertain world would be inferior, inflexible, and inefficient with a general purpose optimizing calculator…” (Ecological Rationality, 14)

Ecological rationality looks at cognition in thoroughly evolutionary terms, which is to say, as adaptations, as a ‘toolbox’ of myriad biomechanical responses to various environmental challenges. It turns out that optimization strategies, problem-solving approaches that seek to maximize information availability in an attempt to generate optimal solutions, are not only much more computationally cumbersome (and thus an evolutionary liability), they are also often less effective than far simpler, far cheaper, quicker, and more robust heuristic strategies.

Todd and Gigerenzer give the example of catching a baseball. Until recently the prevailing assumption was that fielders unconsciously used a complex algorithm to estimate distance, velocity, angle, resistance, wind, and so on, to calculate the ball’s trajectory and anticipate where it would land–all within a matter of seconds. As it turns out, they actually rely on rules of thumb like the gaze heuristic, where they fix their gaze on the ball high up and start running so that the image of the ball rises at a continuous rate relative to their gaze and position. Rather than calculate the ball’s trajectory, they let the trajectory steer them in.
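To make the informational frugality vivid, here is a toy sketch of my own (an idealized, drag-free parabola, with invented launch numbers): for an observer standing at the landing point, the tangent of the gaze angle rises at a perfectly steady rate, while for an observer standing anywhere else the rise drifts, and the direction of the drift says which way to run. No trajectory is ever computed:

```python
# Toy demonstration of the informational basis of the gaze heuristic.
# Launch parameters and observer offsets are invented for illustration.

G = 9.81   # gravity (m/s^2)
DT = 0.05  # sampling interval (s)

def optical_rise_rates(observer_x, vx=20.0, vy=20.0, t_max=3.5):
    """Rate of change of tan(gaze elevation) for a fixed observer
    watching a ball launched from the origin with velocity (vx, vy)."""
    rates, prev_tan, t = [], None, DT
    while t <= t_max:
        bx = vx * t                      # ballistic flight, no drag
        by = vy * t - 0.5 * G * t * t
        tan = by / (observer_x - bx)     # ball stays in front of observer
        if prev_tan is not None:
            rates.append((tan - prev_tan) / DT)
        prev_tan = tan
        t += DT
    return rates

landing_x = 20.0 * (2 * 20.0 / G)        # ~81.5 m for this launch

for label, x in [("at the landing point", landing_x),
                 ("10 m too shallow", landing_x - 10.0),
                 ("10 m too deep", landing_x + 10.0)]:
    r = optical_rise_rates(x)
    print(f"{label}: rise rate goes from {r[0]:.3f} to {r[-1]:.3f} per second")
```

Run it and the asymmetry is stark: only at the landing point does the rate hold steady (here at roughly 0.245 per second); stand too shallow and the image accelerates (back up!), too deep and it decays (run in!). A fielder tracking just this one quantity gets steered to the catch, which is precisely what makes the heuristic so cheap, and precisely what the optimization story needlessly reproduces.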

For our purposes, the important aspects of heuristic troubleshooting are 1) informatic neglect, the strategic omission of information; and 2) ecological matching, the way heuristics are only effective for a certain set of problems.

As far as I know, no one in consciousness research and philosophy of mind circles has bothered to think through the more global implications of informatic neglect on cognition, let alone consciousness. Most everyone with a naturalistic bent accepts the heuristic, plural nature of human and animal cognition. But no one to my knowledge has thought through the fact that the ‘representational paradigm’ is itself a heuristic.

How can we know the ‘R-paradigm’ is heuristic? Well… Because of the need to provide a causal account of content-determination!

Causal information, in other words, is the information neglected, the very thing the R-paradigm elides. I think you could mount a strong argument that the R-paradigm has to be heuristic simply on evolutionary, developmental grounds. But the primary reason is structural: there is simply no way for the brain to track the causal complexities of its own cognitive systems, even if it paid evolutionary dividends to do so. This structural fact, you could suppose, finds expression in the paradigmatic absence of neurofunctional information in so-called representational cognition.

The R-paradigm is heuristic–full stop. It systematically neglects information. This means (or at the very least, strongly suggests) that the R-paradigm, like all other heuristics, is ecologically matched to a specific set of problems. The R-paradigm, in other words, is not a universal problem-solving device.

And this means that the R-paradigm is something that can be applied out-of-school–that it can be misapplied. Understood in these terms, the tenacious nature of the content-determination problem (and the grounding problem more generally) takes on an entirely new significance: Is it merely coincidental that Hawkish philosophers cannot conceptually (let alone empirically) explain the R-paradigm in causal terms–which is to say, in terms of the very information the R-paradigm neglects?

Perhaps. But let’s take a closer look.

As a heuristic, the R-paradigm necessarily has a limited scope of applicability: it is a parochial problem-solver, and only appears universal thanks (once again) to informatic neglect. It seems relatively safe to assume that the R-paradigm is primarily adapted to environmental problem-solving, or third-person cognition. If this were so, we might expect it to possess a certain facility for causal relations in our environments. And indeed, as the transparency that motivates the Hawks would suggest, it’s tailor-made for causal explanations of things not itself. It neglects almost all information pertaining to our informatic relation to our environment, and delivers objects bouncing around in relation to one another–fodder for causal explanation.

Small wonder, then, that everything goes haywire when you take this heuristic to the question of consciousness and the brain. Neglecting your informatic relation to functionally independent systems in your environment is one thing; neglecting your informatic relation to functionally dependent systems in your own brain is something altogether different. The R-paradigm is quite literally a heuristic that neglects the very information required to cognize consciousness. How could it not misfire when faced with this problem? How could it come remotely close to accurately characterizing itself?

The problem of content determination, on the BBT account, is actually analogous to the problem of self-determination–which is to say, free will. In the latter, the problem is one of causally squaring the circle of ‘choice,’ whereas in the former the problem is one of causally squaring the circle of ‘meaning.’ Where cause flattens choice, it simply sails past meaning. And how could it be otherwise, when nothing less than truth is the long-sought-after ‘effect’?

Like choice, aboutness is a heuristic, a way of managing environmental relationships in the absence of constitutive causal information. It is a kluge–perhaps the most profound one. No conspiracy of causal factors can conjure representational content because the relationship sought is an exceedingly effective but nevertheless granular substitute for the lack of access to those selfsame factors.

Of course it doesn’t seem that way, intuitively speaking. Consider the example of the gaze heuristic, given above. Does it make sense to suppose the gaze heuristic is actually an optimization algorithm? Of course not: informatic neglect is constitutive of heuristic problem-solving. So why did so many assume that some kind of optimization algorithm underwrote ball catching? Why, in other words, was the informatic neglect involved in ball-catching something that required experimental research to reveal? Well, because informatic neglect is just that: informatic neglect. Not only is information systematically elided, information regarding this elision is lacking as well. This effectively renders heuristics invisible to conscious experience. Not only do we lack direct awareness of which heuristic we are using, we generally have no idea that we are relying on heuristics at all. (Kahneman’s recent Thinking, Fast and Slow provides a wonderful crash course on this point. What he calls WYSIATI, or What-You-See-Is-All-There-Is, is a version of ‘informatic neglect’ as used here.)

Aboutness not only seems ‘sufficient,’ the only tool we need; it also seems universal, a tool for all problem-solving occasions. Moreover, given the profoundly structural nature of the informatic neglect involved, the fact that the brain is necessarily blind to its own neurofunctionality, there is a sense in which aboutness is unavoidable: if the gaze heuristic is one tool among many, then aboutness is our hand, a ‘tool’ we cannot but use (short of fumbling things with our elbows). More still, you can add to this list what might be called the ‘ease of worlding.’ One need only watch an episode of South Park to appreciate how primed our cognitive systems are, and how little information they require, to generate ‘external environments.’ It’s easy to forget that the ‘representational images’ that surround us are actually spectacular kinds of visual illusions. Structure a meagre amount of visual information the proper way, and we automatically cognize depth in flat surfaces populated with non-existent objects.

Aboutness provides the structural frame of our cognitive relation to our environments, conjuring worlds automatically at the least provocation. Given this, you could argue that representational theories of mind are a kind of ‘forced move,’ a theoretical step we had to take in our attempts to understand consciousness. But you can also see why it’s something a mature scientific account of consciousness and cognition requires us to see our way past. As soon as you acknowledge the intimate, inextricable relationship between mind and brain, you acknowledge that the former somehow turns on neurofunctionality–which is to say, the very thing systematically neglected by aboutness.

Reflecting on conscious experience means feeding brain processes to a heuristic that spontaneously and systematically renders them causally inexplicable. In a sense, this explains the charges of ‘homunculism’ you find throughout the literature. The idea of a ‘little observer in the head’ that mistakenly ‘objectifies’ or ‘hypostatizes’ aspects of conscious experience is more than a little impressionistic. Framed in terms of heuristics and informatic neglect, the metaphoric problem of homunculism becomes a clear instance of heuristic misapplication: How can we trust a heuristic obviously designed to cognize our environments absent neurofunctional information to assist our attempts to cognize ourselves in terms of neurofunctional information?

If anything, one should expect that such a heuristic system would cognize the brain in non-neurofunctional terms, which is to say, as something quite apart from the brain. In other words, given something like an aboutness heuristic, one should expect dualistic interpretations of consciousness to be a kind of intuitive default. And what is more, given something like the aboutness heuristic, one should expect consciousness to be exceedingly difficult to understand in causal–which is to say, naturalistic–terms. Using the aboutness heuristic to cognize the brain environmentally, in the third-person, isn’t problematic simply because isolating causal relations in functionally independent systems is its stock-in-trade. Neglecting all the enabling machinery between the cognizing brain and the brain cognized facilitates cognizing the latter because that machinery is irrelevant to its function. Blindness to its own enabling machinery literally facilitates seeing the enabling machinery of other brains. Using the aboutness heuristic to cognize the brain in the first-person, therefore, is bound to generate intuitions of profound difference, as well as drive an apparently radical cognitive wedge between the first-person and third-person. What is obvious in the latter becomes obscure in the former, and vice versa.

The route from the aboutness heuristic, the implicit device we are compelled to use given the structural inaccessibility of neurofunctional information, to the philosophically explicit R-paradigm described above should be obvious, at least in outline. Using the aboutness heuristic to cognize the brain in the first-person–in metacognitive applications–will tend to make an ‘environment’ of conscious experience, transform it into a repertoire of discrete elements. Since these elements seem to automatically vanish like paint or pixels in the apparent process of presenting something else, and since the enabling machinery is nowhere to be found, the activity of the aboutness heuristic is mistaken for a property belonging to each element. They are dubbed ‘representations,’ discrete ‘vehicles’ that take the something-else-presented as their ‘content’ or ‘meaning.’

Since the informatic neglect of causality is also constitutive of this new, secondary aboutness relation between thing representing and thing represented, it must be conceived in granular, normative terms–which is to say, in terms belonging to still another heuristic adapted to the structural neglect of causal information. And this, of course, kicks the door open onto another domain of philosophical perplexity (and another longwinded bloghard).

But if we take the mechanistic paradigm of the life sciences as our cognitive baseline, as representational theories of mind purport to do, then it should be quite clear that there are no such things as representations (not even in the environmental sense of paintings and television screens). What we call ‘representations,’ what seems so obvious to basic intuition, is actually an artifact of that intuition, a ‘rule of thumb’ so profound that it seems to structure conscious experience itself, but really only provides an efficient shortcut for cognizing gross features of our environments absent any constitutive neurofunctional information.

We have no representations, not of dogs or foxes or anything else. Rather, we have nets bound into sensorimotor loops that endlessly trawl our environments for patterns of information, sometimes catching dogs, sometimes missing. Homomorphisms abound, yes. But speaking of homomorphic cogs within a mechanism is a far cry from speaking of representational mechanisms. The former, for one, is genuinely scientific–at least to the extent that it doesn’t require positing occult properties!

And perhaps this should come as no surprise. Science has been, if nothing else, the death-march of human conceit.

But I’m sure anyone with Hawkish sympathies is scowling, wondering exactly where I took a hard turn off the edge of the map. What could be more obvious than our intentional relation to the world? Not much–I agree. But then not so long ago one could say the same about the motionlessness of the Earth or the solidity of objects. As I mentioned, I have come nowhere near discharging the explanatory and argumentative burdens as likely perceived by proponents of representational theories of mind. But despite this, the following two questions, I think, are straightforward enough, obvious enough, to reflect some of that burden back onto the representationalist, and perhaps test some Hawkish backs:

1) What information does the R-paradigm neglect?

2) How does this impact its scope of applicability?

The difficulty these questions pose for representationalism, I would argue, is the difficulty that a sustained consideration of informatic neglect and its myriad roles poses for consciousness research and cognitive science as a whole.