Three Pound Brain


Scripture become Philosophy become Fantasy

by rsbakker


Cosmos and History has published “From Scripture to Fantasy: Adrian Johnston and the Problem of Continental Fundamentalism” in their most recent edition, which can be found here. This is a virus that needs to infect as many continental philosophy graduate students as possible, lest the whole tradition be lost to irrelevance. The last millennium’s radicals have become this millennium’s Pharisees with frightening speed, and now only the breathless have any hope of keeping pace.

ABSTRACT: Only the rise of science allowed us to identify scriptural ontologies as fantastic conceits, as anthropomorphizations of an indifferent universe. Now that science is beginning to genuinely disenchant the human soul, history suggests that traditional humanistic discourses are about to be rendered fantastic as well. Via a critical reading of Adrian Johnston’s ‘transcendental materialism,’ I attempt to show both the shape and the dimensions of the sociocognitive dilemma presently facing Continental philosophers as they appear to their outgroup detractors. Trusting speculative a priori claims regarding the nature of processes and entities under scientific investigation already excludes Continental philosophers from serious discussion. Using such claims, as Johnston does, to assert the fundamentally intentional nature of the universe amounts to anthropomorphism. Continental philosophy needs to honestly appraise the nature of its relation to the scientific civilization it purports to decode and guide, lest it become mere fantasy, or worse yet, conceptual religion.

KEYWORDS: Intentionalism; Eliminativism; Humanities; Heuristics; Speculative Materialism

All transcendental indignation welcome! I was a believer once.

The Real Problem with ‘Correlation’

by rsbakker


Since presuming that intentional cognition can get behind intentional cognition belongs to the correlation problem, any attempt to understand the problem requires we eschew theoretical applications of intentional idioms. Getting a clear view, in other words, requires that we ‘zombify’ human cognition, adopt a thoroughly mechanical vantage that simply ignores intentionality and intentional properties. As it so happens, this is the view that commands whatever consensus one can find regarding these issues. Though the story I’ll tell is a complicated one, it should also be a noncontroversial one, at least insofar as it appeals to nothing more than naturalistic platitudes.

I first started giving these ‘zombie interpretations’ of different issues in philosophy and cognitive science a few years back.[1] Everyone in cognitive science agrees that consciousness and cognition turn on the physical somehow. This means that purely mechanical descriptions of the activities typically communicated via intentional idioms have to be relevant somehow (so long as they are accurate, at least). The idea behind ‘zombie interpretation’ is to explain as much as possible using only the mechanistic assumptions of the biological sciences—to see how far generalizing over physical processes can take our perennial attempt to understand meaning.

Zombies are ultimately only a conceit here, a way for the reader to keep the ‘explanatory gap’ clearly in view. In the institutional literature, ‘p-zombies’ are used for a variety of purposes, most famously to anchor arguments against physicalism. If a complete physical description of the world need not include consciousness, then the brute fact of consciousness implies that physicalism is incomplete. However, since this argument itself turns on the correlation problem, it will not concern us here. The point, oddly enough, is to adhere to an explanatory domain where we all pretty much agree, to speculate using only facts and assumptions belonging to the biological sciences—the idea being, of course, that these facts and assumptions are ultimately all that’s required. Zombies allow us to do that.


So then, devoid of intentionality, zombies lurch through life possessing only contingent, physical comportments to their environment. Far from warehousing ‘representations’ possessing inexplicable intentional properties, their brains are filled with systems that dynamically interact with their world, devices designed to isolate select signals from environmental noise. Zombies do not so much ‘represent their world’ as possess statistically reliable behavioural sensitivities to their environments.

So where ‘subjects’ possess famously inexplicable semantic relations to the world, zombies possess only contingent, empirically tractable relations to the world. Thanks to evolution and learning, they just happen to be constituted such that, when placed in certain environments, gene conserving behaviours tend to reliably happen. Where subjects are thought to be ‘agents,’ perennially upstream sources of efficacy, zombies are components, subsystems at once upstream and downstream the superordinate machinery of nature. They are astounding subsystems to be sure, but they are subsystems all the same, just more nature—machinery.

What makes them astounding lies in the way their neurobiological complexity leverages behaviour out of sensitivity. Zombies do not possess distributed bits imbued with the occult property of aboutness; they do not model or represent their worlds in any intentional sense. Rather, their constitution lets ongoing environmental contact tune their relationship to subsequent environments, gradually accumulating the covariant complexities required to drive effective zombie behaviour. Nothing more is required. Rather than possessing ‘action enabling knowledge,’ zombies possess behaviour enabling information, where ‘information’ is understood in the bald sense of systematic differences making systematic differences.

A ‘cognitive comportment,’ as I’ll use it here, refers to any complex of neural sensitivities subserving instances of zombie behaviour. It comes in at least two distinct flavours: causal comportments, where neurobiology is tuned to what generally makes what happen, and correlative comportments, where zombie neurobiology is tuned to what generally accompanies what happens. Both systems allow our zombies to predict and systematically engage their environments, but they differ in a number of crucial respects. To understand these differences we need some way of understanding what positions zombies upstream their environments, or what leverages happy zombie outcomes.

The zombie brain, much like the human brain, confronts a dilemma. Since all perceptual information consists of sensitivity to selective effects (photons striking the eye, vibrations the ear, etc.), the brain needs some way of isolating the relevant causes of those effects (a rushing tiger, say) to generate the appropriate behavioural response (trip your mother-in-law, then run). The problem, however, is that these effects are ambiguous: a great many causes could be responsible. The brain is confronted with a version of the inverse problem, what I will call the medial inverse problem for reasons that will soon be clear. Since it has nothing to go on but more effects, which are themselves ambiguous, how could it hope to isolate the causes it needs to survive?

By allowing sensitivities to discrepancies between the patterns initially cued and subsequent sensory effects to select—and ultimately shape—the patterns subsequently cued. As it turns out, zombie brains are Bayesian brains.[2] Allowing discrepancies to both drive and sculpt the pattern-matching process automatically optimizes the process, allowing the system to bootstrap wide-ranging behavioural sensitivities to environments in turn. In the intentionality laden idiom of theoretical neuroscience, the brain is a ‘prediction error minimization’ machine, continually testing occurrent signals against ‘guesses’ (priors) triggered by earlier signals. Success (discrepancy minimization) quite automatically begets success, allowing the system to continually improve its capacity to make predictions—and here’s the important thing—using only sensory signals.[3]
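The discrepancy-driven loop described above can be sketched in a few lines. This is only a toy illustration under strong simplifying assumptions: a single scalar ‘guess’ standing in for the priors, and a plain delta-rule update standing in for the full Bayesian machinery. The function name and parameters are hypothetical, not from any source.

```python
import random

# Minimal sketch of discrepancy (prediction-error) minimization:
# the system refines its 'guess' using nothing but successive
# sensory signals and the mismatch between guess and signal.
def minimize_discrepancy(signals, learning_rate=0.1):
    guess = 0.0              # initial prior
    errors = []
    for s in signals:
        error = s - guess                # discrepancy between prediction and signal
        guess += learning_rate * error   # nudge the prior toward the signal
        errors.append(abs(error))
    return guess, errors

# A noisy 'environment' centred on 5.0
random.seed(0)
signals = [5.0 + random.gauss(0, 0.5) for _ in range(200)]
final_guess, errors = minimize_discrepancy(signals)
# Early discrepancies are large, late ones small: success begets
# success, using only the sensory signals themselves.
```

The point of the sketch is the bootstrap: nothing outside the signal stream is consulted, yet the system’s predictive grip on its environment steadily improves.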

But isolating the entities/behaviour causing sensory effects is one thing; isolating the entities/behaviour causing those entities/behaviour is quite another. And it’s here that the chasm between causal cognition and correlative cognition yawns wide. Once our brain’s discrepancy minimization processes isolate the relevant entities/behaviours—solve the medial inverse problem—the problem of prediction simply arises anew. It’s not enough to recognize avalanches as avalanches or tigers as tigers, we have to figure out what they will do. The brain, in effect, faces a second species of inverse problem, what might be called the lateral inverse problem. And once again, it’s forced to rely on sensitivities to patterns (to trigger predictions to test against subsequent signals, and so on).[4]

Nature, of course, abounds with patterns. So the problem is one of tuning a Bayesian subsystem like the zombie brain to the patterns (such as ‘avalanche behaviour’ or ‘tiger behaviour’) it needs to engage its environments given only sensory effects. The zombie brain, in other words, needs to wring behavioural sensitivities to distal processes out of a sensitivity to proximal effects. Though they are adept at comporting themselves to what causes their sensory effects (to solving the medial inverse problem), our zombies are almost entirely insensitive to the causes behind those causes. The etiological ambiguity behind the medial inverse problem pales in comparison to the etiological ambiguity comprising the lateral inverse problem, simply because sensory effects are directly correlated to the former, and only indirectly correlated to the latter. Given the limitations of zombie cognition, in other words, zombie environments are ‘black box’ environments, effectively impenetrable to causal cognition.

Part of the problem is that zombies lack any ready means of distinguishing causality from correlation on the basis of sensory information alone. Not only are sensory effects ambiguous between causes, they are ambiguous between causes and correlations as well. Cause cannot be directly perceived. A broader, engineered signal and greater resources are required to cognize its machinations with any reliability—only zombie science can furnish zombies with ‘white box’ environments. Fortunately for their prescientific ancestors, evolution only required that zombies solve the lateral inverse problem so far. Mere correlations, despite burying the underlying signal, remain systematically linked to that signal, allowing for a quite different way of minimizing discrepancies.
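The claim that causality cannot be read off correlation alone can be made vivid with a toy simulation, assuming nothing beyond textbook statistics. A hidden common cause drives two observables: to a purely observational system they covary robustly, and only intervention (‘zombie science’) exposes the absence of any causal link. All names here are illustrative.

```python
import random

def correlation(pairs):
    # Pearson correlation, computed by hand to stay self-contained.
    n = len(pairs)
    ma = sum(a for a, _ in pairs) / n
    mb = sum(b for _, b in pairs) / n
    cov = sum((a - ma) * (b - mb) for a, b in pairs) / n
    va = sum((a - ma) ** 2 for a, _ in pairs) / n
    vb = sum((b - mb) ** 2 for _, b in pairs) / n
    return cov / (va * vb) ** 0.5

def observe(n=1000):
    # Hidden confounder Z drives both A and B; neither causes the other.
    pairs = []
    for _ in range(n):
        z = random.random()
        a = z + random.gauss(0, 0.1)
        b = z + random.gauss(0, 0.1)
        pairs.append((a, b))
    return pairs

def intervene(n=1000):
    # Experimental manipulation: A is set independently of Z.
    pairs = []
    for _ in range(n):
        z = random.random()
        a = random.random()          # A fixed by intervention
        b = z + random.gauss(0, 0.1)
        pairs.append((a, b))
    return pairs

random.seed(1)
obs_r = correlation(observe())       # high: A reliably 'predicts' B
exp_r = correlation(intervene())     # near zero: A does not drive B
```

From sensory effects alone, `obs_r` is all a system ever has; the broader, engineered signal of the experiment is what distinguishes the correlate from the cause.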

Zombies, once again, are subsystems whose downstream ‘componency’ consists in sensitivities to select information. The amount of environmental signal that can be filtered from that information depends on the capacity of the brain. Now any kind of differential sensitivity to an environment serves organisms in good stead. To advert to the famous example, frogs don’t need the merest comportment to fly mechanics to catch flies. All they require is a select comportment to select information reliably related to flies and fly behaviour, not to what constitutes flies and fly behaviour. And if a frog did need as much, then it would have evolved to eat something other than flies. Simple, systematic relationships are not only all that is required to solve a great number of biological problems, they are very often the only way those problems can be solved, given evolutionary exigencies. This is especially the case with complicated systems such as those comprising life.

So zombies, for instance, have no way of causally cognizing other zombies. They likewise have no way of causally cognizing themselves, at least absent the broader signal and greater computational resources provided by zombie science. As a result, they possess at best correlative comportments both to each other and to themselves.


So what does this mean? What does it mean to solve systems on the basis of inexpensive correlative comportments as opposed to far more expensive causal comportments? And more specifically, what does it mean to be limited to extreme versions of such comportments when it comes to zombie social cognition and metacognition?

In answer to the first question, at least three interrelated differences can be isolated:

Unlike causal (white box) comportments, correlative (black box) comportments are idiosyncratic. As we saw above, any number of behaviourally relevant patterns can be extracted from sensory signals. How a particular problem is solved depends on evolutionary and learning contingencies. Causal comportments, on the other hand, involve behavioural sensitivity to the driving environmental mechanics. They turn on sensitivities to upstream systems that are quite independent of the signal and its idiosyncrasies.

Unlike causal (white box) comportments, correlative (black box) comportments are parasitic, or differentially mediated. To say that correlative comportments are ‘parasitic’ is to say they depend upon occluded differential relations between the patterns extracted from sensory effects and the environmental mechanics they ultimately solve. Frogs, once again, need only a systematic sensory relation to fly behaviour, not fly mechanics, which they can neglect, even though fly mechanics drives fly behaviour. A ‘black box solution’ serves. The patterns available in the sensory effects of fly behaviour are sufficient for fly catching given the cognitive resources possessed by frogs. Correlative comportments amount to the use of ‘surface features’—sensory effects—to anticipate outcomes driven by otherwise hidden mechanisms. Causal comportments, which consist of behavioural sensitivities (also derived from sensory effects) to the actual mechanics involved, are not parasitic in this sense.

Unlike causal (white box) comportments, correlative (black box) comportments are ecological, or problem relative. Both causal comportments and correlative comportments are ‘ecological’ insofar as both generate solutions on the basis of finite information and computational capacity. But where causal comportments solve the lateral inverse problem via genuine behavioural sensitivities to the mechanics of their environments, correlative comportments (such as that belonging to our frog) solve it via behavioural sensitivities to patterns differentially related to the mechanics of their environments. Correlative comportments, as we have seen, are idiosyncratically parasitic upon the mechanics of their environments. The space of possible solutions belonging to any correlative comportment is therefore relative to the particular patterns seized upon, and their differential relationships to the actual mechanics responsible. Different patterns possessing different systematic relationships will possess different ‘problem ecologies,’ which is to say, different domains of efficacy. Since correlative comportments are themselves causal, however, causal comportments apply to all correlative domains. Thus the manifest ‘objectivity’ of causal cognition relative to the ‘subjectivity’ of correlative cognition. 

So far, so good. Correlative comportments are idiosyncratic, parasitic, and ecological in a way that causal comportments are not. In each case, what distinguishes causal comportments is an actual behavioural sensitivity to the actual mechanics of the system. Zombies are immersed in potential signals, awash in causal differences, information, that could make a reproductive difference. The difficulties attendant upon the medial and lateral inverse problems, the problems of what and what-next, render the extraction of causal signals enormously difficult, even when the systems involved are simple. The systematic nature of their environments, however, allows them to use behavioural sensitivities as ‘cues,’ signals differentially related to various systems, to behaviourally interact with those systems despite the lack of any behavioural sensitivity to their particulars. So in research on contingencies, for instance, the dependency of ‘contingency inferences’ on ‘sampling,’ the kinds of stimulus input available, has long been known, as have the kinds of biases and fallacies that result. Only recently, however, have researchers realized the difficulty of accurately making such inferences given the kinds of information available in vivo, and the degree to which we out and out depend on so-called ‘pseudocontingency heuristics.’[5] Likewise, research into ‘spontaneous explanation’ and ‘essentialism,’ the default attribution of intrinsic traits and capacities in everyday explanation, clearly suggests that low-dimensional opportunism is the rule when it comes to human cognition.[6] The more we learn about human cognition, in other words, the more obvious the above story becomes.
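The flavour of a pseudocontingency inference can be sketched as follows. This is a hypothetical toy after the idea in Fiedler et al.: the heuristic judges two attributes as linked from their skewed base rates alone, whereas the actual contingency (here, the standard delta-p measure over a 2x2 table) requires the joint frequencies the heuristic never consults. Function names and the example table are my own.

```python
def pseudocontingency(rate_a, rate_b):
    # Heuristic: if both attributes are common, or both rare,
    # infer that they go together. Only marginal base rates are used.
    return (rate_a - 0.5) * (rate_b - 0.5) > 0

def delta_p(joint):
    # Actual contingency from a 2x2 table of joint frequencies:
    # delta-p = P(B|A) - P(B|not A)
    (ab, a_not_b), (not_a_b, not_a_not_b) = joint
    p_b_given_a = ab / (ab + a_not_b)
    p_b_given_not_a = not_a_b / (not_a_b + not_a_not_b)
    return p_b_given_a - p_b_given_not_a

# Both attributes occur 80% of the time, yet are statistically
# independent: every cell count equals the product of its margins.
table = ((64, 16), (16, 4))   # (A&B, A&~B), (~A&B, ~A&~B) out of 100
heuristic_verdict = pseudocontingency(0.8, 0.8)   # 'sees' a contingency
actual_contingency = delta_p(table)               # there is none: 0.0
```

The heuristic is cheap and usually serviceable, which is precisely why its failures are invisible from the inside: nothing in the base rates flags the missing joint information.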

So then what is the real problem with correlation? The difficulty turns on the fact that black box cognition, solving systems via correlative cues, can itself only be cognized in black box terms.

Given their complexity, zombies are black boxes to themselves as much as to others. And this is what has cued so much pain behaviour in so many zombie philosophers. As a black box, zombies cannot cognize themselves as black boxes: the correlative nature of their correlative comportments utterly escapes them (short, once again, the information provided by zombie science). Zombie metacognition is blind to the structure and dynamics of zombie metacognition, and thus prone to what might be called ‘white box illusions.’ Absent behavioural sensitivity to the especially constrained nature of their correlative comportments to themselves, insufficient data is processed in the same manner as sufficient data, thus delivering the system to ‘crash space,’ domains rendered intractable by the systematic misapplication of tools adapted to different problem ecologies. Unable to place themselves downstream their incapacity, they behave as though no such incapacity exists, suffering what amounts to a form of zombie anosognosia.

Perhaps this difficulty shouldn’t be considered all that surprising: after all, the story told here is a white box story, a causal one, and therefore one requiring extraction from the ambiguities of effects and correlations. The absence of this information effectively ‘black-boxes’ the black box nature of correlative cognition. Zombies cued to solve for that efficacy accordingly run afoul of the problem of processing woefully scant data as sufficient, black boxes as white boxes, thus precluding the development of effective, behavioural sensitivities to the actual processes involved. The real Problem of Correlation, in other words, is that correlative modes systematically confound cognition of correlative comportments. Questions regarding the nature of our correlative comportments simply do not lie within the problem space of our correlative comportments—and how could they, when they’re designed to solve absent sensitivity to what’s actually going on?

And this is why zombies not only have philosophers, they have a history of philosophy as well. White box illusions have proven especially persistent, despite the spectacular absence of systematic one-to-one correspondences between the apparent white box that zombies are disposed to report as ‘mind’ and the biological white box emerging out of zombie science. Short any genuine behavioural sensitivity to the causal structure of their correlative comportments, zombies can at most generate faux-solutions, reports anchored to the systematic nature of their conundrum, and nothing more. Like automatons, they endlessly report low-dimensional, black box posits the way they report high-dimensional environmental features—and here’s the thing—using the very same terms that humans use. Zombies constantly utter terms like ‘minds,’ ‘experiences,’ ‘norms,’ and so on. Zombies, you could say, possess a profound disposition to identify themselves and each other as humans.

Just like us.




[1] See, Davidson’s Fork: An Eliminativist Radicalization of Radical Interpretation, The Blind Mechanic, The Blind Mechanic II: Reza Negarestani and the Labour of Ghosts, Zombie Interpretation: Eliminating Kriegel’s Asymmetry Argument, and Zombie Mary versus Zombie God and Jesus: Against Lawrence Bonjour’s “Against Materialism”

[2] For an overview of Bayesian approaches, see Andy Clark, “Whatever next? Predictive brains, situated agents, and the future of cognitive science.”

[3]  The following presumes an ecological (as opposed to an inferential) understanding of the Bayesian brain. See Nico Orlandi, “Bayesian perception is ecological perception.”

[4] Absent identification there is no possibility of prediction. The analogy between this distinction and the ancient distinction between being and becoming (or even the modern one between the transcendental and the empirical) is interesting to say the least.

[5] See Klaus Fiedler et al, “Pseudocontingencies: Logically Unwarranted but Smart Inferences.”

[6] See Andrei Cimpian, “The Inherence Heuristic: Generating Everyday Explanations,” or Cimpian and Salomon, “The inherence heuristic: An intuitive means of making sense of the world, and a potential precursor to psychological essentialism.”

BBT Creep: The Inherence Heuristic

by rsbakker

Exciting stuff! For years now the research has been creeping toward my grim semantic worst-case scenario, but “The inherence heuristic” is getting close, very close, especially the way it explicitly turns on the importance of heuristic neglect. The pieces have been there for quite some time; now researchers are beginning to put them together.

One way of looking at blind brain theory’s charge against intentionalism is that so-called intentional phenomena are pretty clear-cut examples of inherence heuristics as discussed in this article, ways to handle complex systems absent any causal handle on those systems. When Cimpian and Salomon write,

“To reiterate, the pool of facts activated by the mental shotgun for the purpose of generating an explanation for a pattern may often be heavily biased toward the inherent characteristics of that pattern’s constituents. As a result, when the storytelling part of the heuristic process takes over and attempts to make sense of the information at its disposal, it will have a rather limited number of options. That is, it will often be forced to construct a story that explains the existence of a pattern in terms of the inherent features of the entities within that pattern rather than in terms of factors external to it. However, the one-sided nature of the information delivered by the mental shotgun is not an impediment to the storytelling process. Quite the contrary – the less information is available, the easier it will be to fit it all into a coherent story.” (464)

I think they are also describing what’s going on when philosophers attempt to theoretically solve intentionality, intentional cognition, relying primarily on the resources of intentional cognition. In fact, once you understand the heuristic nature of intentional cognition, the interminable nature of intentional philosophy becomes very easy to understand. We have no way of carving the complexities of cognition at the joints of the world, so we carve it at the joints of the problem instead. When your neighbour repairs your robotic body servant, rather than cognizing all the years he spent training to be a spy before being inserted into your daily routines, you ‘attribute’ him ‘knowledge,’ something miraculously efficacious in its own right, inherent. And for the vast majority of problems you encounter, it works. Then the philosopher asks, ‘What is knowledge?’ and because adducing causal information scrambles our intuitions of ‘inherence,’ he declares only intentional idioms can cognize intentional phenomena, and the species remains stumped to this very day. Exactly as we should expect. Why should we think tools adapted to do without information regarding our nature can decode their own nature? What would this ‘nature’ be?

The best way to understand intentional philosophy, on a blind brain view, is as a discursive ‘crash space,’ a point where the application of our cognitive tools outruns their effectiveness in ways near and far. I’ve spent the last few years, now, providing various diagnoses of the kinds of theoretical wrecks we find in this space. Articles such as this convince me I won’t be alone for much longer!

So to give a brief example. Once one understands the degree to which intentional idioms turn on ‘inherence heuristics’ (ways to manage causal systems absent any behavioural sensitivity to the mechanics of those systems), you can understand the deceptiveness of things like ‘intentional stances,’ the way they provide an answer that functions more like a get-out-of-jail-free card than any kind of explanation.

Given that ‘intentional stances’ belong to intentional cognition, the fact that intentional cognition solves problems neglecting what is actually going on reflects rather poorly on the theoretical fortunes of the intentional stance. The fact is, ‘intentional stances’ leave us with a very low dimensional understanding of our actual straits when it comes to understanding cognition, as we should expect, given that they utilize a low dimensional heuristic system geared to solving practical problems on the fly and theoretical problems not at all.

All along I’ve been trying to show the way heuristics allow us to solve the explanatory gap, to finally get rid of intentional occultisms like the intentional stance and replace them with a more austere, and more explanatorily comprehensive picture. Now that the cat’s out of the bag, more and more cognitive scientists are going to explore the very real consequences of heuristic neglect. They will use it to map out the neglect structure of the human brain in ever finer detail, thus revealing where our intuitions trip over their own heuristic limits, and people will begin to see how thought can be construed as mangles of parallel-distributed processing meat. It will be clear that the ‘real patterns’ are not the ones required to redeem reflection, or its jargon. Nothing can do that now. Mark my words, inherence heuristics have a bright explanatory future.

Bonfire bright.

Meaning Fetishism

by rsbakker


He sits back on his haunches, looking at the bills and coins in his hand. He looks from the bag to Clayton and back again, suddenly shaken and terribly shocked. –Barre Lyndon, The War of the Worlds, Scene 268.

The 1953 version of The War of the Worlds has a wonderful scene where a well-dressed man offers a bag of money to board a Pacific-Tech truck fleeing Los Angeles, only to be violently rebuffed by more rugged souls. And so he’s left, perplexed and dismayed, to await his doom wondering how money, the long-time source of his power over others, suddenly possesses no power at all.

Money offers a paradigmatic example of the confusion of differential or relational properties with intrinsic properties. Given the reliability of a system, information pertaining to the system need not be known to master the capacities belonging to some element within the system. An individual need not know anything about political economy to know, locally at least, what money can do. Given ignorance of the system, attributing special powers to the available element becomes the default, the only way to understand how the element, in this case money, does what it does. We literally fetishize money. The attribution of ‘special powers’ actually allows us to solve a wide variety of practical problems. How did your brother-in-law get that mansion? Well, he won a million dollars in the lottery. Since the enabling background is a ubiquitous feature of all such explanations, it need not figure in them—it ‘goes without saying.’ Given the system, money makes things happen. Why did that stranger at the till give me the cigarettes? Because I gave him ten bucks.

Intrinsic efficacy, in other words, is a useful heuristic, a way to solve problems belonging to a certain ecology. No one needs to know how money works to know that money does work. Even though money only possesses power as a component of a far larger system, we can solve a number of problems within that system simply assuming that money possesses that power intrinsically.

Out of sight, out of mind. This is why financial crises regularly shock the assumptions of so many. Heuristic cognition is largely an unconscious, habitual affair: everyone assumes the stranger is going to run the same routines for the same gold. Instabilities in the system make plain the complex, differential nature of the properties assumed intrinsic. Though the notion of intrinsic value would die a hard death in economic theory more generally, the differential nature of ‘fiat money’ is apparent to anyone bearing currency that others refuse to recognize.

Some systems, however, never give us a heuristic reality check. Since we humans are embedded in a wide variety of systems that (until recently) we had no hope of understanding, yet filled with entities that required some kind of understanding, it makes sense to suppose that attributions of intrinsic efficacy provide humans with a general problem-solving strategy. As a cultural artifact, money is actually a good example of that generality, of the way intrinsic efficacy can be used to make sense of items in novel, yet otherwise occluded, systems.

Think about how many things, phenomenally speaking, just happen; we have no inkling whatsoever of the underwriting systems. By dint of what we are, we perpetually suffer the Inverse Problem, the problem of cognizing environmental systems given only the effects of those systems. Somehow our brain conjures a world from a thin stream of visual, auditory, olfactory, and haptic effects. This is why my daughter perpetually hounds me with origin questions: she’s trying to figure out what’s relational and what’s intrinsic, what’s part of the great Rube Goldberg machine and what stands alone. It’s almost as if she’s identifying all the little Big Bangs scattered across her environment, all points where effects, for all practical purposes, arise ab initio.


The Inverse Problem illustrates the extremity of our cognitive straits, and so explains the practical necessity of intrinsic efficacy. When consistently confronted by effects absent any cause—viz., a system that outruns our on-the-fly capacity to cognize—we assume such efficacy to be intrinsic to the entity occasioning it. Given the sheer ubiquity of such effects, then, we should expect attributions of intrinsic efficacy to be a ubiquitous feature of human cognition.

As indeed they are. Magical thinking, for instance, clearly involves the application of intrinsic efficacy, only to problem-ecologies it plainly cannot solve. A fetish understood in the anthropological sense provides what might seem a paradigmatic example, where occult powers are attributed to some object. In fact, the bulk of what science has labelled ‘superstition’ consists in the erroneous attribution of intrinsic efficacy to objects, actions, and events.

Of course, what makes magical thinking magical is the fact that the intrinsic efficacies posited simply do not exist. Where money does in fact mediate the functions attributed to it, fetishes do not. They may very well mediate ulterior functions—leveraging prestige, reinforcing social cohesion, and the like—but they do not do what the practitioners themselves suppose. A million dollars will buy you a house, but a fetish won’t make a rich relative sicken and die! Where systematic understanding demystifies money, clarifies the nature of the actual functions involved, it simply debunks fetishes.

Not all applications of intrinsic efficacy, in other words, are equal. Some function in their domain of explicit application, while others do not. Since science has shown us that larger systems are always responsible, however, we should presume that all applications involve neglect of those systems. We should assume, in other words, that no such thing as intrinsic efficacy exists, and that if, for any reason, it seems that such a thing does (or worse yet, has to), it only seems so thanks to neglect.

And yet the vast majority of us continue to believe in it. Rules constrain. Representations reveal. Decisions resolve. Goals guide. Desires drive. Reasons clarify. According to some, the bloody a priori organizes the whole of bloody existence!

All these abstract or mental entities possess efficacies that we simply cannot square with our understanding of the various natural systems of which they should be part. We refer to these various loci of efficacy all the time; they help us predict, explain, and manipulate, given certain problem-ecologies. Nevertheless, our every attempt to find them in nature has come up empty-handed.

In other words, they exhibit all the characteristics of what we’ve been referring to as intrinsic efficacy heuristics. As extreme as our cognitive straits are relative to our environments, they are even more so relative to ourselves. Given their complexity, brains simply cannot cognize brains in ‘plug and play’ terms. Intrinsic efficacies are not simply useful, they are mandatory when it comes to our intuitive understanding of ourselves and others. When our mechanic repairs our car, we have no access to his personal history, the way continual exposure to mechanical issues has honed his problem-solving capacities, and even less access to his evolutionary history, the way continual exposure to problematic environments has sculpted his biological problem-solving capacities. We have no access, in other words, to the vast systems of quite natural relata that make his repairs possible. So we call him ‘knowledgeable’ instead; we presume he possesses something—a fetish, in effect—whose efficacy explains his almost miraculous ability to make our Ford Pinto run: a mass of true beliefs, representations, regarding automotive mechanics.

Since the point of the ‘representation fetish’ is to solve while neglecting the systems actually responsible, our every attempt to explain representations in terms of these systems fails. Representation, like all intentional phenomena, is heuristic through and through. But for some reason, we simply cannot relinquish the notion that representations have to be more. Even though intrinsic efficacy is obviously a ‘cognitive conceit’ everywhere else, the majority of cognitive science researchers insist on the reality of these particular loci, or at least the reality of some of them (because everybody thinks something has to be eliminated). The illusion—so easily overcome vis-à-vis money—remains the single most contentious issue confronting cognitive science today.

So why?

[Image: War of the Worlds Paris]

One reason is simply that the past never crashes. Where monetary systems possess limits and instabilities that regularly indicate the relational nature of money’s efficacy, individual and evolutionary history are fixed. The complex relationality of meaning, or ‘externalism,’ can only be demonstrated indirectly, via a number of different philosophical tactics. In lieu of crashing markets, Wittgenstein challenges us to source the efficacy of the rules governing our representations, showing how citing further rules simply defers the issue, and how no recollection of prior use can serve to warrant present uses, because any number of recollections can be made to accord with any given use. In lieu of crashing markets, Quine uses the problem of starting a meaning market from scratch, or ‘radical translation,’ to demonstrate how meanings are perpetual hostages of contexts. In lieu of crashing markets, Putnam poses a systematically attenuated world, a Twin Earth, demonstrating the relationality of meaning via the equivocity of ‘water.’ In lieu of crashing markets, Derrida devises a market-crashing methodology, deconstruction, where the myth of the ‘transcendental signified’ is revealed through the incremental, interpretative deformation of meaning in texts. In lieu of crashing markets, Dennett provides an alternate evolutionary history of a meaning system, the ‘two-bitser,’ showing how successively complicating a mere mechanism can generate the complicated behaviours we associate with meaning.

In each case, the theorist relies on some imaginative way of removing meaning from our present market to show its dependence on the greater system. But alternate worlds are not quite as convincing as actual ones, and the power of the ‘representational intuition’ seems to be commensurate with its local problem-solving power, so these arguments, as immanently decisive as they are, have failed to carry the field. Even worse, those they have convinced generally assume that representation alone is the problem, and thus that these arguments motivate some form of pragmatic normativism—which is to say, a different form of intrinsic efficacy! They miss the whole moral.

And this speaks to the second great difficulty obscuring the heuristic nature of meaning: the fact that it constitutes a component of a larger system of such heuristics. Representation begs reference begs truth begs rationality begs normativity, and so on. Overcoming one instance of intrinsic efficacy, therefore, simply results in becoming snarled in another, and the gain in understanding is minimal at best. One set of conundrums is exchanged for another, as we should expect. Since this heuristic system has remained invisible for the whole of human history, erroneous attributions of intrinsic efficacy characterize the sum of our traditional self-understanding, what Sellars famously called the ‘Manifest Image.’ Seeing this heuristic system for what it is, therefore, represents as radical a conceptual break with our past as one can imagine. And this radicality, accordingly, means that epistemic conservatism itself counts against the possibility of seeing intrinsic efficacy for what it is.

We find ourselves stranded with a variety of special purpose ‘meaning fetishes,’ floating efficacies that motivate and constrain our activities, bind us to our environments, solve our disputes, and so on. And like the well-dressed man in The War of the Worlds, we quite simply do not know how to go on.

[Image: War of the Worlds plague]