
A Secret History of Enlightened Animals (by Ben Cain)

by rsbakker

Stair of Being

 

As proud and self-absorbed as most of us are, you’d expect we’d be obsessed with reading history to discover more and more of our past and how we got where we are. But modern historical narratives concentrate on the mere facts of who did what to whom and exactly when and where such dramas played out. What actually happened in our recent and distant past doesn’t seem grandiose enough for us, and so we prefer myths that situate our endeavours in a cosmic or supernatural background. Those myths can be religious, of course, but also secular as in films, novels, and the other arts. We’re so fixated on ourselves and on our cultural assumptions that we must imagine we’re engaged in more than just humdrum family life, business, political chicanery, and wars. We’re heroes in a universal tale of good and evil, gods and monsters. We thereby resort to the imagination, overlooking the existential importance of our actual evolutionary transformation. When animals became people, the universe turned in its grave.

 

Awakening from Animal Servitude unto Alienation

The so-called wise way of life, that of our species, originates from the birth of an anomalous form of consciousness. That origin has been widely mythologized to protect us from the vertigo of feeling how fine the line is between us and animals. Thus, personal consciousness has been interpreted as an immaterial spirit or as a spark left behind by the intrusion of a higher-dimensional realm into fallen nature, as in Gnosticism, or as an illusion to maintain the play of the slumbering God Brahman, as in some versions of Hinduism, and so on and so forth. But the consciousness that separates people from animals is merely the particular higher-order thought—that is, a thought about thoughts—that you (your lower-order thoughts) are free in the sense of being autonomous, that you’re largely liberated from naturally-selected, animal processes such as hunting for food or seeking mates in the preprogrammed ways. That thought eventually comes to lie in the background of the flurry of mental activity sustained by our oversized brains, along with the frisson of fear that accompanies the revelation that as long as we can think we’re free from nature, we’re actually so. This is because such a higher-order thought, removed as it is from the older, animal parts of our brain, is just what allows us to independently direct our body’s activities. The freedom opened up by human sentience is typically experienced as a falling away from a more secure position. In fact, our collective origin is very likely encapsulated in each child’s development of personhood, fraught as that is with anxiety and sadness as well as with wonder. Children cry and sulk when they don’t get their way, which is when they learn that they stand apart from the world as egos who must strive to live up to otherworldly social standards.

Animals become people by using thought to lever themselves into a black hole-like viewpoint subsisting outside of nature as such. The results are alienation and the existential crisis which are at the root of all our actions. Organic processes are already anomalous and thus virtually miraculous. Personhood represents not progress, since the values that would define such an advance are themselves alien and unnatural by being anthropocentric, but a maximal state of separation from the world, the exclusion of some primates from the environments that would test their genetic mettle. Personal consciousness is the carving of godlike beings from the raw materials of animal slaves, by the realization that thoughts—memories, emotions, imaginings, rational modeling for the sake of problem-solving—comprise an inner world whose contents need not be dictated by stimuli. The cost of personhood, that is, of virtual godhood in the otherwise mostly inanimate universe, is the suffering from alienation that marks our so-called maturity, our fall from childhood innocence whereupon we land in the adult’s clownish struggles with hubris. Our independence empowers us to change ourselves and the world around us, and so we assume we’re the stars of the cosmic show or at least of the narrative of our private life. But because the wildly inhuman cosmos is indifferent to our successes and failures, the business of our acting like grownups is witnessed by hardly any audience at all—except in the special case of celebrities, who are ironically infantilized by their fame—and so we typically develop into existential mediocrities, not heroes. We overcompensate for the anguish we feel because our thoughts sever us from everything outside our skull, becoming proud of our adult independence; we’re like children begging their parents to admire their finger paintings. The natural world responds with randomness and indiscriminateness, with luck and indifference, humiliating us with a sense of the ultimate futility of our efforts. Our oldest solution is to retreat to the anthropocentric social world in which we can honour our presumed greatness, justly rewarding or punishing each other for our deeds as we feel we deserve.

 

Hypersocialization and the Existential Crisis of Consciousness

The alienation of higher consciousness is followed, then, by intensive socialization. Animals socialize for natural purposes, whereas we do so in the wake of the miracle of personhood. Our relatively autonomous selves are miraculous not just because they’re so rare (count up the rocks and the minds in the universe, for example, and the former will so outnumber the latter that minds will seem to have spontaneously popped into existence without any general cause), but because whereas animals adapt to nature, conforming to genetic and environmental regularities, people negate those regularities, abandoning their genetic upbringing and reshaping the global landscape. The earliest people channeled their resentment against the world they discovered they’re not wholly at home in, by inventing tools to help them best nature and its animal slaves, but also by forming tribes defined by more and more elaborate social conventions. The more arbitrary the implicit and explicit laws that regulate a society, the more frenzied its members’ dread of being embedded in a greater, uncaring wilderness. Again, human societies are animalistic in so far as they rely on the structure of dominance hierarchies, but whereas alpha males in animal groups overpower their inferiors for the natural reason of maintaining group cohesion to protect the alphas whose superior genes are the species’ best hope for future generations, human leaders adopt the pathologies of the God complex. Indeed, all people would act like gods if only they could sustain the farce. Alas, just as every winning lottery ticket necessitates multitudes of losers, every full-blown personal deity depends on an army of worshippers. Personhood makes us all metaphysically godlike with respect to our autonomy and our liberation from some natural, impersonal systems, but only a lucky minority can live like mythical gods on Earth.

We socialize, then, to flatter our potential for godhood, by elevating some of our members to a social position in which they can tantalize us with their extravagant lifestyles and superhuman responsibilities. We form sheltered communities in which we can hide from nature’s alien glare. Our elders, tyrants, kings, and emperors lord it over us and we thank them for it, since their appallingly decadent lives nevertheless prove that personhood can be completed, that an absolute fall from the grace of animal innocence isn’t asymptotic, that our evolution has a finite end in transhumanity. Our psychopathic rulers are living proofs that nature isn’t omnipresent, that escape is possible in the form of insanity sustained by mass hallucination. We daydream the differences between right and wrong, honour and dishonour, meaning and meaninglessness. We fill the air with subtle noises and imagine that those symbols are meant to lay bare the final truth. We thus mitigate the removal of our mind from the world, with a myth of reconciliation between thoughts and facts. But language was likely conceived of in the first place as a magical instrument, that is, as an extension of mentality into nature which was everywhere anthropomorphized. Human tribes were assumed to be mere inner circles within a vast society of gods, monsters, and other living forces. We socialized, then, not just to escape to friendly domains to preserve our dignity as unnatural wonders, but to pretend that we hadn’t emerged just by a satanic/promethean act of cognitive defiance, with the ego-making thought that severs us from natural reality. We childishly presumed that the whole universe is a stage populated by puppets and actors; thus, no existential retreat might have been deemed necessary, because nature’s alienness was blotted out in our mythopoeic imagination. As in Genesis, God created by speaking the world into being, just as shamans and magicians were believed to cast magical spells that bent reality to their will.

But every theistic posit was part of an unconscious strategy to avoid facing the obvious fact that since all gods are people, we’re evidently the only gods. Nevertheless, having conceived of theistic fictions, we drew up models to standardize the behaviour of actual gods. Thus, the Pharaoh had to be as remote and majestic as Osiris, while the Roman Emperor had to rule like Jupiter, the raja had to adjudicate like Krishna, the Pope had to appear Christ-like, and the U.S. President has to seem to govern like your favourite Hollywood hero. The double standard that exempts the upper classes from the laws that oppress the lowly masses is supposed to prevent an outbreak of consciousness-induced angst. Social exceptions for the upper class work with mass personifications and enchantments of nature, and those propagandistic myths are then made plausible by the fact that superhuman power elites actually exist. Ironically, such class divisions and their concomitant theologies exacerbate the existential predicament by placing those exquisite symbols of our transcendence (the power elites) before public consciousness, reminding us that just as the gods are prior to and thus independent of nature, so too we who are the only potential or actual gods don’t belong within that latter world.

 

Scientific Objectivity and Artificialization

Hypersocialization isn’t our only existential stratagem; there’s also artificialization as a defense against full consciousness of our unnatural self-control. Whereas the socializer tries to act like a god by climbing social ladders, bullying his underlings, spending unseemly wealth in generational projects of self-aggrandizement, and creating and destroying societal frameworks, the artificializer wants to replace all of nature with artifacts. That way, what began as the imaginary negation of nature’s inhuman indifference to life, in the mythopoeic childhood of our species, can be fulfilled when that indifference is literally undone by our re-engineering of natural processes.

To do that, the artificializer needs to think, not just to act, like a god. That requires forming cognitive programs that don’t depend on the innate, naturally-selected ones. Cognitive scientists maintain that the brain’s ability to process sensations, for example, evolved not to present us with the absolute truth but to ensure our fitness to our environment, by helping us survive long enough to sexually reproduce. Animal neural pathways differ from personal ones in that the former serve the species, not the individual, and so the animal is fundamentally a puppet acting out its life cycle as directed by its genetic programming and by certain environmental constraints. Animals can learn to adapt their behaviour to their environment and so their behaviour isn’t always robotic, but unless they can apply their learning towards unnatural ends, such as by developing birth control techniques that systematically thwart the pseudo goals of natural selection, they’ll think as animals, not as gods. Animals as such are entirely natural creatures, meaning that in so far as their behaviour is mediated by an independent control center, their thinking nevertheless is dedicated to furthering the end of natural selection, which is just that of transmitting genes to future generations. By contrast, gods don’t merely survive or even thrive. Insects and bacteria thrive, as did the dinosaurs for millions of years, but none were godlike because none were existentially transformed by conscious enlightenment, by a cognitive black hole into which an animal can fall, creating the world of inner space.

People, too, have animal programming, such as the autonomic programs for processing sensory information. Social behaviour is likewise often purely animalistic, as in the cases of sex and the power struggle for some advantage in a dominance hierarchy. Rational thinking is less so and thus less natural, meaning more anti-natural in that it serves rational ideals rather than just lower-order aims. To be sure, Machiavellian reasoning is animalistic, but reason has also taken on an unnatural function. Whereas writing was first used for the utilitarian purpose of record keeping, reason in the Western tradition was initially not so practical. The Presocratics argued about metaphysical substances and other philosophical matters, indicating that they’d been largely liberated from animal concerns of day-to-day survival and were exploring cognitive territory that’s useful only from the internal, personal perspective. Who am I really? What is the world, ultimately speaking? Is there a worthy difference between right and wrong? Such philosophical questions are impossible without rational ideals of skepticism, intellectual integrity, and love of knowledge even if that knowledge should be subversive—as it proved to be in Socrates’ classic case.

While the biblical Abraham was willing to sacrifice his son for the sake of hypersocializing with an imaginary deity, Socrates died for the antisocial quest of pursuing objective knowledge that inevitably threatens the natural order along with the animal social structures that entrench that order, such as the Athenian government of his day. Socrates cared not about face-saving opinions, but about epistemic principles that arm us with rationally-justified beliefs about how the world might be in reality. Much later, in the Scientific Revolution, rationalists (which is to say philosophers) in Europe would revive the ancient pagan ideal of reasoning regardless of the impact on faith-based dogmas. Scientists like Isaac Newton developed cognitive methods that were counterintuitive in that they went against the grain of more natural human thinking that’s prone to fallacies and survival-based biases. In addition, Newton served rational institutions, namely the Royal Society and Cambridge, which rivaled the genes for control over the enlightened individual’s loyalty. Moreover, the findings of those cognitive methods were symbolized using artificial languages such as mathematics and formal logic, which enabled liberated minds to communicate their discoveries without the genetic tragicomedies of territorialism, fight-or-flight responses, hero worship, demagoguery, and the like that are liable to be triggered by rhetoric and metaphors expressed in natural languages.

But what is objective knowledge? Are scientists and other so-called enlightened rationalists as neutral as the indifferent world they study? No, rationalists in this broad sense are partly liberated from animal life but they’re not lost in a limbo; rather, they participate in another, unnatural process which I’m calling artificialization. Objectivity isn’t a purely mechanical, impersonal capacity; indeed, natural processes themselves have aesthetically interpretable ends and effective means, so there are no such capacities. In any case, the search for objective knowledge builds on human animalism and on our so-called enlightenment, on our having transcended our animal past and instincts. We were once wholly slaves to nature and we often behave as if we were still playthings of natural forces. But consciousness and hypersocialization provided escapes, albeit into fantasy worlds that nevertheless empowered us. We saw ourselves as being special because we became aware of the increasing independence of our mental models from the modeled territory, owing to the former’s ultra-complexity. The inner world of the mind emerged and detached from the natural order—not just metaphysically or abstractly, but psychologically and historically. That liberation was traumatic and so we fled to the fictitious world of our imagination, to a world we could control, and we pretended the outer world was likewise held captive to our mental projections. The rational enterprise is fundamentally another form of escape, a means of living with the burden of hyper-awareness. Instead of settling for cheap, flimsy mental constructions such as our gods, boogeymen, and the panoply of delusions to which we’re prone, and instead of hoarding divinity in the upper social classes that exercise their superpowers in petty or sadistic projects of self-aggrandizement, we saw that we could usurp God’s ability to create real worlds, as it were. We could democratize divinity, replacing impersonal nature with artificial constructs that would actually exist outside our minds as opposed to being mere projections of imagination and existential longing.

The pragmatic aspect of objectivity is apparent from the familiar historical connections between science, European imperialism, and modern industries. But it’s apparent also from the analytical structure of scientific explanations itself. The existential point of scientific objectivity was paradoxically to achieve a total divorce from our animal side by de-personalizing ourselves, by restraining our desire for instant gratification, scolding our inner child and its playpen, the imagination, and identifying with rational methods. Whereas an animal relies on its hardwired programs or on learned rules-of-thumb for interpreting its environment, an enlightened person codifies and reifies such rules, suspending disbelief and siding with idealized or instrumental formulations of these rules so that the person can occupy a higher cognitive plane. Once removed from natural processes by this identification with rational procedures and institutions, with teleological algorithms, artificial symbols and the like, the animal has become a person with a godlike view from outside of nature—albeit not an overview of what the universe really is, but an engineer’s perspective of how the universe works mechanically from the ground up.

To see what I mean, consider the Hindu parable of the blind men who try to ascertain the nature of an elephant by touching its different body parts. One of the men feels a tusk and infers that the elephant is like a pipe. Another touches the leg and thinks the whole animal is like a tree trunk. Another touches the belly and believes the animal is like a wall. Another touches the tail and says the elephant is like a rope. Finally, another one touches the ear and thinks the elephant is like a hand fan. One of the traditional lessons of this parable is that we can fallaciously overgeneralize and mistake the part for the whole, but this isn’t my point about science. Still, there is a difference between what the universe is in reality, which is what it is in its entirety in so far as all of its parts form a cohesive order, and how inquisitive primates choose to understand the universe with their divisive concepts and models. Scientists can’t possibly understand everything in nature all at once; the word “universe” is a mere placeholder with no content adequate to the task of representing everything that’s out there interacting to produce what we think of as distinct events. We have no name for the universe which gives us power over it by identifying its essence, as it were. So scientists analyze the whole, observing how parts of the world work in isolation, ideally in a laboratory. They then generalize their findings, positing a natural regularity or nomic relation between those fragments, as pictured by their model or theory. It’s as if scientists were the blind men who lack the brainpower to cognize the whole of natural reality, and so they study each part, perhaps hoping that if they cooperate they can combine their partial understandings and arrive at some inkling of what the natural universe in general is. Unfortunately, the deeper we look into nature, the more complexity we find in its parts and so the more futile becomes any such plan for total comprehension. Scientists can barely keep up with advances in their subfields; the notion that anyone could master all the sciences as they currently stand is ludicrous, and there’s still much in the world that isn’t scientifically understood by anyone.

So whatever the scientist’s aspiration might be, the effect of science isn’t the achievement of complete, final understanding of everything in the universe or of the whole of nature. Instead, science allows us to rebuild the whole based on partial, analytical knowledge of how the world works. Suppose scientists discover an extraterrestrial artifact and they have no clue as to the artifact’s function, which is to say they have no understanding of what the object is in reality. Still, they can reverse-engineer the artifact, taking it apart, identifying the materials used to assemble it and certain patterns in how the parts interact with each other. With that limited knowledge of the artifact’s mechanical aspect, scientists might be able to build a replica or else they could apply that knowledge to create something more useful to them, that is, something that works in similar ways to the original but which works towards an end supplied by the scientists’ interests, not the alien’s. There would be no point in replicating the alien technology, since the artifact would be useless without knowledge of what it’s for or without even a shared interest in pursuing that alien goal. Replace the alien artifact with the natural universe and you have some measure of the Baconian position of human science. Of course, nature has no designer; nevertheless, we experience natural processes as having ends and so we’re faced with the choice of whether to apply our piecemeal knowledge of natural mechanisms to the task of reinforcing those ends or to that of adjusting or even reversing them. The choice is to act as stewards of God’s garden, as it were, or as promethean rebels who seek to be divine creators. There are still enclaves of native tribes living as retro-human animals and preserving nature rather than demolishing the wilderness and establishing in its place a technological wonderland built with knowledge of natural mechanisms. But the billions of participants in the science-driven, global monoculture have evidently chosen the promethean, quasi-satanic path.

 

Existentialism and our Hidden History

History is a narrative that often informs us indirectly about the present state of human affairs, by representing part of our past. Ancient historical narratives were more mythical than fact-based. The New Testament, for example, uses historical details to form an exoteric shell around the Gnostic, transhumanist suspicion that human nature is “fallen” to the extent that we surrender our capacity to transcend the animal life cycle; we must “die” to our natural bodies and be reborn in a glorious, unnatural or “spiritual” form. At any rate, like poetry, the mythical language of such ancient historical narratives is open to endless interpretations, which is to say that such stories are obscure. Josephus’s ancient histories of the Jewish people, written for a Roman audience, aren’t so mythologized but they’re no less propagandistic. By contrast, modern historians strive to avoid the pitfalls of writing highly subjective or biased narratives, and so they seek to analyze and interpret just the facts dug up by archeologists and textual critics. Modern histories are thus influenced by the exoteric presumption about science, which is that science isn’t primarily in the business of artificializing everything that’s wild in the sense of being out of our control, but is just a mode of inquiry for arriving at the objective truth (come what may).

Left out of this development of the telling of history is the existential significance of our evolutionary transition from being animals, which were at one with nature, to being people who are implicitly if not consciously at war with everything nonhuman. What I’ve sketched above is part of our secret history; it’s the story of what it means to be human, which underlies all our endeavours. The significance of our standing between animalism and godhood is hidden and largely unknown or forgotten, because at the root of this purpose that drives us is the trauma of godlike consciousness which we’d rather not relive. We each have our fill of that trauma in our passage from childhood innocence, which approximates the animal state of unknowing, to adult independence. Teen angst, which cultures combat with initiation rituals to distract the teenager with sanctioned, typically delusional pastimes, is the tip of the iceberg of pain that awaits anyone who recognizes the plight entailed by our very form of existence.

In Escape from Freedom, Erich Fromm argued that citizens of modern democracies are in danger of preferring the comfort of a totalitarian system, to escape the ennui and dehumanization generated by modern societies. In particular, capitalistic exploitation of the working class and the need to assimilate to an environment run more and more by automated, inhuman machines are supposed to drive civilized persons to savage, authoritarian regimes. At least, this was Fromm’s explanation of the Nazis’ rise to power. A similar analysis could apply to the present degeneration of the Republican Party in the U.S. and to the militant jihadist movement in the Middle East. But Fromm’s analysis is limited. To be sure, capitalism and technology have their drawbacks and these may even contribute to totalitarianism’s appeal, as Fromm shows. But this overlooks what liberal, science-driven societies and savage, totalitarian societies have in common. Both are flights from existential reckoning, as I’ve explained: the one revolves around artificialization (Enlightenment, rationalist values of individual autonomy, which deteriorate until we’re left with the fraud of consumerism), the other around hypersocialization (cult of personality, restoring the sadomasochistic interplay between mythical gods and their worshippers). Fromm ignores the existential effect of the rational enlightenment that brought on modern science, democracy, and capitalism in the first place, the effect being our deification. By deifying ourselves, we prevent our treasured religions from being fiascos and we spare ourselves the horror of living in an inhuman wilderness from which we’re alienated by our hyper-awareness.

We created the modern world to accelerate the rate at which nature is removed from our presence. Contrary to optimists like Steven Pinker, modernity hasn’t fulfilled its promise of democratizing divinity, as I’d put it. Robber barons and more parasitic oligarchs do indeed resort to the older stratagem of hypersocialization, acting like decadent gods in relation to human slaves instead of focusing their divine creativity on our common enemy, the monstrous wilderness. The internet that trivializes everything it touches and the omnipresence of our high-tech gadgets do infantilize us, turning us into cattle-like consumers instead of unleashing our creativity and training us to be the indomitable warriors that alone could endure the promethean mission. This is because we, being the only gods that exist, are woefully unprepared for our responsibility, having retained our animal heritage in the form of our bodies which infect most of our decisions with natural fears and prejudices. At any rate, the deeper story of the animal that becomes a godlike person to obliterate the source of alienation that’s the curse of any flawed, lonely godling helps explain why we now settle more often for the minor anxieties of living in modern civilization, to avoid the major angst of recognizing the existential importance of what we are.

Reason, Bondage, Discipline

by rsbakker

We can understand all things by her; but what she is we cannot apprehend.

–Robert Burton, Anatomy of Melancholy, 1652

.

So I was rereading Ray Brassier’s account of Churchland and eliminativism in his watershed Nihil Unbound: Enlightenment and Extinction the other day and I thought it worth a short post given the similarities between his argument and Ben’s. I’ve already considered his attempt to rescue subjectivity from the neurobiological dismantling of the self in “Brassier’s Divided Soul.” And in “The Eliminativistic Implicit II: Brandom in the Pool of Shiloam,” I dissected the central motivating argument for his brand of normativism (the claim that the inability of natural cognition to substitute for intentional cognition means that only intentional cognition can theoretically solve intentional cognition), showing how it turns on metacognitive neglect and thus can only generate underdetermined claims. Here I want to consider Brassier’s problematic attempt to domesticate the challenge posed by scientific reason, and to provision traditional philosophy with a more robust sop.

In Nihil Unbound, Brassier casts Churchland’s eliminativism as the high water mark of disenchantment, but reads his appeal to pragmatic theoretical virtues as a concession to the necessity of a deflationary normative metaphysics. He argues (a la Sellars) that even though scientific theories possess explanatory priority over manifest claims, manifest claims nevertheless possess conceptual parity. The manifest self is the repository of requisite ‘conceptual resources,’ what anchors the ‘rational infrastructure’ that makes us intelligible to one another as participants in the game of giving and asking for reasons—what allows, in other words, science to be a self-correcting exercise.

What makes this approach so attractive is the promise of providing transcendental constraint absent ontological tears. Norms, reasons, inferences, and so on, can be understood as pragmatic functions, things that humans do, as opposed to something belonging to the catalogue of nature. This has the happy consequence of delimiting a supra-natural domain of knowledge ideally suited to the kinds of skills philosophers already possess. Pragmatic functions are real insofar as we take them to be real, but exist nowhere else, and so cannot possibly be the object of scientific study. They are ‘appearances merely,’ albeit appearances that make systematic, and therefore cognizable, differences in the real world.

Churchland’s eliminativism, then, provides Brassier with an exemplar of scientific rationality and the threat it poses to our prescientific self-understanding that also exemplifies the systematic dependence of scientific rationality on pragmatic functions that cannot be disenchanted on pain of scuttling the intelligibility of science. What I want to show is how in the course of first defending and then critiquing Churchland, Brassier systematically misconstrues the challenge eliminativism poses to all philosophical accounts of meaning. Then I want to discuss how his ‘thin transcendentalism’ actually requires this misconstrual to get off the ground.

The fact that Brassier treats Churchland’s eliminativism as exemplifying scientific disenchantment means that he thinks the project is coherent as far as it goes, and therefore denies the typical tu quoque arguments used to dismiss eliminativism more generally. Intentionalists, he rightly points out, simply beg the question when accusing eliminativists of ‘using beliefs to deny the reality of beliefs.’

“But the intelligibility of [eliminative materialism] does not in fact depend upon the reality of ‘belief’ and ‘meaning’ thus construed. For it is precisely the claim that ‘beliefs’ provide the necessary form of cognitive content, and that propositional ‘meaning’ is thus the necessary medium for semantic content, that the eliminativist denies.” (15)

The question is, What are beliefs? The idea that the eliminativist must somehow ‘presuppose’ one of the countless, underdetermined intentionalist accounts of belief to be able to intelligibly engage in ‘belief talk’ amounts to claiming that eliminativism has to be wrong because intentionalism is right. The intentionalist, in other words, is simply begging the question.

The real problem that Churchland faces is the problem that all ‘scientistic eliminativism’ faces: theoretical mutism. Cognition is about getting things right, so any account of cognition lacking the resources to explain its manifest normative dimension is going to seem obviously incomplete. And indeed, this is the primary reason eliminative materialism remains a fringe position in psychology and philosophy of mind today: it quite simply cannot account for what, pretheoretically, seems to be the most salient feature of cognition.

The dilemma faced by eliminativism, then, is dialectical, not logical. Theory-mongering in cognitive science is generally abductive, a contest of ‘best explanations’ given the intuitions and scientific evidence available. So far as eliminativism has no account of things like the normativity of cognition, then it is doomed to remain marginal, simply because it has no horse in the race. As Kriegel says in Sources of Intentionality, eliminativism “does very poorly on the task of getting the pretheoretically desirable extension right” (199), fancy philosopher talk for ‘it throws the baby out with the bathwater.’

But this isn’t quite the conclusion Brassier comes to. The first big clue comes in the suggestion that Churchland avoids the tu quoque because “the dispute between [eliminative materialism] and [folk psychology] concerns the nature of representations, not their existence” (16). Now although it is the case that possessing an alternative theory makes it easier to recognize the question-begging nature of the tu quoque, the tu quoque is question-begging regardless. Churchland need only be skeptical to deny rather than affirm the myriad, underdetermined interpretations of belief one finds in intentional philosophy. He no more need specify any alternative theory to use the word ‘belief’ than my five-year-old daughter does. He need only assert that the countless intentionalist interpretations are wrong, and that the true nature of belief will become clear once cognitive science matures. It just so happens that Churchland has a provisional neuroscientific account of representation.

For the eliminativist, having a theoretical horse in the race effectively blocks the intuition that you must be riding one of the myriad intentional horses on the track, but the intuition is faulty all the same. Having a theory of meaning is a dialectical advantage, not a logical necessity. And yet nowhere does Brassier frame the problem in these terms. At no point does he distinguish the logical and dialectical aspects of Churchland’s situation. On the contrary, he clearly thinks that Churchland’s neurocomputational alternative is the only thing rescuing his view. In other words, he conflates the dialectical advantage of possessing an alternate theory of meaning with logical necessity.

And as we quickly discover, this oversight is instrumental to his larger argument. Brassier, it turns out, is actually a fan of the tu quoque—and a rather big one at that. Rather than recognizing that Churchland’s problem is abductive, he frames it more abstrusely as a “latent tension between his commitment to scientific realism on the one hand, and his adherence to a metaphysical naturalism on the other” (18). As I mentioned above, Churchland finds himself in a genuine dialectical bind insofar as accounts of cognition that cannot explain ‘getting things right’ (or other apparent intentional properties of cognition) seem to get the ‘pretheoretically desirable extension’ wrong. This argumentative predicament is very real. Pretheoretically, at least, ‘getting things right’ seems to be the very essence of cognition, so the dialectical problem posed is about as serious as can be. So long as intentional phenomena as they appear remain part of the pretheoretically desirable extension of cognitive science, Churchland is going to have difficulty convincing others of his view.

Brassier, however, needs the problem to be more than merely dialectical. He needs some way of transforming the dialectically deleterious inability to explain correctness into warrant for a certain theory of correctness—namely, some form of pragmatic functionalism. He needs, in other words, the tu quoque. He needs to show that Churchland, whether he knows it or not, requires the conceptual resources of the manifest image as a condition of understanding science as an intelligible enterprise. The way to show this requirement, Brassier thinks, is to show—you guessed it—the inability of Churchland’s neurocomputational account of representation to explain correctness. His inability to explain correctness, the assumption is, means he has no choice but to utilize the conceptual resources of the manifest image.

But as we’ve seen, the tu quoque begs the question against the eliminativist regardless of their ability to adduce alternative explanations for the phenomena at issue. Possessing an alternative simply makes the tu quoque easier to dismiss. Churchland is entirely within his rights to say, “Well, Ray, although I appreciate the exotic interpretation of theoretical virtue you’ve given, it makes no testable predictions, and it shares numerous family resemblances with countless other such chronically underdetermined theories, so I think I’m better off waiting to see what the science has to say.”

It really is as easy as that. Only the normativist is appalled, because only they are impressed by their intuitions, the conviction that some kind of intentionalist account is the only game in town.

So ultimately, when Brassier argues that “[t]he trouble with Churchland’s naturalism is not so much that it is metaphysical, but that it is an impoverished metaphysics, inadequate to the task of grounding the relation between representation and reality” (25) he’s mistaking a dialectical issue for an inferential and ontological one, conflating a disadvantage in actual argumentative contexts (where any explanation is preferred to no explanation) with something much grander and far more controversial. He thinks that lacking a comprehensive theory of meaning automatically commits Churchland to something resembling his theory of meaning, a deflationary normative metaphysics, namely his own brand of pragmatic functionalism.

For the naturalist, lacking answers to certain questions can mean many different things. Perhaps the question is misguided. Perhaps we simply lack the information required. Perhaps we have the information, but lack the proper interpretation. Maybe the problem is metaphysical—who the hell knows? When listing these possibilities, ‘Perhaps the phenomena are supra-natural,’ is going to find itself somewhere near, ‘Maybe ghosts are real,’ or any other possibility that amounts to telling science to fuck off and go home! A priori claims on what science can and cannot cognize have a horrible track record, period. As Anthony Chemero wryly notes, “nearly everyone working in cognitive science is working on an approach that someone else has shown to be hopeless, usually by an argument that is more or less purely philosophical” (Radical Embodied Cognitive Science, 3).

Intentional cognition is heuristic cognition, a way to cognize systems without cognizing the operations of those systems. What Brassier calls ‘conceptual parity’ simply pertains to the fact that intentional cognition possesses its own adaptive ecologies. It’s a ‘get along’ system, not a ‘get it right’ system, which is why, as a rule, we resort to it in ‘get along’ situations. The sciences enjoy ‘explanatory priority’ because they cognize systems via cognizing the operations of those systems: they solve on the basis of information regarding what is going on. They constitute a ‘get it right’ system. The question that Brassier and other normativists need to answer is why, if intentional cognition is the product of a system that systematically ignores what’s going on, we should think it could provide reliable theoretical cognition regarding what’s going on. How can a get along system get itself right? The answer quite plainly seems to be that it can’t, that the conundrums and perpetual disputation that characterize all attempts to solve intentional cognition via intentional cognition are exactly what we should expect.

Maybe the millennial discord is just a coincidence. Maybe it isn’t a matter of jamming the stick to find gears that don’t exist. Either way, the weary traveller is entitled to know how many more centuries are required, and, if these issues will never find decisive resolution, why they should continue the journey. After all, science has just thrown down the walls of the soul. Billions are being spent to transform the tsunami of data into better instruments of control. Perhaps tilting yet one more time at problems that have defied formulation, let alone solution, for thousands of years is what humanity needs…

Perhaps the time has come to consider worst-case scenarios–for real.

Which brings us to the moral: You can’t concede that science monopolizes reliable theoretical cognition, then swear up and down that some chronically underdetermined speculative account somehow makes that reliability possible, regardless of what the reliability says! The apparent conceptual parity between manifest and scientific images is something only the science can explain. This allows us to see just how conservative Brassier’s position is. Far from pursuing the “conceptual ramifications entailed by a metaphysical radicalization of eliminativism” (31), Brassier is actually arguing for the philosophical status quo. Far from following reason no matter where it leads, he is, like so many philosophers before him, playing another version of the ‘domain boundary game,’ marshalling what amounts to a last-ditch effort to rescue intentional philosophy from the depredations of science. Or as he himself might put it, devising another sop.

As he writes,

“At this particular historical juncture, philosophy should resist the temptation to install itself within one of the rival images… Rather, it should exploit the mobility that is one of the rare advantages of abstraction in order to shuttle back and forth between images, establishing conditions of transposition, rather than synthesis, between the speculative anomalies thrown up within the order of phenomenal manifestation, and the metaphysical quandaries generated by the sciences’ challenge to the manifest order.” (231)

Isn’t this just another old, flattering trope? Philosophy as fundamental broker, the medium that allows the dead to speak to the living, and the living to speak to the dead? As I’ve been arguing for quite some time, the facts on the ground simply do not support anything so sunny. Science will determine the relation between the manifest and the scientific images, the fate of ‘conceptual parity,’ because science actually has explanatory priority. The dead decide, simply because nothing has ever been alive, at least not the way our ancestors dreamed.

The Ironies of Modern Progress and Infantilization (by Ben Cain)

by rsbakker

It’s commonly observed that we tend to rationalize our flaws and failings, to avoid the pain of cognitive dissonance, so that we all come to think of ourselves as fundamentally good persons even though many of us must instead be bad if “good” is to have any contrastive meaning. Societies, too, often exhibit pride which leads their chief representatives to embarrass themselves by declaring that their nation is the greatest that’s ever been in history. Both the ancients and the moderns did this, but it’s hard to deny the facts of modern technological acceleration. Just in the last century, global and instant communications have been established, intelligent machines run much of our infrastructure, robots have taken over many menial jobs, the awesome power of nuclear weapons has been demonstrated, and humans have visited the moon. We tend to think that the social impact of such uniquely powerful machines must be for the better. We speak casually, therefore, of technological advance or progress.

The familiar criticism of technology is that it destroys at least as much as it creates, so that the optimists tell only one side of the story. I’m not going to argue that neo-Luddite case here. Instead, I’m interested in the source of our judgment about progress through technology. Ironically, the more modern technology we see, the less reason we have to think there’s any kind of progress at all. This is because modernists from Descartes and Galileo onward have been compelled to distinguish between real and superficial properties, the former being physical and quantitative and the latter being subjective and qualitative. Examples of the superficial, “secondary” aspects are the contents of consciousness, but also symbolic meaning, purpose, and moral value, which include the normative idea of progress. For the most part, modernists think of subjective qualities as illusory, and because they devised scientific methods of investigation that bypass personal impressions and biases, modernists acquired knowledge of how natural processes actually work, which has enabled us to produce so much technology. So it’s curious to hear so many of us still assuming that our societies are generally superior to premodern ones, thanks in particular to our technological advantage. On the contrary, our technology is arguably the sign of a cognitive development that renders such an assumption vacuous.

.

Animism and Angst

One way of making sense of this apparent lack of social awareness is to point out that there are always elites who understand their society better than do the masses. And we could add that because the modern technological changes have happened so swiftly and have such staggering implications, many people won’t catch up to them or will even pretend there are no such consequences because they’re horrifying. But I think this makes for only part of the explanation. The masses aren’t merely ignoring the materialistic implications of science or the bad omens that technologies represent; instead, they have a commonsense conviction that technology must be good because it improves our lives.

In short, most citizens of modern, technologically-developed societies are pragmatic about technology. If you asked them whether they think their societies are better than earlier ones, they’d say yes and if you asked them why, they’d say that technology enables us to do what we want more efficiently, which is to say that technology empowers us to achieve our goals. And it turns out that this pragmatic attitude is more or less consistent with modern materialism. There’s no appeal here to some transcendent ideal, but just an egocentric view of technologies as useful tools. So our societies are more advanced than ancient ones because the ancients had to work harder to achieve their goals, whereas modern technology makes our lives easier. Mind you, this assumes that everyone in history has had some goals in common, and indeed our instinctive, animalistic desires are universal in so far as they’re matters of biology. By contrast, if all societies were alien and incommensurable to each other, national pride would be egregiously irrational. And most people probably also assume that our universal desires ought to be satisfied, because we have human rights, so that there’s moral force behind this social progress.

The instincts to acquire shelter, food, sex, power, and prestige, however, seem to me likewise insufficient to explain our incessant artificialization of nature. There’s another universal urge, which we can think of as the existential one: the need to overcome our fear of the ultimate natural truths. There are two ways of doing so, with authenticity or with inauthenticity, which is to say with honour, integrity, and creativity or with delusions arising from a weak will. (Again, this raises the question of whether even these values make sense in the naturalistic picture, and I’ll come back to this at the end of this article.) Elsewhere, I talk about the ancient worldviews as glorifying our penchant for personification. Prehistoric animists saw all of nature as alive, partly because hardly anything at that time was redesigned and refashioned to suit human interests and the predominant wilderness was full of plant and animal life. Also, the ancients hadn’t learned to repress their childlike urge to vent the products of their imagination. At that time, populations were sparse and there were no machines standing as solemn proofs of objective facts; moreover, there wasn’t much historical information to humble the Paleolithic peoples with knowledge of opposing views and thus to rein in their speculations. For such reasons, those ancients must have confronted the world much as all children do—at least with respect to their trust in their imagination.

More precisely, they didn’t confront the world at all. When a modern adult rises in the morning, she leaves behind her irrational dreams and prides herself on believing that she controls her waking hours with her autonomous and rational ego. By contrast, there’s no such divergence between the child’s dream life and waking hours, since the child’s dreams spill into her playful interpretations of everything that happens to her. To be sure, modern children have their imagination tempered by the educational system that’s bursting at the seams with lessons from history. But children generally have only a fuzzy distinction between subject and object. That distinction becomes paramount after the technoscientific proofs of the world’s natural impersonality. The world has always been impersonal and amoral, but only modernists have every reason to believe as much and thus only we inheritors of that knowledge face the starkest existential choice between personal authenticity and its opposite. The prehistoric protopeople, who were still experimenting with their newly acquired excess brain power, faced no such decision between intellectual integrity and flagrant self-deception. They didn’t choose to personify the world, because they knew no different; instead, they projected their mental creations onto the wilderness with childlike abandon and so distracted themselves from their potential to understand the nature of the world’s apparent indifference. After all, in spite of the relative abundance of the ancient environments, things didn’t always go the ancients’ way; they suffered and died like everyone else. Moreover, even early humans were much cleverer than most other species.

Thus, the ancients weren’t so innocent or ignorant that they felt no fear, if only because few animals are that helpless. But human fear differs from the reactive animal kind, because ours has an existential dimension due to the breadth of our categories and thus of our understanding. Humans attach labels to so many things in the world not just because we’re curious, but because we’re audacious and we have excess (redundant) brain capacity. Animals feel immediate pain and perhaps even the alienness of the world beyond their home territory, but not the profound horror of death’s inexorability or of the world’s undeadness, which is to say the fear of nature’s way of developing (through complexification, natural selection, and the laws of probability) without any normative reason. Animals don’t see the world for what it is, because their vision and thus their concern are so narrow, whereas we’ve looked far out into the macrocosmic and microcosmic magnitudes of the universe. We’ve found no reassuring Mind at the bottom of anything, not even in our bodies. Our overactive brains compel us to care about aspects of the world that are bad for our mental health, and so we’re liable to feel anxious. And as I say, we cope with that anxiety in different ways.

.

Modernity and Infantilization

But how does this existentialism relate to the source of our myth of modern progress? Well, I see a comparison between prehistoric, mythopoeic reverie and the modern consumer’s infantilization. In each case, we have a lack of enlightenment, a retreat from rational neutrality, and an intermixing of subject and object. I’ve discussed the mythopoeic worldview elsewhere, so here I’ll just say that it amounts to thinking of the world as entirely enchanted and filled with vitality. Again, the modern revolutions (science and capitalistic industry) have led to our disenchantment with nature, because we’ve been forced to see the world as dead inside. That’s why late modernists are at best pragmatic about progress. We must somehow express our naïve pride in ourselves and in our self-destructive modern nations, because we prefer not to suffer as alienated outsiders. But modernity’s ideal of ultrarationality makes absolutist and xenophobic pride seem uncivilized—although American audiences are notorious for stooping to that sort of savagery when they chant “USA! USA!” to quell disturbances in their proceedings. In any case, we postmodern pragmatists think of progress as being relative to our interests.

Arguably, then, we should all be despairing, nihilistic antinatalists, cheering on our species’ extinction to spare us more horror from our accursed powers of reason, because of the atheistic implications of science-led philosophical naturalism. But something funny happened along the way to the postmodern now, which is that our high-tech environment has driven most of us to revert to the mythopoeic trance. We, too, collapse the distinction between subject and object, because we’re not surrounded by the wilderness that science has shown to be the “product” of undead forces; instead, we’ve blocked out that world from our daily life and immersed ourselves in our technosphere. That artificial world is at our beck and call: our technology is designed for us and it answers to us a thousand times a day. Science has not yet shown us to be exactly as impersonal as the lifeless universe and so we can take comfort in our amenities as we assume that while there’s no spirit under any rock, there’s a mind behind every iPhone.

So while we’re aware of the scientist’s abstract concept of the physical object, we don’t typically experience the world as including such absurdly remote quantities. Heidegger spoke of the pragmatic stance as the instrumentalization of every object, in which case we can look at a rock and see a potential tool, a “ready-to-hand” helper, not just an impersonal, undead and “given” object. (This is in contrast to objectification, in which we treat things only as “present-at-hand,” or as submitting to scientific scrutiny. The latter seems to reduce to the former, though, since objectification is still anthropocentric, in that the object is viewed not as a fully independent noumenon, but as a subject of human explanation and that makes it a sort of tool. True objectivity is the torment not of scientists but of those suffering from angst on account of their experience of nature’s horrible indifference and undeadness. True objectivity is just angst, when we despair that we can’t do anything with the world because we’re not at home in it and nature marches on regardless. All other attitudes, roughly speaking, are pragmatic.) In any case, the modern environment surpasses that instrumentalism with infantilization, because we late modernists usually encounter actual artifacts, not just potential ones. The big cities, at least, are almost entirely artificial places. Of course, everything in a city is also physical, on some level of scientific explanation, but that’s irrelevant to how we interpret the world we experience. A city is made up of artifacts and artifacts are objects whose functions extend the intentions of some subjects. Thus, hypermodern places bridge the divide between subjects and objects at the experiential level.

However, that’s only a precondition of infantilization. What is it for an adult to live as a child? To answer this, we need standards of psychological adulthood and infancy. My idea of adulthood derives from the modern myths of liberty and rational self-empowerment. Ours is a modern world, albeit one infected with our postmodern self-doubts, so it’s fitting that we be judged according to the standards set by modern European cultures. The modern individual, then, is liberated by the Enlightenment’s break with the past, made free to pursue her self-interest. Above all, this individual is rational since reason makes for her autonomy. Moreover, she’s skeptical of authority and tradition, since the modern experience is of how ancient Church teachings became dogmas that stifled the pursuit of more objective knowledge; indeed, the Church demonized and persecuted those who posed untraditional questions. The modern adult idolizes our hero, the Scientist, who relies on her critical faculties to uncover the truth, which is to say that the modern adult should be expected to be fearlessly individualistic in her assessments and tastes. Finally, this adult should be cosmopolitan—which is very different from Catholic universalism, for example. The Catholic has a vision of everyone’s obligation to convert to Catholicism, whereas the modernist appreciates everyone’s equal potential for self-determination, and so the modernist is classically liberal in welcoming a wide variety of opinions and lifestyles.

What, then, are the relevant characteristics of an infant? The infant is almost entirely dependent on a higher power. A biological infant has no choice in the matter and her infancy is only a stage in a process of maturation. Similarly, an infantile adult lacks autonomy and may be fed information in the same way a biological infant is fed food. For example, a cult member who defers to the charismatic leader in all matters of judgment is infantile with respect to that act of self-surrender. Many premodern cultures have been likewise infantile, and our notion of modern progress compares the transition from that anti-modern version of maturity to the modern ideal of rational autonomy with a baby’s growth into a more independent being.

That’s the theory, anyway. The reality is that modern science is wedded to industry which applies our knowledge of nature, and the resulting artificial world infantilizes the masses. How so? For starters, through the post-WWII capitalistic imperative to grow the economy through hyper-consumption. Artificial demand is stimulated through propaganda, which is to say through mostly irrational, associative advertising. The demand is artificial in that it’s manufactured by corporations that have mastered the inhuman science of persuasion. That demand is met by mass-produced supply, the products of which tend to be planned for obsolescence and thus shoddier than they need to be.

The familiar result is the rebranding of the two biologically normal social classes: the rich and powerful alphas and everyone else (the following masses). Modern wealth is rationalized with myths of self-determination and genius, since no credible appeal can be made now to the divine right of kings. Mind you, the exception has been the creation of distinct middle classes, which is due to socialist policies in liberal parts of the world that challenge the social Darwinian cynicism that’s implicit in capitalism. Maintaining a middle class in a capitalistic society, though, is a Sisyphean task: it’s like pushing a boulder up a hill we’re doomed to have to keep reclimbing. The middle class members are fattened like livestock awaiting slaughter by the predators that are groomed by capitalistic institutions such as the elite business schools. And so the middle class inevitably goes into debt and joins the poor, while the wealthy consolidate their power as the ruling oligarchs, as has happened in Canada and the US. (For more on what are effectively the hidden differences between democratic liberals and capitalistic conservatives, see here.)

The masses, then, are targeted by the propaganda arm of modern industry, while the wealthy live in a more rarefied world. For example, the wealthy tend not to watch television, they’re not in the market for cheap, mass-produced merchandise, and they don’t even gullibly link their self-worth to their hoarding of possessions in the crass materialistic fashion. No, the oligarchs who come to power through the capitalistic competition have a much graver flaw: they’re as undead as the rest of nature, which makes them fitting avatars of nature’s inhumanity. Those who are obsessed with becoming very powerful or who are corrupted by their power tend to be sociopathic, which means they lack the ability to care what others feel. For that reason, the power elite are more like machines than people: they tend not to be idealistic and so associative advertising won’t work on them, since that kind of advertising construes the consumption of a material good as a means of fulfilling an archetypal desire. Of course, the relatively poor masses are just the opposite: burdened by their conscience, they trust that our modern world isn’t a horror show. Thus, they’re all too ready to seek advice from advertisers on how to be happy, even though advertisers are actually deeply cynical. The masses are thereby indoctrinated into cultural materialism.

Workers in the service industry literally talk to the customer as if she were a baby, constantly smiling and speaking in a lilting, sing-songy voice; telling the customer whatever she wants to hear, because the customer is always right (just as Baby gets whatever it wants); working like a dog to satisfy the customer as though the latter were the boss and the true adult in the room—but she’s not. The real power elite don’t deal directly with lowly service providers, such as the employees of the average mall. Their underlings do both their buying and their selling for them, so that they needn’t mix with lower folk. This is why George H. W. Bush had never before seen a grocery scanner. No, the service provider is the surrogate parent who is available around the clock to service the consumer, just as a mother must be prepared at any moment to drop everything and attend to Baby. The consumer is the baby—and a whining, selfish one she is at that. That’s the unsettling truth obscured by the illusion of freedom in a consumption-driven society. A consumer can choose which brand name to support out of the hundreds she surveys in the department store, and that bewildering selection reassures her that she’s living the modern dream. But just as the democratic privileges in an effective plutocracy are superficial and structurally irrelevant, so too the consumer’s freedom of choice is belied by her lack of what Isaiah Berlin calls positive freedom. Consumers have negative freedom in that they’re free from coercion so that they can do whatever they want (as long as they don’t hurt anyone). But they lack the positive freedom of being able to fulfill their potential.

In particular, consumers fail to live up to the above ideal of modern adulthood. Choosing which brand of soft drink to buy, when you’ve been indoctrinated by a materialistic culture, is like an infant preferring to receive milk from the left breast rather than the right. Obviously, the deeper choice is to prefer something other than limitless consumption, but that choice is anathema because it’s bad for business. Still, in so far as we have the potential to be mature in the modern sense, to be like those iconoclastic early modern scientists who overcame their Christian culture by way of discovering for themselves how the real world works, we manic consumers have fallen far short. Almost all of us are grossly immature, regardless of how old we are or whether consumer-friendly psychologists pronounce us “normal.”

Now, you might think I’ve established, at best, not a one-way dependence of the masses on the plutocrats, but a sort of sadomasochistic interdependence between them. After all, the producers need consumers to buy their goods, just as a farmer needs to maintain his livestock out of self-interest. Unfortunately, this isn’t so in the globalized world, since the predators of our age have learned that they can express the nihilism at the heart of social Darwinian capitalism, without reservation, just by draining one country of its resources at a time and then by taking their business to a developing country when the previous host has expired, perhaps one day returning as that prior host revivifies in something like the Spenglerian manner. Thus, while it’s true that sellers need buyers, in general, it’s not the case that transnational sellers need any particular country’s buyers, as long as some country somewhere includes willing and able customers. But whereas the transnational sellers don’t need any particular consumers and the consumers can choose between brands (even though companies tend to merge to avoid competing, becoming monopolies or oligopolies), there’s asymmetry in the fact that the mass consumer’s self-worth is attached to consumption and thus to the buyer-seller relationship, whereas that’s not so for the wealthy producers.

Again, that’s because the more power you have, the more dehumanized you become, so that the power elite can’t afford moral principles or a conscience or a vision of a better world. Those who come to be in positions of great power become custodians of the social system (the dominance hierarchy), and all such systems tend to have unequal power distributions so that they can be efficiently managed. (To take a classic example, Soviet communism failed largely because its system had to waste so much energy on the pretense that its power wasn’t centralized.) Centralized power naturally corrupts the leaders or else it attracts those who are already corrupt or amoral. So powerful leaders are disproportionately inhuman, psychologically speaking. (I take it this is the kernel of truth in David Icke’s conspiracy theory that our rulers are secretly evil lizards from another dimension.) Although the oligarch may be inclined to consume for her pleasure and indeed she obviously has many more material possessions than the average consumer, the oligarch attaches no value to consumption, because she’s without human feeling. She feels pleasure and pain like most animals, but she lacks complex, altruistic emotions. Ironically, then, the more wealth and power you have, the fewer human rights you ought to have. (For more on this naturalistic, albeit counterintuitive, interpretation of oligarchy, see here.)

In any case, to return to the childish consumer, the point is that consumption-driven capitalism infantilizes the masses by establishing this asymmetric relationship between transnational producer and the average buyer. Just as a biological baby is almost wholly dependent on its guardian, the average consumer depends on the economic system that satisfies her craving for more and more material goods. The wealthy consume because they’re predatory machines, like viruses that are only semi-alive, but the masses consume because we’ve been misled into believing that owning things makes us happy and we dearly want to be happy. We think wealth and power liberate us, because with enough money we can buy whatever we want. But we forget the essence of our modern ideal or else we’ve outgrown that ideal in our postmodern phase. What makes the modern individual heroic is her independence, which is why our prototypes (Copernicus, Galileo, Bruno, Darwin, Nietzsche) were modern especially because of their socially subversive inquiries. We consumers aren’t nearly so modern or individualistic, regardless of our libertarian or pragmatic bluster. As consumers, we’re dependent on the mass producers and on our material possessions themselves. We’re not autonomous iconoclasts, we’re just politically correct followers. We don’t think for ourselves, but put our faith in the contemptible balderdash of corporate propaganda. We haven’t the rationality even to laugh at the foolish fallacies that are the bread and butter of associative ads. It doesn’t matter what we say or write; if we enjoy consuming material goods, our subconscious has been colonized by materialistic memes and so our working values are as shallow as they can be without being as empty as those of the animalistic power elite. As consumers, we’re children playing at adult dress-up; we’re cattle that make-believe we’re free just because we routinely choose from among a preselected array of options.

So both technology and capitalism infantilize the masses. By doing our bidding and so making us feel we’re of central importance in the artificial world, technology suppresses angst and alienation. We therefore live not the modern dream but the ancient mythopoeic one—which is also the child’s experience of playing in a magical place, regardless of where the child actually happens to be. And capitalism turns us into consumers, first and foremost, and constant consumption is the very name of the infant’s game, because the infant needs abundant fuel to support her accelerated growth.

A third source of our existential immaturity is inherent in the myth of the modern hero. For many years, this problem with modernism lay dormant because of the early modernists’ persistent sexism, racism, and imperialism. Only white European males were thought of as proper individuals. Their rationalism, however, implied egalitarianism since we’re all innately rational, to some extent, and once the civil rights of women and minorities were recognized, there was a perceptible decline in the manliness of the modern hero. No longer a bold rebel against dogmas or a skeptical lover of the truth, the late-modern individual now is someone who must tolerate all differences. Ours is a multicultural, global village and so we’re consigned to moral relativism and forced to defer to politically correct conventions out of respect for each other’s right to our opinions. Thus, bold originality, once regarded as heroic, is now considered boorish. Early modernists loved to discuss ideas in Salons, but now even to broach a political or religious subject in public is considered impolite, because you may offend someone.

Such rules of political correctness are like parents’ futile restrictions on their child’s thoughts and actions. Western children are protected from coarse language and violence and nudity, because postmodern parents labour under the illusion that their children will be infantile for their entire lifespan, whereas we’re all primarily animals and so are bound to run up against the horrors of natural life sooner or later. Compare these arbitrary strictures with the medieval Church’s laws against heresy. In all three cases (taboos for infantilized adults, protectionist illusions for children, and medieval Christian imperialism), the rules are uninspired as solutions to the existential problem of how to face reality, but the Church went as far as to torture and kill on behalf of its absurd notions. At most, postmodern parents may spank their child for saying a bad word, while an adult who carries the albatross of the archaic ideal of the independent person and so wishes to test the merit of her assumptions by attempting to engage others in a conversation about ideas will only find herself alone and ignored at the party, inspecting the plant in the corner of the room. Still, our postmodern mode of infantilization is fully degrading despite the lack of severe consequences when we step out of bounds.

This is the ethic of care that’s implicit in modern individualism, which is at odds with the modern hunt for the truth. Modernism was originally framed in the masculine terms of a conflict between scientific truth and Christian dogmatic opinion, but now that everyone is recognized as an autonomous, dignified modern person, feminine values have surged. And just as someone with a hammer sees everything else as a nail, a woman is inclined to see everyone else as a baby. This is why, for example, young women who haven’t outgrown their motherly instincts overuse the word “cute”: handbags are cute, as are small pets and even handsome men. This is also why girls worship not tough, rugged male celebrities, but androgynous ones like Justin Bieber. As conservative social critics appreciate, manliness is out of fashion. Even hair on a man’s chest is perceived as revolting, let alone the hair on his back. Men’s bodies must be shorn of any such symbol of their unruly desires, because men are obliged to fulfill women’s fantasy that men are babies who need to be nurtured. Men must be innocent, not savage; they must be eternally youthful and thus hairless, not battered and scarred by the heartless world; they must be doe-eyed and cheerful, not grim, aloof and embittered. Men must be babies, not the manly heroes celebrated by the early modernists, who brought Europe out of the relative Dark Age. Men have been feminized, thanks ironically to the early modern ideal of personal autonomy through reason. As for women themselves, in so far as they see themselves primarily as care-givers and are naturally inclined to infantilize men, they too become child-like, because “care” is reflexive. And so modern women baby themselves, treating themselves to the spa, to the latest fashions and accessories, to the inanities of daytime television, to the sentimental fantasies of soap operas and romance novels, and to the platitudes of flattering, feel-good New Age cults.

.

The Ignorant Baby and the Enlightened Aesthete

Those are three sources of modern infantilization: technology, capitalism, and postmodern culture. I submit, then, that the reason we can be so ignorant as to speak of technoscientific progress, even though scientific theories imply naturalism which in turn implies the unreality of normative values and the undeadness of all processes, is that we lack self-knowledge because we’re infantile. We’re distracted by the games of possessing and playing with our technotoys, because our artificial environment trains us to be babies. And babies aren’t interested in ideas, let alone in terribly dispiriting philosophies such as naturalism with its atheistic and dark existential implications. That’s why we can parrot the meme of modern progress, because we’ve already swallowed a thousand corporate myths by the time we’ve watched a year’s worth of materialistic ads on TV. What’s one more piece of foolishness added to that pile? If we were to look at the myth of progress, we’d see it derives from ancient theistic apocalypticism, and specifically from the Zoroastrian idea of a linear and teleological arrow of historical time. The idea was that time would come to a cataclysmic end when God would perfect the fallen world and defeat the forces of evil in a climactic battle. All prior events are made meaningful in relation to that ultimate endpoint. In that teleological metaphysics, the idea of real progress makes sense. But there’s no such teleology in naturalism, so there can be no modern progress. At best, some scientific theory or piece of technology can meet with our approval and allow us to achieve our personal goals more readily, but that subjective progress loses its normative force. Mind you, that’s the only kind of progress that pragmatists are entitled to affirm, but there’s no real goodness in modernity if that’s all we mean by the word.

The titular ironies, then, are that the so-called technoscientific signs of modern progress are indications rather of the superficiality or illusoriness of the very concept of social progress that most people have in mind, despite their pragmatic attitude, and that the late great modernists who are supposed to stand tall as the current leaders of humanity are instead largely infantilized by modernity and so are similar to the mythopoeic, childlike ancients.

Here, finally, I’ve pointed out that there’s no real progress in nature, since nature is undead rather than enchanted by personal qualities such as meaning or purpose, and yet I affirmed the existential value of personal authenticity. I promised to return to this apparent contradiction. My solution, as I’ve explained at length elsewhere, is to reduce normative evaluation to the aesthetic kind. For example, I say intellectual integrity is better than self-delusion. But is that judgment as superficial and subjective as a moral principle in light of philosophical naturalism? Not if the goodness of personal integrity, and more specifically of the coherence of the worldview that drives your behaviour, is thought of as a kind of beauty. When we take up the aesthetic perspective, all processes seem not just undead but artistically creative. Life itself becomes art and our aesthetic duty is to avoid the ugliness of cliché and to strive for ingenious and subversive originality in our actions.

Is the aesthetic attitude as arbitrary as a theistic interpretation of the world, given science-centered naturalism? No, because aesthetics falls out of the objectification made possible by scientific skepticism. We see something as an art object when we see it as complete in itself and thus as useless and indifferent to our concerns, the opposite being a utilitarian or pragmatic stance. And that’s precisely the essence of cosmicism, which is the darkest part of modern wisdom. Natural things, as such, are complete in themselves, meaning that they exist and develop for no human reason. That’s the horror of nature: the world doesn’t care about us, our adaptability notwithstanding, and so we’re bound to be overwhelmed by natural forces and to perish with just as little warning as we were given when nature evolved us in the first place. But the point here is that the flipside of this horror is that nature is full of art! The undeadness of things is also their sublime beauty or raw ugliness. When we recognize the alienness and monstrosity of natural processes, because we’ve given up naïve anthropocentrism, we’ve already adopted the aesthetic attitude. That’s because we’ve declined to project our interests onto what are wholly impersonal things, and so we objectify and aestheticize them with one and the same act of humility. The angst and the horror we feel when we understand what nature really is, and thus how impersonal we ourselves are, are also aesthetic reactions. Angst is the dawning of awe as we begin to fathom nature’s monstrous scope, horror the awakening of pantheistic fear of the madness of the artist responsible for so much wasted art. The aesthetic values which are also existential ones aren’t merely subjective, because nature’s undead creativity is all-too real.

Leaving It Implicit

by rsbakker

Since the aim of philosophy is not “to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term” with as little information as possible, I thought it worthwhile to take another run at the instinct to raise firewalls about certain discourses, to somehow immunize them from the plague of scientific information to come. I urge anyone disagreeing to sound off, to explain to me how it’s possible to assert the irrelevance of any empirical discovery in advance, because I am duly mystified. On the one hand, we have these controversial sketches regarding the nature of meaning and normativity, and on the other we have the most complicated mechanism known, the human brain. And learning the latter isn’t going to revolutionize the former?

Of course it is. We are legion, a myriad of subpersonal heuristic systems that we cannot intuit as such. We have no inkling of when we swap between heuristics and so labour under the illusion of cognitive continuity. We have no inkling as to the specific problem-ecologies our heuristics are adapted to and so labour under the illusion of cognitive universality. We are, quite literally, blind to the astronomical complexity of what we are and what we do. I’ve spent these past 18 months on TPB brainstorming novel ways to conceptualize this blindness, and how we might see the controversies and conundrums of traditional philosophy as its expression.

Say that consciousness accompanies/facilitates/enables a disposition to ‘juggle’ cognitive resources, to creatively misapply heuristics in the discovery of exaptive problem ecologies. Traditional philosophy, you might say, represents the institutionalization of this creative misapplication, the ritualized ‘making problematic’ ourselves and our environments. As an exercise in serial misapplication, one must assume (as indeed every individual philosophy does) that the vast bulk of philosophy solves nothing whatsoever. But if one thinks, as I do, that philosophy was a necessary condition of science and democracy, then the obvious, local futility of the philosophical enterprise would seem to be globally redeemed. Thinkers are tinkers, and philosophy is a grand workshop: while the vast majority of the gadgets produced will be relegated to the dustbin, those few that go retail can have dramatic repercussions.

Of course, the hubris is there staring each and every one of us in the face, though its universality renders it almost invisible. To the extent that we agree with ourselves, we all assume we’ve won the Magical Belief Lottery—the conviction, modest or grand, that this gadget here will be the one that reprograms the future.

I’m going to call my collection of contending gadgets, ‘progressive naturalism,’ or more simply, pronaturalism. It is progressive insofar as it attempts to continue the project of disenchantment, to continue the trend of replacing traditional intentional understanding with mechanical understanding. It is naturalistic insofar as it pilfers as much information and as many of its gadgets from natural science as it can.

So from a mechanical problem-solving perspective, words are spoken and actions… simply ensue. Given the systematicity of the ensuing actions, the fact that one can reliably predict the actions that typically follow certain utterances, it seems clear that some kind of constraint is required. Given the utter inaccessibility of the actual biomechanics involved, those constraints need to be conceived in different terms. Since the beginning of philosophy, normativity has been the time-honoured alternative. Rather than positing causes, we attribute reasons to explain the behaviour of others. Say you shout “Duck!” to your golf partner. If he fails to duck and turns to you quizzically instead, you would be inclined to think him incompetent, to say something like, “When I say ‘Duck!’ I mean ‘Duck!’”

From a mechanical perspective, in other words, normativity is our way of getting around the inaccessibility of what is actually going on. Normativity names a family of heuristic tools, gadgets that solve problems absent biomechanical information. Normative cognition, in other words, is a biomechanical way of getting around the absence of biomechanical information.

What else would it be?

From a normative perspective, however, the biomechanical does not seem to exist, at least at the level of expression. This is no coincidence, given that normative heuristics systematically neglect otherwise relevant biomechanical information. Nor is the manifest incompatibility between the normative and biomechanical perspectives any coincidence: as a way to solve problems absent mechanical information, normative cognition will only reliably function in those problem ecologies lacking that information. Information formatted for mechanical cognition simply ‘does not compute.’

From a normative perspective, in other words, the ‘normative’ is bound to seem both ontologically distinct and functionally independent vis-à-vis the mechanical. And indeed, once one begins taking a census of the normative terms used in biomechanical explanations, it begins to seem clear that normativity is not only distinct and independent, but that it comes first, that it is, to adopt the occult term normalized by the tradition, ‘a priori.’

From the mechanical perspective, these are natural mistakes to make given that mechanical information systematically eludes theoretical metacognition as well. As I said, we are blind to the astronomical complexities of what we are and what we do. Whenever a normative philosopher attempts to ‘make explicit’ our implicit sayings and doings they are banking on the information and cognitive resources they happen to have available. They have no inkling that they’re relying on any heuristics at all, let alone a variety of them, let alone any clear sense of the narrow problem-ecologies they are adapted to solve. They are at best groping their way to a possible solution in the absence of any information pertaining to what they are actually doing.

From the mechanical perspective, in other words, the normative philosopher has only the murkiest idea of what’s going on. They theorize ‘takings as’ and ‘rules’ and ‘commitments’ and ‘entitlements’ and ‘uses’—they develop their theoretical vocabulary—absent any mechanical information, which is to say, absent the information underwriting the most reliable form of theoretical cognition humanity has ever achieved.

The normative philosopher is now in a bind. Given that the development of their theoretical vocabulary turns on the absence of mechanical information, they have no way of asserting that what they are ‘making explicit’ is not actually mechanical. If the normativity of the normative is not given, then the normative philosopher simply cannot assume normative closure, that the use of normative terms—such as ‘use’—implicitly commits any user to any kind of theoretical normative realism, let alone this or that one. This is the article of faith I encounter most regularly in my debates with normative types: that I have to be buying into their picture somehow, somewhere. My first order use of ‘use’ no more commits me to any second-order interpretation of the ‘meaning of use’ as something essentially normative than uttering the Lord’s name in vain commits me to Christianity. The normative philosopher’s inability to imagine how it could be otherwise certainly commits me to nothing. Evolution has given me all these great, normative gadgets—I would be an idiot not to use them! But please, if you want to convince me that these gadgets aren’t gadgets at all, that they are something radically different from anything in nature, then you’re going to have to tell me how and why.

It’s just foot-stomping otherwise.

And this is where I think the bind becomes a garrotte, because the question becomes one of just how the normative philosopher could press their case. If they say their theoretical vocabulary is merely ‘functional,’ a way to describe actual functions at a ‘certain level,’ you simply have to ask them to evidence this supposed ‘actuality.’ How can you be sure that your ‘functions’ aren’t, as Craver and Piccinini would argue, ‘mechanism sketches,’ ways to rough out what is actually going on absent the information required to know what’s actually going on? It is a fact that we are blind to the astronomical complexity of what we are and what we do: How do you know if the rope you keep talking about isn’t actually an elephant’s tail?

The normative philosopher simply cannot presume the sufficiency of the information at their disposal. On the one hand, the first-order efficacy of the target vocabulary in no way attests to the accuracy of their second-order regimentations: our ‘mindreading’ heuristics were selected precisely because they were efficacious. The same can be said of logic or any other apparently ‘irreducibly normative’ family of formal problem-solving procedures. Given the relative ease with which these procedures can be mechanically implemented in a simple register system, it’s hard to understand how the normative philosopher can insist they are obviously ‘intrinsically normative.’ Is it simply a coincidence that our brains are also mechanical? Perhaps it is simply our metacognitive myopia, our (obvious) inability to intuit the mechanical complexity of the brain buzzing behind our eyeballs, that leads us to characterize them as such. This would explain the utter lack of second-order, theoretical consensus regarding the nature of these apparently ‘formal’ problem solving systems. Regardless, the efficacy of normative terms in everyday contexts no more substantiates any philosophical account of normativity than the efficacy of mathematics substantiates any given philosophy of mathematics.
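To make the point about mechanical implementation concrete, here is a minimal sketch of my own (a toy, not anyone’s official account) of a formal inference procedure running as nothing but blind symbol manipulation; there are no ‘commitments’ or ‘entitlements’ anywhere in the loop:

```python
# A toy forward-chaining engine: facts and rules are inert strings, and
# 'inference' is just pattern-matching and copying. Nothing in the loop
# knows that it is 'following a rule'.

facts = {"socrates_is_a_man"}
rules = [("socrates_is_a_man", "socrates_is_mortal")]  # (if, then) pairs

changed = True
while changed:  # keep applying rules until nothing new falls out
    changed = False
    for antecedent, consequent in rules:
        if antecedent in facts and consequent not in facts:
            facts.add(consequent)  # modus ponens as a copy operation
            changed = True

print(facts)  # {'socrates_is_a_man', 'socrates_is_mortal'}
```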

Normative intuitions, on the other hand, are equally useless. If ‘feeling right’ had anything but a treacherous relationship with ‘being right,’ we wouldn’t be having this conversation. Not only are we blind to the astronomical complexities of what we are and what we do, we’re blind to this blindness as well! Like Plato’s prisoners, normative philosophers could be shackled to a play of shadows, convinced they see everything they need to see simply for want of information otherwise.

But aside from intuition (or whatever it is that disposes us to affirm certain ‘inferences’ more than others), just what does inform normative theoretical vocabularies?

Good question!

On the mechanical perspective, normative cognition involves the application of specialized heuristics in specialized problem-ecologies—ways we’ve evolved (and learned) to muddle through our own mad complexities. When I utter ‘use’ I’m deploying something mechanical, a gadget that allows me to breeze past the fact of my mechanical blindness and to nevertheless ‘cognize’ given that the gadget and the problem ecologies are properly matched. Moreover, since I understand that ‘use,’ like ‘meaning,’ is a gadget, I know better than to hope that second-order applications of this and other related gadgets to philosophical problem-ecologies will solve much of anything—that is, unless your problem happens to be filling lecture time!

So when Brandom writes, for instance, “What we could call semantic pragmatism is the view that the only explanation there could be for how a given meaning gets associated with a vocabulary is to be found in the use of that vocabulary…” (Extending the Project of Analysis, 11), I hear the claim that the heuristic misapplications characteristic of traditional semantic philosophy can only be resolved via the heuristic misapplications characteristic of traditional pragmatic philosophy. We know that normative cognition is profoundly heuristic. We know that heuristics possess problem ecologies, that they are only effective in parochial contexts. Given this, the burning question for any project like Brandom’s has to be whether the heuristics he deploys are even remotely capable of solving the problems he tackles.

One would think this is a pretty straightforward question deserving a straightforward answer—and yet, whenever I raise it, it’s either passed over in silence or I’m told that it doesn’t apply, that it runs roughshod over some kind of magically impermeable divide. Most recently I was told that my account refuses to recognize that we have ‘perfectly good descriptions’ of things like mathematical proof procedures, which, since they can be instantiated in a variety of mechanisms, must be considered independently of mechanism.

Do we have perfectly good descriptions of mathematical proof procedures? This is news to me! Every time I dip my toe in the philosophy of mathematics I’m amazed by the florid diversity of incompatible theoretical interpretations. In fact, it seems pretty clear that we have no consensus-compelling idea of what mathematics is.

Does the fact that various functions can be realized in a variety of different mechanisms mean that those functions must be considered independently of mechanism altogether? Again, this is news to me. As convenient as it is to pluck apparently identical functions from a multiplicity of different mechanisms in certain problem contexts, it simply does not follow that one must do the same for all problem contexts. For one, how do we know we’ve got those functions right? Perhaps the granularity of the information available occludes a myriad of functional differences. Consider money: despite being a prototypical ‘virtual machine’ (as Dennett calls it in his latest book), there can be little doubt that the mechanistic details of its instantiation have a drastic impact on its function. The kinds of computerized nanosecond transactions now beginning to dominate financial markets could make us pine for good old ‘paper changing hands’ days soon enough. Or consider normativity: perhaps our blindness to the heuristic specificity of normative cognition has led us to theoretically misconstrue its function altogether. There’s gotta be some reason why no one seems to agree. Perhaps mathematics baffles us simply because we cannot intuit how it is instantiated in the human machine! We like to think, for instance, that the atemporal systematicity of mathematics is what makes it so effective—but how do we know this isn’t just another ‘noocentric’ conceit? After all, we have no way of knowing what function our conscious awareness of mathematical cognition plays in mathematical cognition more generally. All that seems certain is that it is not the whole story. Perhaps our apparently all-important ‘abstractions’ are better conceived as low-dimensional shadows of what is actually going on.

And all this is just to say that normativity, even in its most imposing, formal guises, isn’t something magical. It is an evolved capacity to solve specific problems given limited resources. It is natural— not normative. As a natural feature of human cognition, it is simply another object of ongoing scientific inquiry. As another object of ongoing scientific inquiry, we should expect our traditional understanding to be revolutionized, that positions such as ‘inferentialism’ will come to sound every bit as prescientific as they in fact are. To crib a conceit of Feynman’s: the more we learn, the more the neural stage seems too big for the normative philosopher’s drama.

Reengineering Dennett: Intentionality and the ‘Curse of Dimensionality’

by rsbakker

Aphorism of the Day: A headache is one of those rare and precious things that is both in your head and in your head.

.

In a few weeks time, Three Pound Brain will be featuring an interview with Alex Rosenberg, who has become one of the world’s foremost advocates of Eliminativism. If you’re so inclined, now would be a good time to pick up his Atheist’s Guide to Reality, which will be the focus of much of the interview.

The primary reason I’m mentioning this has to do with a comment of Alex’s regarding Dennett’s project in our back and forth, how he “has long sought an account of intentionality that constructs it out of nonintentional resources in the brain.” This made me think of a paper of Dennett’s entitled “A Route to Intelligence: Oversimplify and Self-Monitor” that is only available on his website, and which he has cryptically labelled, ‘NEVER-TO-APPEAR PAPERS BY DANIEL DENNETT.’ Now maybe it’s simply a conceit on my part, given that pretty much everything I’ve written falls under the category of ‘never-to-appear,’ but this quixotic piece has been my favourite Dennett article ever since I first stumbled upon it. In the note that Dennett appends to the beginning, he explains the provenance of the paper, how it was written for a volume that never coalesced, but he leaves its ‘never-to-be-published’ fate to the reader’s imagination. (If I had to guess, I would say it has to do with the way the piece converges on what is now a dated consideration of the frame problem.)

Now in this paper, Dennett does what he often does (most recently, in this talk), which is to tell a ‘design process’ story that begins with the natural/subpersonal and ends with the intentional/personal. The thing I find so fascinating about this particular design process narrative is the way it outlines, albeit in a murky form, what I think actually is an account of how intentionality arises ‘out of the nonintentional resources of the brain,’ or the Blind Brain Theory. What I want to do is simply provide a close reading of the piece (the first of its kind, given that no one I know of has referenced this piece apart from Dennett himself), suggesting, once again, that Dennett was very nearly on the right track, but that he simply failed to grasp the explanatory opportunities his account affords in the proper way. “A Route to Intelligence” fairly bowled me over when I first read it a few months ago, given the striking way it touches on so many of the themes I’ve been developing here. So what follows, then, begins with a consideration of the way BBT itself follows from certain, staple observations and arguments belonging to Dennett’s incredible oeuvre. More indirectly, it will provide a glimpse of how the mere act of conceptualizing a given dynamic can enable theoretical innovation.

Dennett begins with the theme of avoidance. He asks us to imagine that scientists discover an asteroid on a collision course with earth. We’re helpless to stop it, so the most we can do is prepare for our doom. Then, out of nowhere, a second asteroid appears, striking the first in the most felicitous way possible, saving the entire world. It seems like a miracle, but of course the second asteroid was always out there, always hurtling on its auspicious course. What Dennett wants us to consider is the way ‘averting’ or ‘preventing’ is actually a kind of perspectival artifact. We only assumed the initial asteroid was going to destroy earth because of our ignorance of the subsequent: “It seems appropriate to speak of an averted or prevented catastrophe because we compare an anticipated history with the way things turned out and we locate an event which was the “pivotal” event relative to the divergence between that anticipation and the actual course of events, and we call this the “act” of preventing or avoiding” (“A Route to Intelligence,” 3).

In BBT terms, the upshot of this fable is quite clear: Ignorance–or better, the absence of information–has a profound, positive role to play in the way we conceive events. Now coming out of the ‘Continental’ tradition this is no great shakes: one only need think of Derrida’s ‘trace structure’ or Adorno’s ‘constellations.’ But as Dennett has found, this mindset is thoroughly foreign to most ‘Analytic’ thinkers. In a sense, Dennett is providing a peculiar kind of explanation by subtraction, bidding us to understand avoidance as the product of informatic inaccessibility. Here it’s worth calling attention to what I’ve been calling the ‘only game in town effect,’ or sufficiency. Avoidance may be the artifact of information scarcity, but we never perceive it as such. Avoidance, rather, is simply avoidance. It’s not as if we catch ourselves after the fact and say, ‘Well, it only seemed like a close call.’

Academics spend so much time attempting to overcome the freshman catechism, ‘It-is-what-it-is!’ that they almost universally fail to consider how out-and-out peculiar it is, even as it remains the ‘most natural thing in the world.’ How could ignorance, of all things, generate such a profound and ubiquitous illusion of epistemic sufficiency? Why does the appreciation of contextual relativity, the myriad ways our interpretations are informatically constrained, count as a kind of intellectual achievement?

Sufficiency can be seen as a generalization of what Daniel Kahneman refers to as WYSIATI (‘What You See Is All There Is’), the way we’re prone to confuse the information we have for all the information required. Lacking information regarding the insufficiency of the information we have, such as the existence of a second ‘saviour’ asteroid, we assume sufficiency, that we are doomed.  Sufficiency is the assumptive default, which is why undergrads, who have yet to be exposed to information regarding the insufficiency of the information they have, assume things like ‘It-is-what-it-is.’

The concept of sufficiency (and its flip-side, asymptosis) is of paramount importance. It explains why, for instance, experience is something that can be explained via subtraction. Dennett’s asteroid fable is a perfect case in point: catastrophe was ‘averted’ because we had no information regarding the second asteroid. If you think about it, we regularly explain one another’s experiences, actions, and beliefs by reference to missing information, anytime we say something of the form, So-and-so didn’t x (realize, see, etc.) such-and-such, in fact. Implicit in all this talk is the presumption of sufficiency, the ‘It-is-what-it-is! assumption,’ as well as the understanding that missing information can make no difference–precisely what we should expect of a biomechanical brain. I’ll come back to all this in due course, but the important thing to note, at this juncture at least, is that Dennett is arguing (though he would likely dispute this) that avoidance is a kind of perspectival illusion.

Dennett’s point is that the avoidance world-view is the world-view of the rational deliberator, one where prediction, the ability to anticipate environmental changes, is king. Given this, he asks:

Suppose then that one wants to design a robot that will live in the real world and be capable of making decisions so that it can further its interests–whatever interests we artificially endow it with. We want in other words to design a foresightful planner. How must one structure the capacities–the representational and inferential or computational capacities–of such a being? 4

The first design problem that confronts us, he suggests, involves the relationship between response-time, reliability, and environmental complexity.

No matter how much information one has about an issue, there is always more that one could have, and one can often know that there is more that one could have if only one were to take the time to gather it. There is always more deliberation possible, so the trick is to design the creature so that it makes reliable but not foolproof decisions within the deadlines naturally imposed by the events in its world that matter to it. 4

Our design has to perform a computational balancing act: Since the well of information has no bottom, and the time constraints are exacting, our robot has to be able to cherry-pick only the information it needs to make rough and reliable determinations: “one must be designed from the outset to economize, to pass over most of the available information” (5). This is the problem now motivating work in the field of ecological rationality, which looks at human cognition as a ‘toolbox’ filled with a variety of heuristics, devices adapted to solve specific problems in specific circumstances–‘ecologies’–via the strategic neglect of various kinds of information. On the BBT account, the brain itself is such a heuristic device, a mechanism structurally adapted to walk the computational high-wire between behavioural efficiency and environmental complexity.
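To give a flavour of what such a tool looks like, here is a minimal sketch in the fast-and-frugal style (my own illustration, not any specific published model): a ‘take-the-best’ comparison that checks cues in order of validity, stops at the first cue that discriminates, and strategically neglects everything else.

```python
# Toy 'take-the-best' heuristic: decide which of two options scores higher
# by checking cues best-first and stopping at the first one that
# discriminates, ignoring all remaining information.

def take_the_best(option_a, option_b, cues):
    """Options are dicts of binary cue values; cues are ordered best-first."""
    for cue in cues:
        a, b = option_a.get(cue, 0), option_b.get(cue, 0)
        if a != b:                      # first discriminating cue decides
            return "A" if a > b else "B"
    return "tie"                        # nothing discriminates: guess

# Which city is larger? Check 'has_airport' first, neglect the rest.
cues = ["has_airport", "has_university", "has_pro_team"]
city_a = {"has_airport": 1}
city_b = {"has_university": 1, "has_pro_team": 1}
print(take_the_best(city_a, city_b, cues))  # 'A', despite B's other cues
```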

And this indeed is what Dennett supposes:

How then does one partition the task of the robot so that it is apt to make reliable real time decisions? One thing one can do is declare that some things in the world of the creature are to be considered fixed; no effort will be expended trying to track them, to gather more information on them. The state of these features is going to be set down in axioms, in effect, but these are built into the system at no representational cost. One simply designs the system in such a way that it works well provided the world is as one supposes it always will be, and makes no provision for the system to work well (“properly”) under other conditions. The system as a whole operates as if the world were always going to be one way, so that whether the world really is that way is not an issue that can come up for determination. 5

So, for instance, the structural fact that the brain is a predictive system simply reflects the fundamental fact that our environments not only change in predictable ways, but allow for systematic interventions given prediction. The most fundamental environmental facts, in other words, will be structurally implicit in our robot, and so will not require modelling. Others, meanwhile, will “be declared as beneath notice even though they might in principle be noticeable were there any payoff to be gained thereby” (5). As he explains:

The “grain” of our own perception could be different; the resolution of detail is a function of our own calculus of wellbeing, given our needs and other capacities. In our design, as in the design of other creatures, there is a trade-off in the expenditure of cognitive effort and the development of effectors of various sorts. Thus the insectivorous bird has a trade-off between flicker fusion rate and the size of its bill. If it has a wider bill it can harvest from a larger volume in a single pass, and hence has a greater tolerance for error in calculating the location of its individual prey. 6

Since I’ve been arguing for quite some time that we need to understand the appearance of consciousness as a kind of ‘flicker fusion writ large,’ I can tell you my eyebrows fairly popped off my forehead reading this particular passage. Dennett is isolating two classes of information that our robot will have no cause to model: environmental information so basic that it’s written into the structural blueprint or ‘fixed’, and environmental information so irrelevant that it is ignored outright or ‘beneath notice.’ What remains is to consider the information our robot will have cause to model:

If then some of the things in the world are considered fixed, and others are considered beneath notice, and hence are just averaged over, this leaves the things that are changing and worth caring about. These things fall roughly into two divisions: the trackable and the chaotic. The chaotic things are those things that we cannot routinely track, and for our deliberative purposes we must treat them as random, not in the quantum mechanical sense, and not even in the mathematical sense (e.g., as informationally incompressible), but just in the sense of pseudo-random. These are features of the world which, given the expenditure of cognitive effort the creature is prepared to make, are untrackable; their future state is unpredictable. 6-7

Signal and noise. If we were to design our robot along, say, the lines of a predictive processing account of the brain, its primary problem would be one of deriving the causal structure of its environment on the basis of sensory effects. As it turns out, this problem (the ‘inverse problem’) is no easy one to solve. We evolved sets of specialized cognitive tools, heuristics with finite applications, for precisely this reason. The ‘signal to noise ratio’ for any given feature of the world will depend on the utility of the signal versus the computational expense of isolating it.
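For a feel of what solving the inverse problem involves, here is a minimal sketch of my own (assuming nothing about actual neural implementation): an observer inferring which of two hidden causes is generating a stream of noisy sensory effects, by brute Bayesian updating.

```python
import math
import random

means = {"cause_A": 0.0, "cause_B": 2.0}     # two candidate hidden causes
posterior = {"cause_A": 0.5, "cause_B": 0.5}
true_mean = 2.0                              # the world is in fact cause_B

def likelihood(obs, mean, noise=1.0):
    # Unnormalized Gaussian likelihood of a sensory effect under a cause.
    return math.exp(-((obs - mean) ** 2) / (2 * noise ** 2))

for _ in range(20):                          # a stream of noisy effects
    obs = random.gauss(true_mean, 1.0)
    for cause in posterior:
        posterior[cause] *= likelihood(obs, means[cause])
    total = sum(posterior.values())
    posterior = {c: p / total for c, p in posterior.items()}

print(posterior)  # nearly all probability mass piles onto cause_B
```

Even this two-cause toy takes a stream of samples to settle; scale the hypothesis space up to a real environment and the computational expense of isolating any given signal becomes the binding constraint.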

So far so good. Dennett has provided four explicitly informatic categories–fixed, beneath notice, trackable, and chaotic–‘design decisions’ that will enable our robot to successfully cope with the complexities confronting it. This is where Dennett advances a far more controversial claim: that the ‘manifest image’ belonging to any species is itself an artifact of these decisions.

Now in a certain sense this claim is unworkable (and Dennett realizes as much) given the conceptual interdependence of the manifest image and the mental. The task, recall, was to build a robot that could tackle environmental complexity, not become self-aware. But his insight here stands tantalizingly close to BBT, which explains our blinkered metacognitive sense of ‘consciousness’ and ‘intentionality’ in the self-same terms of informatic access.

And things get even more interesting, first with his consideration of how the scientific image might be related to the manifest image thus construed:

The principles of design that create a manifest image in the first place also create the loose ends that can lead to its unraveling. Some of the engineering shortcuts that are dictated if we are to avoid combinatorial explosion take the form of ignoring – treating as if non-existent – small changes in the world. They are analogous to “round off error” in computer number-crunching. And like round-off error, their locally harmless oversimplifications can accumulate under certain conditions to create large errors. Then if the system can notice the large error, and diagnose it (at least roughly), it can begin to construct the scientific image. 8

And then with his consideration of the constraints facing our robot’s ability to track and predict itself:

One of the pre-eminent varieties of epistemically possible events is the category of the agent’s own actions. These are systematically unpredictable by it. It can attempt to track and thereby render predictions about the decisions and actions of other agents, but (for fairly obvious and well-known logical reasons, familiar in the Halting Problem in computer science, for instance) it cannot make fine-grained predictions of its own actions, since it is threatened by infinite regress of self-monitoring and analysis. Notice that this does not mean that our creature cannot make some boundary-condition predictions of its own decisions and actions. 9

Because our robot possesses finite computational resources in an informatically bottomless environment, it must neglect information, and so must be heuristic through and through. Given that heuristics possess limited applicability in addition to limited computational power, it will perforce continually bump into problems it cannot solve. This will be especially the case when it comes to the problem of itself–for the very reasons that Dennett adduces in the above quote. Some of these insoluble problems, we might imagine, it will be unable to see as problems, at least initially. Once it becomes aware of its informatic and cognitive limitations, however, it could begin seeking supplementary information and techniques, ways around its limits, allowing the creation of a more ‘scientific’ image.
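The regress Dennett mentions can be given a toy rendering (my own, with a hypothetical ‘predictor’ standing in for the self-monitoring machinery): any agent able to consult a perfect forecast of its own next action can simply act against it, so no such forecast can exist inside the loop.

```python
# Diagonal construction in miniature: whatever an internal predictor
# forecasts, the agent does the opposite, falsifying the forecast.

def contrary_agent(predict):
    """Consult a claimed predictor of this agent's action; then refute it."""
    forecast = predict(contrary_agent)   # ask: what will I do next?
    return "duck" if forecast == "jump" else "jump"

def naive_predictor(agent):
    return "jump"                        # any fixed forecast will do

print(contrary_agent(naive_predictor))   # 'duck': the forecast fails
```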

Now Dennett is simply brainstorming here–a fact that likely played some role in his failure to pursue its publication. But “A Route to Intelligence” stuck with him as well, enough for him to reference it on a number of occasions, and to ultimately give it a small internet venue all of its own. I would like to think this is because he senses (or at least once sensed) the potential of this general line of thinking.

What makes this paper so extraordinary, for me, is the way he explicitly begins the work of systematically thinking through the informatic and cognitive constraints facing the human brain, both with respect to its attempts to cognize its environment and itself. For his part, Dennett never pursues this line of speculative inquiry in anything other than a piecemeal and desultory way. He never thinks through the specifics of the informatic privation he discusses, and so, despite many near encounters, never finds his way to BBT. And it is this failure, I want to argue, that makes his pragmatic recovery of intentionality, the ‘intentional stance,’ seem feasible.

As it so happens, the import and feasibility of Dennett’s ‘intentional stance’ has taken a twist of late, thanks to some of his more recent claims. In “The Normal Well-tempered Mind,” for instance, he claims that he was (somewhat) mistaken in thinking that “the way to understand the mind is to take it apart into simpler minds and then take those apart into still simpler minds until you get down to minds that can be replaced by a machine,” the problem being that “each neuron, far from being a simple switch, is a little agent with an agenda, and they are much more autonomous and much more interesting than any switch.” For all his critiques of original intentionality in the heyday of computationalism, Dennett’s intentional apologetics have become increasingly strident and far-reaching. In what follows I will argue that his account of the intentional stance, and the ever-expanding range of interpretative applicability that he accords it, actually depends on his failure to think through the informatic straits of the human brain. If he had, I want to suggest, he would have seen that intentionality, like avoidance, is best explained in terms of missing information, which is to say, as a kind of perspectival illusion.

[Figure: cube diagram]

Now of course all this betrays more than a little theoretical vanity on my part, the assumption that Dennett has to be peering, stumped, at some fragmentary apparition of my particular inferential architecture. But this presumption stands high among my motives for writing this post. Why? Because for the life of me I can’t see any way around those inferences–and I distrust this ‘only game in town’ feeling I have.

But I’ll be damned if I can find a way out. As I hope to show, as soon as you begin asking what cognitive systems are accessing what information, any number of dismal conclusions seem to directly follow. We literally have no bloody clue what we’re talking about when we begin theorizing ‘mind.’

To see this, it serves to diagram the different levels of information privation Dennett considers:

[Figure: Levels of information privation]

The evolutionary engineering problem, recall, is one of finding some kind of ‘golden informatic mean,’ extracting only the information required to maximize fitness given the material and structural resources available and nothing else. This structurally constrained select-and-neglect strategy is what governs the uptake of information from the sum of all information available for cognition and thence to the information available for metacognition. The Blind Brain Theory is simply an attempt to think this privation through in a principled and exhaustive way, to theorize what information is available to what cognitive systems, and the kinds of losses and distortions that might result.

Information is missing. No one I know of disputes this. Each of these ‘pools’ is the result of drastic reductions in dimensionality (number of variables). Neuroscientists commonly refer to something called the ‘Curse of Dimensionality,’ the way the difficulty of finding statistical patterns in data increases exponentially as the data’s dimensionality increases. Imagine searching for a ring on a 100m length of string, which is to say, in one dimension. No problem. Now imagine searching for that ring in two dimensions, a 100m by 100m square. More difficult, but doable. Now imagine trying to find that ring in three dimensions, in a 100m by 100m by 100m cube. The greater the dimensionality, the greater the volume, and the more difficult it becomes to extract statistical relationships, whether you happen to be a neuroscientist trying to decipher relations between high-dimensional patterns of stimuli and neural activation, or a brain attempting to forge adaptive environmental relations.
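
To make the arithmetic of the Curse explicit, here is a minimal sketch in Python–the function name and the numbers are purely illustrative, not drawn from any of the sources discussed:

```python
# A toy quantification of the Curse of Dimensionality: the volume to be
# searched grows exponentially with the number of dimensions, so a fixed
# search budget covers a vanishing fraction of the space.

def search_volume(extent: float = 100.0, dims: int = 1) -> float:
    """Size of the search space: extent raised to the number of dimensions."""
    return extent ** dims

for d in (1, 2, 3, 10):
    print(f"{d} dimension(s): {search_volume(dims=d):.0e} units to search")
# 1 dimension(s): 1e+02
# 2 dimension(s): 1e+04
# 3 dimension(s): 1e+06
# 10 dimension(s): 1e+20
```

The same ring, the same 100m extent, and yet each added dimension multiplies the haystack a hundredfold.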

For example, ‘semantic pointers,’ Eliasmith’s primary innovation in creating SPAUN (the recent artificial brain simulation that made headlines around the world), are devices that maximize computational efficiency by collapsing or inflating dimensionality according to the needs of the system. As he and his team write:

Compression is functionally important because low-dimensional representations can be more efficiently manipulated for a variety of neural computations. Consequently, learning or defining different compression/decompression operations provides a means of generating neural representations that are well suited to a variety of neural computations. “A Large-Scale Model of the Functioning Brain,” 1202

The human brain is rife with bottlenecks, which is why Eliasmith’s semantic pointers represent the signature contribution they do, a model for how the brain potentially balances its computational resources against the computational demands facing it. You could say that the brain is an evolutionary product of the Curse, since it is in the business of deriving behaviourally effective ‘representations’ from the near bottomless dimensionality of its environment.
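
To give a sense of what ‘collapsing dimensionality’ buys, here is a toy sketch–emphatically not Eliasmith’s Semantic Pointer Architecture, just a random linear encoder/decoder with made-up sizes–showing the roundtrip the passage describes:

```python
# A toy compression/decompression roundtrip -- NOT Eliasmith's actual
# Semantic Pointer Architecture, just a random linear encoder/decoder
# illustrating what is gained (cheap, low-dimensional handles) and what
# is lost (fidelity) when dimensionality is collapsed.
import numpy as np

rng = np.random.default_rng(0)
high_dim, low_dim = 512, 32                 # illustrative sizes

encode = rng.standard_normal((low_dim, high_dim)) / np.sqrt(high_dim)
decode = encode.T                           # a crude decompressor

x = rng.standard_normal(high_dim)           # a high-dimensional state
pointer = encode @ x                        # compressed: cheap to manipulate
x_hat = decode @ pointer                    # lossy reconstruction

print(pointer.shape)                        # (32,)
print(np.corrcoef(x, x_hat)[0, 1])          # positive but well below 1: information lost
```

The pointer is far cheaper to store and manipulate than the original state; the price is that the reconstruction only partially recovers it.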

Although Dennett doesn’t reference the Curse explicitly, it’s implicit in his combinatoric characterization of our engineering problem, the way our robot has to suss out adaptive patterns in the “combinatorial explosion,” as he puts it, of environmental variables. Each of the information pools he touches on, in other words, can be construed as a solution to the Curse of Dimensionality. So when Dennett famously writes:

I claim that the intentional stance provides a vantage point for discerning similarly useful patterns. These patterns are objective–they are there to be detected–but from our point-of-view they are not out there entirely independent of us, since they are patterns composed partly of our own “subjective” reactions to what is out there; they are the patterns made to order for our narcissistic concerns. The Intentional Stance, “Real Patterns, Deeper Facts, and Empty Questions,” 39

Dennett is discussing a problem solved. He recognizes that the solution is parochial, or ‘narcissistic,’ but it remains, he will want to insist, a solution all the same, a powerful way for us (or our robot) to predict, explain, and manipulate our natural and social environments as well as ourselves. Given this efficacy, and given that the patterns themselves are real, even if geared to our concerns, he sees no reason to give up on intentionality.

On BBT, however, the appeal of this argument is largely an artifact of its granularity. Though Dennett is careful to reference the parochialism of intentionality, he does not do it justice. In “The Last Magic Show,” I resorted to the metaphor of shadows at several turns, trying to capture something of the information loss involved in consciousness, unaware that researchers, trying to understand how systems preserve functionality despite massive reductions of dimensionality, had devised mathematical tools, ‘random projections,’ that take the metaphor quite seriously:

To understand the central concept of a random projection (RP), it is useful to think of the shadow of a wire-frame object in three-dimensional space projected onto a two dimensional screen by shining a light beam on the object. For poorly chosen angles of light, the shadow may lose important information about the wire-frame object. For example, if the axis of light is aligned with any segment of wire, that entire length of wire will have a single point as its shadow. However, if the axis of light is chosen randomly, it is highly unlikely that the same degenerate situation will occur; instead, every length of wire will have a corresponding nonzero length of shadow. Thus the shadow, obtained by this RP, generically retains much information about the wire-frame object. (Ganguli and Sompolinsky, “Sparsity and Dimensionality,” 487)
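
The wire-frame intuition is easy to verify numerically. The following sketch (sizes and seed arbitrary, assuming nothing beyond NumPy) projects a cloud of high-dimensional points through a random matrix and checks how well the ‘shadow’ preserves their pairwise distances:

```python
# A minimal random projection (RP) demonstration: pairwise distances
# among high-dimensional points survive, approximately, in the
# low-dimensional 'shadow', just as a generic light angle preserves
# the wire-frame object.
import numpy as np

rng = np.random.default_rng(1)
n_points, high_dim, low_dim = 50, 1000, 50       # illustrative sizes

X = rng.standard_normal((n_points, high_dim))    # the 'wire-frame' points
R = rng.standard_normal((high_dim, low_dim)) / np.sqrt(low_dim)
shadow = X @ R                                   # project onto the 'screen'

def pairwise(a: np.ndarray) -> np.ndarray:
    """All pairwise Euclidean distances between rows of a."""
    return np.linalg.norm(a[:, None] - a[None, :], axis=-1)

idx = np.triu_indices(n_points, k=1)
ratios = pairwise(shadow)[idx] / pairwise(X)[idx]
print(ratios.mean(), ratios.std())               # close to 1.0, with modest spread
```

Twenty times fewer dimensions, and yet the geometry of the original cloud comes through almost intact–provided, as Ganguli and Sompolinsky note, the projection is generic rather than degenerate.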

On the BBT account, mind is what the Curse of Dimensionality looks like from the inside. Consciousness and intentionality, as they appear to metacognition, can be understood as concatenations of idiosyncratic low-dimensional ‘projections.’ Why idiosyncratic? Because when it comes to ‘compression,’ evolution isn’t so much interested in veridical conservation as in scavenging effective information. And what counts as ‘effective information’? Whatever facilitates genetic replication–period. In terms of the wire-frame analogy, the angle may be poorly chosen, the projection partial, the light exceedingly dim, etc., and none of this would matter so long as the information projected discharged some function that increased fitness. Faithful compression may well be what fitness demands in some instances, but to assume it is what fitness demands in all instances is simply to misunderstand evolution. Think of ‘lust’ and the biological need to reproduce, or ‘love’ and the biological need to pair-bond. Evolution is opportunistic: all things being equal, the solutions it hits upon will be ‘quick and dirty,’ and utterly indifferent to what we intuitively assume (let alone want) to be the case.

Take memory research as a case in point. In the Theaetetus, Plato famously characterized memory as an aviary, a general store from which different birds, memories, could be correctly or incorrectly retrieved. It wasn’t until the late 19th century, when Hermann Ebbinghaus began tracking his own recall over time in various conditions, that memory became the object of scientific investigation. From there the story is one of greater and greater complication. William James, of course, distinguished between short-term and long-term memory. Skill memory was distinguished from declarative memory, which Endel Tulving famously decomposed into episodic and semantic memory. Skill memory, meanwhile, was recognized as one of several forms of nondeclarative or implicit memory, including classical conditioning, non-associative learning, and priming, which would itself be decomposed into perceptual and conceptual forms. As Plato’s grand aviary found itself progressively more subdivided, researchers began to question whether memory was actually a discrete system or rather part and parcel of some larger cognitive network, and thus not the distinct mental activity assumed by the tradition. Other researchers, meanwhile, took aim at the ‘retrieval assumption,’ the notion that memory is primarily veridical, adducing evidence that declarative memory is often constructive, more an attempt to convincingly answer a memory query than to reconstruct ‘what actually happened.’

The moral of this story is as simple as it should be sobering: the ‘memory’ arising out of casual introspection (monolithic and veridical) and the memory arising out of the scientific research (fractionate and confabulatory) are at drastic odds, to the point where some researchers suggest the term ‘memory’ is itself deceptive. Memory, like so many other cognitive capacities, seems to be a complex of specialized capacities arising out of non-epistemic and epistemic evolutionary pressures. But if this is the case, one might reasonably wonder how Plato could have gotten things so wrong. Well, obviously the information available to metacognition (in its ancient Greek incarnation) falls far short of the information required to accurately model memory. But why would this be? Well, apparently forming accurate metacognitive models of memory was not something our ancestors needed to survive and reproduce.

We have enough metacognitive access to isolate memory as a vague capacity belonging to our brains and nothing more. The patterns accessed, in other words, are real patterns, but it seems more than a little hinky to take the next step and say they are “made to order for our narcissistic concerns.” For one, whatever those ‘concerns’ happen to be, they certainly don’t seem to involve any concern with self-knowledge, particularly when the ‘concerns’ at issue are almost certainly not the conscious sort–which is to say, not concerns that could be said to be ‘ours’ in any straightforward way. The concerns, in fact, are evolutionary: metacognition, for reasons Dennett touched on above and that I have considered at length elsewhere, is a computational nightmare, more than enough to necessitate the drastic informatic compromises that underwrite Plato’s Aviary.

And as memory goes, I want to suggest, so goes intentionality. The fact is, intentional patterns are not “made to order for our narcissistic concerns.” Dennett’s claim, while appearing modest, characterizes intentionality as an instrument of our agency, and so as ‘narcissistic’ in a personal sense. Intentional patterns, rather, are ad hoc evolutionary solutions to various social or natural environmental problems, some perhaps obvious, others obscure. And this refers only to the ‘patterns’ accessed by the brain. There is the further question of metacognitive access, and the degree to which the intentionality we all seem to think we have might be better explained as a kind of metacognitive illusion pertaining to neglect.

Asymptotic. Bottomless. Rules hanging with their interpretations.

All the low-dimensional projections bridging pool to pool are evolutionary artifacts of various functional requirements, ‘fixes,’ multitudes of them, to some obscure network of ancestral, environmental problems. They are parochial, not to our ‘concerns’ as ‘persons,’ but to the circumstances that saw them selected to the exclusion of other possible fixes. To return to Dennett’s categories, the information ‘beneath notice,’ or neglected, may be out-and-out crucial for understanding a given capacity, such as ‘memory’ or ‘agency’ or what have you, even though metacognitive access to this information was irrelevant to our ancestors’ survival. Likewise, what is ‘trackable’ may be idiosyncratic, information suited to some specific, practical cognitive function, and therefore entirely incompatible with and so refractory to theoretical cognition–philosophy as the skeptics have known it.

Why do we find the notion of a fractionate, non-veridical memory surprising? Because we assume otherwise, namely, that memory is whole and veridical. Why do we assume otherwise? Because informatic neglect leads us to mistake the complex for the simple, the special purpose for the general purpose, and the tertiary for the primary. Our metacognitive intuitions are not reliable; what we think we do or undergo and what the sciences of the brain reveal need only be loosely connected. Why does it seem so natural to assume that intentional patterns are “made to order for our narcissistic concerns”? Well, for the same reason it seems so natural to assume that memory is monolithic and veridical: in the absence of information to the contrary, our metacognitive intuitions carry the day. Intentionality becomes a personal tool, as opposed to a low-dimensional projection accessed via metacognitive deliberation (for metacognition), or a heuristic device possessing a definite evolutionary history and a limited range of applications (for cognition more generally).

So to return to our diagram of ‘information pools’:

[Figure: Levels of information privation]

we can clearly see how the ‘Curse of Dimensionality’ is compounded when it comes to theoretical metacognition. Thus the ‘blind brain’ moniker. BBT argues that the apparent perplexities of consciousness and intentionality that have bedevilled philosophy for millennia are artifacts of cognitive and metacognitive neglect. It agrees with Dennett that the relationship between all these levels is an adaptive one, that low-dimensional projections must earn their keep, but it blocks the assumption that we are the keepers, seeing this intuition as the result of metacognitive neglect (sufficiency, to be precise). It’s no coincidence, it argues, that all intentional concepts and phenomena seem ‘acausal,’ both in the sense of seeming causeless, and in the sense of resisting causal explanation. Metacognition has no access whatsoever to the neurofunctional context of any information broadcast or integrated in consciousness, and so finds itself ‘encapsulated,’ stranded with a profusion of low-dimensional projections that it cannot cognize as such, since doing so would require metacognitive access to the very neurofunctional contexts that are occluded. Our metacognitive sense of intentionality, in other words, depends upon making a number of clear mistakes–much as in the case of memory.

The relations between ‘pools,’ it should be noted, are not ‘vehicles’ in the sense of carrying ‘information about.’ If that were the case, all the functioning components in the system would have to count as ‘vehicles,’ insofar as the whole system is required to produce the information that does find itself broadcast or integrated. The ‘information about’ part is simply an artifact of what BBT calls medial neglect, the aggregate blindness of the system to its ongoing operations. Since metacognition can only neglect the neural functions that make a given conscious experience possible–since it is itself invisible to itself–it mistakes an astronomically complex systematic effect for a property belonging to that experience.

The very reason theorists like Dretske or Fodor insist on semantic interpretations of information is the same reason those interpretations will perpetually resist naturalistic explanation: they are attempting to explain a kind of ‘perspectival illusion,’ the way the information broadcast or integrated exhausts the information available for deliberative cognition, so generating the ‘only-game-in-town-effect’ (or sufficiency). ‘Thoughts’ (or the low-dimensional projections we confuse for them) must refer to (rather than reliably covary with) something in the world because metacognition neglects all the neurofunctional and environmental machinery of that covariance, leaving only Brentano’s famous posit, intentionality, as the ‘obvious’ explanandum–one rendered all the more ‘obvious’ by thousands of largely fruitless years of intentional conceptual toil.

Aboutness is magic, in the sense that it requires the neglect of information to be ‘seen.’ It is an illusion of introspection, a kind of neural camera obscura effect, ‘obvious’ only because metacognition is a captive of the information it receives. This is why our information pool diagram can be so easily retooled to depict the prevailing paradigm in the cognitive sciences today:

[Figure: Levels of intentionality]

The vertical arrows represent medial functions (sound, light, neural activity) that are occluded and so are construed acausally. The ‘mind’ (or the network of low-dimensional projections we confuse as such) is thought to be ‘emergent from’ or ‘functionally irreducible to’ the brain, which possesses both conscious and nonconscious ‘representations of’ or ‘intentional relations to’ the world. No one ever pauses to ask what kind of cognitive resources the brain could bring to bear upon itself, what it would take to reliably model the most complicated machinery known from within that machinery using only cognitive systems adapted to modelling external environments. The truth of the brain, they blithely assume, is available to the brain in the form of the mind.

Or thought.

But this is little more than wishful ‘thinking,’ as the opaque, even occult, nature of the intentional concepts used might suggest. Whatever emergence the brain affords, why should metacognition possess the capacity to model it, let alone be it? Whatever function the broadcasting or integration of a given low-dimensional projection provides, why should metacognition, which is out-and-out blind to neurofunctionality, possess the capacity to reliably model it, as opposed to doing what cognition always does when confronted with insufficient information it cannot flag as insufficient: leap to erroneous conclusions?

All of this is to say that the picture is both clearer and less sunny than Dennett’s ultimately abortive interrogation of information privation would lead us to believe. Certainly in an everyday sense it’s obvious that we take perspectives, views, angles, standpoints, and stances vis-à-vis various things. Likewise, it seems obvious that we have two broad ways in which to explain things, either by reference to what causes an event, or by reference to what rationalizes an event. As a result, it seems natural to talk of two basic explanatory perspectives or stances, one pertaining to the causes of things, the other pertaining to the reasons for things.

The question is one of how far we can trust our speculations regarding the latter beyond this platitudinous observation. One might ask, for instance, if intentionality is a heuristic, which is to say, a specialized problem solver, then what are its conditions of applicability? The mere fact that this is an open question means that things like the philosophical question of knowledge, to give just one example, should be divided into intentional and mechanical incarnations–at the very least. And given the ‘narcissistic idiosyncrasy’ of the former, we need to consider whether the kinds of conundrums that have plagued epistemology across the ages are precisely what we should expect. Chained to the informatic bottleneck of metacognition, epistemology has been trading in low-dimensional projections all along, attempting time and again to wring universality out of what amount to metacognitive glimpses of parochial cognitive heuristics. There’s a very real chance the whole endeavour has been little more than a fool’s errand.

The real question is why, as philosophers, we should bother entertaining the intentional stance at all. If the aim of philosophy really is, as Sellars has it, “to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term,” if explanatory scope is our goal, then understanding intentionality amounts to understanding it in functional terms, which is to say, as something that can only be understood in terms of the information it neglects. What is the adaptive explanatory ecology of any given intentional concept? What was it selected for? And if it is ‘specialized,’ would that not suggest incompatibility with different (i.e., theoretical) cognitive contexts? Given what little information we have, what arbitrates our various metacognitive glimpses, our perpetually underdetermined interpretations, allowing us to discriminate where any one of them falls on the continuum between the reliable and the farcical?

Short of answers to these questions, we cannot even claim to be engaging in educated guesswork as opposed to mere guesswork. So to return to “The Normal Well-tempered Mind,” what does Dennett mean when he says that neurons are best seen as agents? Does he mean that cellular machinery is complicated machinery, and so ill-served when conceptualized as a ‘mere switch’? Or does he mean they really are like little people, organized in little tribes, battling over little hopes and little crimes? I take it as obvious that he means the former, and that his insistence on the latter is more the ersatz product of a commitment he made long ago, one he has invested far too much effort in to relinquish.

‘Feral neurons’ are a metaphoric conceit, an interesting way to provoke original thought, perhaps, a convenient façon de parler in certain explanatory contexts, but more an attempt to make good on an old and questionable argument than anything else, one that would have made a younger Dennett, the one who wrote “Mechanism and Responsibility,” smile and scowl as he paused to conjure some canny and critical witticism. Intentionality, as the history of philosophy should make clear, is an invitation to second-order controversy and confusion. Perhaps what we have here is a potential empirical basis for the infamous Wittgensteinian injunction against philosophical language games. Attributing intentionality in first-order contexts is not only fine, it’s unavoidable. But as soon as we make second-order claims on the basis of metacognitive deliberation, say things like, ‘Knowledge is justified, true belief,’ we might as well be playing Monopoly using the pieces of Risk, ‘deriving’ theoretical syntaxes constrained–at that point–by nothing ‘out there.’

On BBT, ‘knowledge’ simply is what it has to be if we agree that the life science paradigm cuts reality as close to the joints as anything we have ever known: a system of mechanical bets, a swarm of secondary asteroids following algorithmic trajectories, ‘miraculously’ averting disaster time and again.

Breathtakingly complex.

Alien.