Three Pound Brain

No bells, just whistling in the dark…

Tag: science

Are Minds like Witches? The Catastrophe of Scientific Progress (by Ben Cain)

by rsbakker

As scientific knowledge has advanced over the centuries, informed people have come to learn that many traditional beliefs are woefully erroneous. There are no witches, ghosts, or disease-causing demons, for example. But are cognitive scientists currently on the verge of showing also that belief in the ordinarily-defined human self is likewise due to a colossal misunderstanding, that there are no such things as meaning, purpose, consciousness, or personal self-control? Will the assumption of personhood itself one day prove as ridiculous as the presumption that some audacious individuals can make a pact with the devil?

Progress and a World of Mechanisms

According to this radical interpretation of contemporary science, everything is natural and nature consists of causal relationships between material aggregates that form systems or mechanisms. The universe is thus like an enormous machine except that it has no intelligent designer or engineer. Atoms evolve into molecules, stars into planets, and at least one planet has evolved life on its surface. But living things are really just material objects with no special properties. The only efficacious or real property in nature, very generally speaking, is causality, and thus the real question is always just what something can do, given its material structure, initial conditions, and the laws of nature. As one of the villains of The Matrix Reloaded declares, “We are slaves to causality.” Thus, instead of there being people or conscious, autonomous minds who use symbols to think about things and to achieve their goals, there are only mechanisms, which is to say forces acting on complex assemblies of material components, causing the system to behave in one way rather than another. Just as the sun acts on the Earth’s water cycle, causing oceans to evaporate and thus forming clouds that eventually rain and return the water via snowmelt runoff and groundwater flow to the oceans, the environment acts on an animal’s senses, which send signals to its brain whereupon the brain outputs a more or less naturally selected response, depending on whether the genes exercise direct or indirect control over their host. Systems interacting with systems, as dictated by natural laws and probabilities—that’s all there is, according to this interpretation of science.

How, then, do myths form that get the facts so utterly wrong? Myths in the pejorative sense form as a result of natural illusions. Omniscience isn’t given to lowly mammals. To compensate for their being thrown into the world without due preparation, as a result of the world’s dreadful godlessness, some creatures may develop the survival strategy of being excessively curious, which drives them often to err on the side not of caution but of creativity. We track not just the patterns that lead us to food or shelter, but myriad other structures on the off-chance that they’re useful. And as we evolve more intelligence than wisdom, we creatively interpret these patterns, filling the blanks in our experience with placeholder notions that indicate both our underlying ignorance and our presumptuousness. In the case of witches, for example, we mistake some hapless individual’s introversion and foreignness for some evil complicity in suffering that’s actually due merely to bad luck and to nature’s heartlessness. Given enough bumbling and sanctimony, that lack of information about a shy foreigner results in the burning of a primate for allegedly being a witch. A suitably grotesque absurdity for our monstrously undead universe.

And in the corresponding case of personhood itself, the lack of information about the brain causes our inquisitive species to reify its ignorance, to mistake the void found by introspection for spirit or mind, which our allegedly wise philosophers then often interpret as being all that’s ultimately real. That is, we try to control ourselves along with our outer environment, to enhance our fitness to carry our genes, but because our brain didn’t evolve to reveal its mechanisms to itself, the brain outputs nonsense to satisfy its curiosity, and so the masses mislead themselves with fairytales about the supernatural property of personhood, misinterpreting the lack of inner access as being miraculous direct acquaintance with oneself by something called self-consciousness. We mislead ourselves into concluding that the self is more than the brain that can’t understand its operations without scientific experimentation. Instead, we’re seduced into dogmatizing that our blindness to our neural self is actually magical access to a higher, virtually immaterial self.

Personhood and the Natural Reality of Illusions

So much for the progressive interpretation of science. I believe, however, that this interpretation is unsustainable. The serpent’s jaws come round again to close on the serpent’s own tail, and so we’re presented with yet another way to go spectacularly wrong; that is, the radical, progressive naturalist joins the deluded supernaturalist in an extravagant leap of logic. To see this, realize that the above picture of nature can be no picture at all. To speak of a picture, a model, a theory, or a worldview, or even of thinking or speaking in general, as these words are commonly defined is, of course, forbidden to the austere naturalist. There are no symbols in this interpretation which is no interpretation; there are only phases in the evolution of material systems, objects caught between opposing forces that change according to ceteris paribus laws which are not really laws. Roughly speaking—and remember that there’s no such thing as speaking—there’s only causality in nature. There are no intentional or normative properties, no reference, purpose, or goodness or badness.

In the unenlightened mode of affecting material systems, this “means” that if you interpret scientific progress as entailing that there are no witches, demons, or people in general, in the sense that the symbols for these entities are vacuous, whereas other symbols enjoy meaningful status such as the science-friendly words, “matter,” “force,” “law,” “mechanism,” “evolution,” and so forth, you’ve fallen into the same trap that ensnares the premodern ignoramus who fails to be humbled by her grievous knowledge deficit. All symbols are equally bogus, that is, supernatural, according to the foregoing radical naturalism. Thus, this radical must divest herself not just of the premodern symbols, but of the scientific ones as well—assuming, that is, she’s bent on understanding these symbols in terms of the naïve notion of personhood which, by hypothesis, is presently being made obsolete by science. So for example, if I say, “Science has shown that there are no witches, and the commonsense notion of the mind is likewise empty,” the radical naturalist is hardly free to interpret this as saying that premodern symbols are laughable whereas modern scientific ones are respectable. In fact, strictly speaking, she fails to be a thoroughgoing eliminativist as soon as she assumes that I’ve thereby said anything at all. All speaking is illusion, for the radical naturalist; there are only forces acting on material systems, causing those systems to behave, to exercise their material capacities, whereupon the local effects might feed back into a larger system, leading to cycles of average collective behaviour. There is no way of magically capturing that mechanistic reality in symbolic form; instead, there’s just the illusion of doing so.

How, then, should scientific progress be understood, given that there are no such things as scientific theories, progress, or understanding, as these things are commonly defined? In short, what’s the uncommon, enlightened way of understanding science (which is actually no sort of understanding)? What’s the essence of postmodern, scientific mysticism, as we might think of it? In other words, what will the posthuman be doing once her vision is unclouded by illusions of personhood and so is filled with mechanisms as such? The answer must be put in terms, once again, of causality. Scientific enlightenment is a matter (literally) of being able to exercise greater control over certain systems than those who lack scientific tools can manage. In short, assuming we define ourselves as a species in terms of the illusions of a supernatural self, the posthuman who embraces radical naturalism and manages to clear her head of the cognitive vices that generate those illusions will be something of a pragmatist. She’ll think in terms of impersonal systems acting and reacting to each other and being forced into this or that state, and she’ll appreciate how she in turn is driven by her biochemical makeup and evolutionary history to survive by overpowering and reshaping her environment, aided by this or that trait or tool.

Radical, eliminativistic naturalism thus implies some version of pragmatism. The version not implied would be one that defines usefulness in terms of the satisfaction of personal desires. (And, of course, there would really be some form of causality instead of any logical implication.) But the point is that for the eliminativist, an illusion-free individual would think purely in terms of causality and of materialistic advantage based on a thorough knowledge of the instrumental value of systems. She’d be pushed into this combative stance by her awareness that she’s an animal that’s evolved with that survivalist bias, and so her scientific understanding wouldn’t be neutral or passive, but supplemented by a more or less self-interested evaluation of systems. She’d think in terms of mechanisms, yes, but also of their instrumental value to her or to something with which she’s identified, although she wouldn’t assume that anyone’s survival, including hers, is objectively good.

For example, the radical naturalist might think of systems as posing problems to be solved. The posthuman, then, would be busy solving problems, using her knowledge to make the environment more conducive to her. She wouldn’t think of her knowledge as consisting of theories made up of symbols; instead, she’d see her brain and its artificial extensions as systems that enable her to interact successfully with other systems. The success in question would be entirely instrumental, a matter of engineering with no presumption that the work has any ultimate value. There could be no approval or disapproval, because there would be no selves to make such judgments, apart from any persistence of a deluded herd of primates. The re-engineered system would merely work as designed, and the posthuman would thereby survive and be poised to meet new challenges. This would truly be work for work’s sake.

What, then, should the enlightened pragmatist say about the dearth of witches? Can she sustain the sort of positivistic progressivism with which I began this article? Would she attempt to impact her environment by making sounds that are naively interpreted as meaning that science has shown there are no witches? No, she would “say” only that the neural configuration leading to behaviour associated with the semantic illusion that certain symbols correspond to witchy phenomena has causes and effects A and B, whereas the neural configuration leading to so-called enlightened, modern behaviour, often associated with the semantic illusion that certain other symbols correspond to the furious buying and selling of material goods and services and to equally tangible, presently-conventional behaviour, has causes and effects C and D. Again, if everything must be perceived in terms of causality, the neural states causing certain primates to be burned as witches should be construed solely in terms of their causes and effects. In short, the premodern, allegedly savage illusion of witchcraft loses its sting of embarrassment, because that illusion evidently had causal power and thus a degree of reality. Cognitive illusions aren’t nothing at all; they’re effects of vices like arrogance, self-righteousness, impertinence, irrationality, and so forth, and they help to shape the real world. There’s no enlightened basis for any normative condemnation of such an illusion. All that matters is the pragmatic, instrumental judgment of something’s effectiveness at solving a problem.

Yes, if there’s no such thing as the meaning of a symbol, there are no witches, in that there’s no relation of correspondence between “witch” and creatures that would fit the description. Alas, this shouldn’t comfort the radical naturalist since there can likewise be no negative semantic relation between “symbol” and symbols to make sense of that statement about the nonexistence of witches. If naturalism forces us to give up entirely on the idea of intentionality, we mustn’t interpret the question of something’s nonexistence as being about a symbol’s failure to pick out something (since there would be no such thing as a symbol in the first place). And if we say there are no symbols, just as there are no witches or ghosts or emergent and autonomous minds, we likewise mustn’t think this is due merely to any semantic failure.

What, then, must nonexistence be, according to radical naturalism? It must be just relative powerlessness. To say that there are no witches “means” that the neural states involved in behaviour construed in terms of witchcraft are relatively powerless to systematically or reliably impact their environment. Note that this needn’t imply that the belief in witches is absolutely powerless. After all, religious institutions have subdued their flocks for millennia based on the ideology of demons, witches and the like, and so the pragmatist mustn’t pretend she can afford to “say” that witches have a purely negative ontological status. Again, just because there aren’t really any witches doesn’t mean there’s no erroneous belief in witchcraft, and that belief itself can have causal power. The belief might even conceivably lead to a self-fulfilling prophecy in which case something like witchcraft will someday come into being. At any rate, the belief in witches opens up problems to be solved by engineering (whether to side with the oppressive Church or to overthrow it, etc.), and that would be the enlightened posthuman’s only concern with respect to witches.

Indeed, a radical naturalist who understands the cataclysmic implications of scientific progress has no epistemic basis whatsoever for belittling the causal role of a so-called illusion like witchcraft. Again, some neural states have causes and effects A and B while others have causes and effects C and D—and that’s it as far as objective reality is concerned. On top of this, at best, there’s pragmatic instrumentalism, which raises the question merely of the usefulness of the belief in witches. Is that belief entirely useless? Obviously not, as Western history attests. Is the belief in witches immoral or beneath our dignity as secular humanists? The question should be utterly irrelevant, since morality and dignity are themselves illusions, given radical naturalism; moreover, the “human” in “humanist” must be virtually empty. What an enlightened person could say with integrity is just that the belief in witches benefits some primates more than others, by helping to establish a dominance hierarchy.

The same goes for the nonexistence of minds, personhood, consciousness, semantic meaning, or purpose. If these things are illusions, so what? Illusions can have causal power, and the radical naturalist must distinguish between causal relations solely by assigning them their instrumental value, noting that some effects help some primates to survive by solving certain problems, while hindering others. Illusions are thus real enough for the truly radical naturalist. In particular, if the brain tries to discover its mechanisms through introspection and naturally comes up empty, that need not be the end of the natural process. The cognitive blind spot delivers an illusion of mentality or of immaterial spirituality, which in turn causes primates to act as if there were such things as cultures consisting of meaningful symbols, moral values and the like. We’d be misled into creating something that nevertheless exists as our creation. Just as the whole universe might have popped into existence from nothing, according to quantum mechanics, cognitive science might entail that personhood develops from the introspective experience of an inner emptiness. In fact, we’re not empty, because our heads are full of brain matter. But the tool of introspection can be usefully misapplied, as it evidently causes the whole panoply of culture-dependent behaviours.

What is it, then, to call personhood a mere illusion? What’s the difference between illusion and reality, for the radical naturalist, given that both can have causal power in the domain of material systems? If we say that illusions depend on ignorance of certain mechanisms, this turns all mechanisms into illusions and deprives us of so-called reality, assuming none of us is omniscient. As long as we select which mechanisms and processes to attend to in our animalistic dealings with the environment, we all live in bubble worlds based on that subjectivity which thus has quasi-transcendental status. To illustrate, notice that when the comedian Bill Maher mocks the Fox News viewer for living in the Fox Bubble and for being ignorant of the “real world,” Maher forgets that he too lives in a culture, albeit in a liberal rather than a conservative one, and that he doesn’t conceive of everything with the discipline of strict impersonality or objectivity, as though he were the posthuman mystic.

What seems to be happening here is that the radical naturalist is liable to identify with a science-centered culture and thus she’s quick to downgrade the experience of those who prefer the humanities, including philosophy, religion, and art. From the science-centered perspective, we’re fundamentally animals caught in systems of causality, but we nevertheless go on to create cultures in our bumbling way, blissfully ignorant of certain mechanistic realities and driven by cognitive vices and biases as we allow ourselves to be mesmerized by the “illusion” of a transcendent, immaterial self.  But there’s actually no basis here for any value judgment one way or the other. From a barebones scientific “perspective,” the institution of science is as illusory as witchcraft. All that’s real are configurations of material elements that evolve in orderly ways—and witchcraft and personhood are free to share in that reality as illusions. Judging by the fact that the idea of witches has evidently caused some people to be treated accordingly and that the idea of the personal self has caused us to create a host of artificial, cultural worlds within the indifferent natural one, there appears to be more than enough reality to go around.

Science, Nihilism, and the Artistry of Nature (by Ben Cain)

by rsbakker

Technologically-advanced societies may well destroy themselves, but there are two other reasons to worry that science rather than God will usher in the apocalypse, directly destroying us by destroying our will to live. The threat in question is nihilism, the loss of faith in our values and thus the wholesale humiliation of all of us, due to science’s tendency to falsify every belief that’s traditionally comforted the masses. The two reasons to suspect that science entails nihilism are that scientists find the world to be natural (fundamentally material, mechanical, and impersonal), whereas traditional values tend to have supernatural implications, and that scientific methods famously bypass intuitions and feelings to arrive at the objective truth.

These two features of science, the content of scientific theories and the scientific methods of inquiry, might seem redundant, since the point about methods is that science is methodologically naturalistic. Thus, the point about the theoretical content might seem to come as no surprise. By definition, a theory that posits something supernatural wouldn’t be scientific. While scientists may be open to learning that the world isn’t a natural place, making that discovery would amount to ending or at least transforming the scientific mode of inquiry. Nevertheless, naturalism, the worldview that explains everything in materialistic and mechanistic terms, isn’t just an artifact of scientific methods. What were once thought to be ghosts and gods and spirits really did turn out to be natural phenomena.

Moreover, scientific objectivity seems a separate cause of nihilism in that, by showing us how to be objective, paradigmatic scientists like Galileo, Newton, and Darwin showed us also how to at least temporarily give up on our commonsense values. After all, in the moment when we’re following scientific procedures, we’re ignoring our preferences and foiling our biases. Of course, scientists still have feelings and personal agendas while they’re doing science; for example, they may be highly motivated to prove their pet theory. But they also know that by participating in the scientific process they’re holding their feelings to the ultimate test. Scientific methods objectify not just the phenomenon but the scientist; as a functionary in the institution, she must follow strict procedures, recording the data accurately, thinking logically, and publishing the results, making her scientific work as impersonal as the rest of the natural world. In so far as nonscientists understand this source of science’s monumental success, we might come to question the worth of our subjectivity, of our private intuitions, wishes, and dreams which scientific methods brush aside as so many distortions.

Despite the imperative to take scientists as our model thinkers in the Age of Reason, we might choose to ignore these two threats to our naïve self-image. Nevertheless, the fear is that distraction, repression, and delusion might work only for so long before the truth outs. You might think, on the contrary, that science doesn’t entail nihilism, since science is a social enterprise and thus it has a normative basis. Scientists are pragmatic and so they evaluate their explanations in terms of rational values of simplicity, fruitfulness, elegance, utility, and so on. Still, the science-centered nihilist can reply, those values might turn out to be mechanisms, as scientists themselves would discover, in which case science would humiliate not just the superstitious masses but the pragmatic theorists and experimenters as well. That is, science would refute not only the supernaturalist’s presumptions but the elite instrumentalist’s view of scientific methods. Science would become just another mechanism in nature and scientific theories would have no special relationship with the facts since from this ultra-mechanistic “perspective,” not even scientific statements would consist of symbols that bear meaning. The scientific process would be seen as consisting entirely of meaningless, pointless, and amoral causal relations—just like any other natural system.

I think, then, this sort of nihilist can resist that pragmatic objection to the suspicion that science entails nihilism and thus poses a grave, still largely unappreciated threat to society. There’s another objection, though, which is harder to discount. The very cognitive approach which is indispensable to scientific discovery, the objectification of phenomena, which is to say the analysis of any pattern in impersonal terms of causal relations, is itself a source of certain values. When we objectify something we’re thereby well-positioned to treat that thing as having a special value, namely an aesthetic one. Objectification overlaps with the aesthetic attitude, which is the attitude we take up when we decide to evaluate something as a work of art, and thus objects, as such, are implicitly artworks.

 

Scientific Objectification and the Aesthetic Attitude

 

There’s a lot to unpack there, so I’ll begin by explaining what I mean by the “aesthetic attitude.” This attitude is explicated differently by Kant, Schopenhauer, and others, but the main idea is that something becomes an artwork when we adopt a certain attitude towards it. The attitude is a paradoxical one, because it involves a withholding of personal interest in the object and yet also a desire to experience the object for its own sake, based on the assumption that such an experience would be rewarding. When an observer is disinterested in experiencing something, but chooses to experience it because she’s replaced her instrumental or self-interested perspective with an object-oriented one so that she wishes to be absorbed by what the object has to offer, as it were, she’s treating the object as a work of art. And arguably, that’s all it means for something to be art.

For example, if I see a painting on a wall and I study it up close with a view to stealing it, because all the while I’m thinking of how economically valuable the painting is, I’m personally interested in the painting and thus I’m not treating it as art; instead, for me the painting is a commodity. Suppose instead that I have no ulterior motive as I look at the painting, but I’m bored by it, so I’m not passively letting the painting pour its content into me, as it were; I have no respect for such an experience in this case, and I’m not giving the painting a fair chance to captivate my attention. Then I’m likewise not treating the painting as art. I’m giving it only a cursory glance, because I lack the selfless interest in letting the painting hold all of my attention, and so I don’t anticipate the peculiar pleasure from perceiving the painting that we associate with an aesthetic experience. Whether it’s a painting, a song, a poem, a novel, or a film, the object becomes an artwork when it’s regarded as such, which requires that the observer adopt this special attitude towards it.

Now, scientific objectivity plainly isn’t identical to the aesthetic attitude. After all, regardless of whether scientists think of nature as beautiful when they’re studying the evidence or performing experiments or formulating mechanistic explanations, they do have at least one ulterior motive. Some scientists may have an economic motive, others may be after prestige, but all scientists are interested in understanding how systems work. Their motive, then, is a cognitive one—which is why they follow scientific procedures, because they believe that scientific objectification (mechanistic analysis, careful collection of the data, testing of hypotheses with repeatable experiments, and so on) is the best means of achieving that goal.

However, this cognitive interest posits a virtual aesthetic stance as the means to achieve knowledge. Again, scientists trust that their personal interests are irrelevant to scientific truth and that regardless of how they prefer the world to be, the facts will emerge as long as the scientific methods of inquiry are applied with sufficient rigor. To achieve their cognitive goal, scientists must downplay their biases and personal feelings, and indeed they expect that the phenomenon will reveal its objective, real properties when it’s scientifically scrutinized. The point of science is for us to get out of the way, as much as possible, to let the world speak with its own voice, as opposed to projecting our fantasies and delusions onto the world. Granted, as Kant explained, we never hear that voice exactly—what Pythagoras called the music of the spheres—because in the act of listening to it or of understanding it, we apply our species-specific cognitive faculties and programs. Still, the point is that the institution of science is structured in such a way that the facts emerge because the scientific form of explanation circumvents the scientists’ personalities. This is the essence of scientific objectivity: in so far as they think logically and apply the other scientific principles, scientists depersonalize themselves, meaning that they remove their character from their interaction with some phenomenon and make themselves functionaries in a larger system. This system is just the one in which the natural phenomenon reveals its causal interrelations thanks to the elimination of our subjectivity which would otherwise personalize the phenomenon, adding imaginary and typically supernatural interpretations which blind us to the truth.

And when scientists depersonalize themselves, they open themselves up to the phenomenon: they study it carefully, taking copious notes, using powerful technologies to peer deeply into it, and isolating the variables by designing sterile environments to keep out background noise. This is very like taking up the aesthetic attitude, since the art appreciator too becomes captivated by the work itself, getting lost in its objective details as she sets aside any personal priority she may have. Both the art appreciator and the scientist are personally disinterested when they inspect some object, although the scientist is often just functionally or institutionally so, and both are interested in experiencing the thing for its own sake, although the art appreciator does so for the aesthetic reward whereas the scientist expects a cognitive one. Both objectify what they perceive in that they intend to discern only the subtlest patterns in what’s actually there in front of them, whether on the stage, in the picture frame, or on the novel’s pages, in the case of fine art, or in the laboratory or the wild in the case of science. Thus, art appreciators speak of the patterns of balance and proportion, while scientists focus on causal relations. And the former are rewarded with the normative experience of beauty or are punished with a perception of ugliness, as the case may be, while the latter speak of cognitive progress, of science as the premier way of discovering the natural facts, and indeed of the universality of their successes.

Here, then, is an explanation of what David Hume called the curious generalization that occurs in inductive reasoning, when we infer that because some regularity holds in some cases, therefore it likely holds in all cases. We take our inductive findings to have universal scope because when we reason in that way, we’re objectifying rather than personalizing the phenomenon, and when we objectify something we’re virtually taking up the aesthetic attitude towards it. Finally, when we take up such an attitude, we anticipate a reward, which is to say that we assume that objectification is worthwhile—not just for petty instrumental reasons, but for normative ones, which is to say that objectification functions as a standard for everyone. When you encounter a wonderful work of art, you think everyone ought to have the same experience and that someone who isn’t as moved by that artwork is failing in some way. Likewise, when you discover an objective fact of how some natural system operates, you think the fact is real and not just apparent, that it’s there universally for anyone on the planet to confirm.

Of course, inductive generalization is based also on metaphysical materialism, on the assumptions that the world is made of atoms and that a chunk of matter is just the sort of thing to hold its form and to behave in regular ways regardless of who’s observing it, since material things are impersonal and thus they lack any freedom to surprise. But scientists persist in speaking of their cognitive enterprise as progressive, not just because they assume that science is socially useful, but because scientific findings transcend our instrumental motives since they allow a natural system to speak mainly for itself. Moreover, scientists persist in calling those generalizations laws, despite the unfortunate personal (theistic) connotations, given the comparison with social laws. These facts indicate that inductive reasoning isn’t wholly rational, after all, and that the generalizations are implicitly normative (which isn’t to say moral), because the process of scientific discovery is structurally similar to the experience of art.

 

Natural Art and Science’s True Horror

 

Some obvious questions remain. Are natural phenomena exactly the same as fine artworks? No, since the latter are produced by minds whereas the former are generated by natural forces and elements, and by the processes of evolution and complexification. Does this mean that calling natural systems works of art is merely analogical? No, because the similarity in question isn’t accidental; rather, it’s due to the above theory of art, which says that art is nothing more than what we find when we adopt the aesthetic attitude towards it. According to this account, art is potentially everywhere and how the art is produced is irrelevant.

Does this mean, though, that aesthetic values are entirely subjective, that whether something is art is all in our heads since it depends on that perspective? The answer to this question is more complicated. Yes, the values of beauty and ugliness, for example, are subjective in that minds are required to discover and appreciate them. But notice that scientific truth is likewise just as subjective: minds are required to discover and to understand such truth. What’s objective in the case of scientific discoveries is the reality that corresponds to the best scientific conclusions. That reality is what it is regardless of whether we explain it or even encounter it. Likewise, what’s objective in the case of aesthetics is something’s potential to make the aesthetic appreciation of it worthwhile. That potential isn’t added entirely by the art appreciator, since that person opens herself up to being pleased or disappointed by the artwork. She hopes to be pleased, but the art’s quality is what it is and the truth will surface as long as she adopts the aesthetic attitude towards it, ignoring her prejudices and giving the art a chance to speak for itself, to show what it has to offer. Even if she loathes the artist, she may grudgingly come to admit that he’s produced a fine work, as long as she’s virtually objective in her appreciation of his work, which is to say as long as she treats it aesthetically and impersonally for the sake of the experience itself. Again, scientific objectivity differs slightly from aesthetic appreciation, since scientists are interested in knowledge, not in pleasant experience. But as I’ve explained, that difference is irrelevant since the cognitive agenda compels the scientist to subdue or to work around her personality and to think objectively—just like the art beholder.

So do beauty and ugliness exist as objective parts of the world? As potentials to reward or to punish the person who takes up anything like the aesthetic attitude, including a stance of scientific objectification, given the extent of the harmony or disharmony in the observed patterns, for example, I believe the answer is that those aesthetic properties are indeed as real as atoms and planets. The objective scientist is rewarded ultimately with knowledge of how nature works, while someone in the grip of the aesthetic attitude is rewarded (or punished) with an experience of the aesthetic dimension of any natural or artificial product. That dimension is found in the mechanical aspect of natural systems, since aesthetic harmony requires that the parts be related in certain ways to each other so that the whole system can be perceived as sublime or otherwise transcendent (mind-blowing). Traditional artworks are self-contained and science likewise deals largely with parts of the universe that are analyzed or reduced to systems within systems, each studied independently in artificial environments that are designed to isolate certain components of the system.

Now, such reduction is futile in the case of chaotic systems, but the grandeur of such systems is hardly lessened when the scientist discovers how a system which is sensitive to initial conditions evolves unpredictably as defined by a mathematical formula. Indeed, chaotic systems are comparable to modern and postmodern art as opposed to the more traditional kind. Recent, highly conceptual art or the nonrepresentational kind that explores the limits of the medium is about as unpredictable as a chaotic system. So the aesthetic dimension is found not just in part-whole relations and thus in beauty in the sense of harmony, but in free creativity. Modern art and science are both institutions that idealize the freedom of thought. Freed from certain traditions, artists now create whatever they’re inspired to create; they’re free to experiment, not to learn the natural facts but to push the boundaries of human creativity. Likewise, modern scientists are free to study whatever they like (in theory). And just as such modernists renounce their personal autonomy for the sake of their work, giving themselves over to their muse, to their unconscious inclinations (somewhat like Zen Buddhists who abhor the illusion of rational self-control), or instead to the rigors of institutional science, nature reveals its mindless creativity when chaotic systems emerge in its midst.
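
To make that sensitivity to initial conditions concrete, here is a minimal sketch (an illustration added for this point, not anything from the essay) using the logistic map, a textbook one-line chaotic system; the parameter, starting values, and step count are arbitrary assumptions:

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n); r = 4.0 lies in the chaotic regime.
def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.200000000)
b = trajectory(0.200000001)  # perturbed by one part in a billion

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  (gap {abs(a[n] - b[n]):.6f})")
# The gap grows roughly exponentially; within a few dozen steps the two
# runs are effectively unrelated, though the rule is fully deterministic.
```

The formula is trivial to state, yet its long-run behaviour defies prediction, which is the sense in which a chaotic system "evolves unpredictably as defined by a mathematical formula."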

But does the scientist actually posit aesthetic values while doing science, given that scientific objectification isn’t identical with the aesthetic attitude? Well, the scientist would generally be too busy doing science to attend to the aesthetic dimension. But it’s no accident that mathematicians are disproportionately Platonists, that early modern scientists saw the cosmic order as attesting to God’s greatness, or that postmodern scientists like Neil deGrasse Tyson, who hosts the rebooted television show Cosmos, labour to convince the average American that naturalism ought to be enough of a religion for them, because the natural facts are glorious if not technically miraculous. The question isn’t whether scientists supply the world with aesthetic properties, like beauty or ugliness, since those properties preexist science as objective probabilities of uplifting or depressing anyone who takes up the aesthetic attitude, which attitude is practically the same as objectivity. Instead, the question here might be whether scientific objectivity compels the scientist to behold a natural phenomenon as art. Assuming there are nihilistic scientists, the answer would have to be no. The reason for this would be the difference in social contexts, which accounts for the difference between the goals and rewards. Again, the art appreciator wants a certain refined pleasure whereas the scientist wants knowledge. But the point is that the scientist is poised to behold natural systems as artworks, just in so far as she’s especially objective.

Finally, we should return to the question of how this relates to nihilism. The fear, raised above, was that because science entails nihilism, the loss of faith in our values and traditions, scientists threaten to undermine the social order even as they lay bare the natural one. I’ve questioned the premise, since objectivity entails instead the aesthetic attitude which compels us to behold nature not as arid and barren but as rife with aesthetic values. Science presents us with a self-shaping universe, with the mindless, brute facts of how natural systems work that scientists come to know with exquisite attention to detail, thanks to their cognitive methods which effectively reveal the potential of even such systems to reward or to punish someone with an aesthetic eye. For every indifferent natural system uncovered by science, we’re well-disposed to appreciating that system’s aesthetic quality—as long as we emulate the scientist and objectify the system, ignoring our personal interests and modeling its patterns, such as by reducing the system to mechanical part-whole relations. The more objective knowledge we have, the more grist for the aesthetic mill. This isn’t to say that science supports all of our values and traditions. Obviously science threatens some of them and has already made many of them untenable. But science won’t leave us without any value at all. The more objective scientists are and the more of physical reality they disclose, the more we can perceive the aesthetic dimension that permeates all things, just by asking for pleasure rather than knowledge from nature.

There is, however, another great fear that should fill in for the nihilistic one. Instead of worrying that science will show us why we shouldn’t believe there’s any such thing as value, we might wonder whether, given the above, science will ultimately present us with a horrible rather than a beautiful universe. The question, then, is whether nature will indeed tend to punish or to reward those of us with aesthetic sensibilities. What is the aesthetic quality of natural phenomena in so far as they’re appreciated as artworks, as aesthetically interpretable products of undead processes? Is the final aesthetic judgment of nature an encouraging, life-affirming one that justifies all the scientific work that’s divorced the facts from our mental projections or will that judgment terrorize us worse than any grim vision of the world’s fundamental neutrality? Optimists like Richard Dawkins, Carl Sagan and Tyson think the wonders of nature are uplifting, but perhaps they’re spinning matters to protect science’s mystique and the secular humanistic myth of the progress of modern, science-centered societies. Perhaps the world’s objectification curses us not just with knowledge of many unpleasant facts of life, but with an experience of the monstrousness of all natural facts.

Who’s Afraid of Reduction? Massimo Pigliucci and the Rhetoric of Redemption

by rsbakker

On the one hand, Massimo Pigliucci is precisely the kind of philosopher that I like, one who eschews the ingroup temptations of the profession and tirelessly reaches out to the larger public. On the other hand, he is precisely the kind of philosopher I bemoan. As a regular contributor to the Skeptical Inquirer, one might think he would be prone to challenge established, academic opinions, but all too often such is not the case. Far from preparing his culture for the tremendous, scientifically-mediated transformations to come, he spends a good deal of his time defending the status quo–rationalizing, in effect, what needs to be interrogated through and through. Even when he critiques authors I also disagree with (such as Ray Kurzweil on the singularity) I find myself siding against him!

Burying our heads in the sand of traditional assumption, no matter how ‘official’ or ‘educated,’ is pretty much the worst thing we can do. Nevertheless, this is the establishment way. We’re hard-wired to essentialize, let alone forgive, the conditions responsible for our prestige and success. If a system pitches you to any height, well then, that is a good system indeed, the very image of rationality, if not piety as well. Tell a respectable scholar in the Middle Ages that the earth wasn’t the centre of the universe or that man wasn’t crafted in God’s image and he might laugh and bid you good day or scowl and alert the authorities—but he would most certainly not listen, let alone believe. In “Who Knows What,” his epistemological defense of the humanities, Pigliucci reveals what I think is just such a defensive, dismissive attitude, one that seeks to shelter what amounts to ignorance in accusations of ignorance, to redeem what institutional insiders want to believe under the auspices of being ‘skeptical.’ I urge everyone reading this to take a few moments to carefully consider the piece and form judgments one way or another, because in what follows, I hope to show you how his entire case is actually little more than a mirage, and how his skepticism is as strategic as anything to ever come out of Big Oil or Tobacco.

“Who Knows What” poses the question of the cognitive legitimacy of the humanities from the standpoint of what we really do know at this particular point in history. The situation, though Pigliucci never references it, really is quite simple: At long last the biological sciences have gained the tools and techniques required to crack problems that had hitherto been the exclusive province of the humanities. At long last, science has colonized the traditional domain of the ‘human.’ Given this, what should we expect will follow? The line I’ve taken turns on what I’ve called the ‘Big Fat Pessimistic Induction.’ Since science has, without exception, utterly revolutionized every single prescientific domain it has annexed, we should expect that, all things being equal, it will do the same regarding the human–that the traditional humanities are about to be systematically debunked.

Pigliucci argues that this is nonsense. He recognizes the stakes well enough, the fact that the issue amounts to “more than a turf dispute among academics,” that it “strikes at the core of what we mean by human knowledge,” but for some reason he avoids any consideration, historical or theoretical, of why there’s an issue at all. According to Pigliucci, little more than the ignorance and conceit of the parties involved lies behind the impasse. This affords him the dialectical luxury of picking the softest of targets for his epistemological defence of the humanities: the ‘greedy reductionism’ of E. O. Wilson. By doing so, he can generate the appearance of putting an errant matter to bed without actually dealing with the issue itself. The problem is that the ‘human,’ the subject matter of the humanities, is being scientifically cognized as we speak. Pigliucci is confusing the theoretically abstract question of whether all knowledge reduces to physics with the very pressing and practical question of what the sciences will make of the human, and therefore the humanities as traditionally understood. The question of the epistemological legitimacy of the humanities isn’t one of whether all theories can somehow be translated into the idiom of physics, but whether the idiom of the humanities can retain cognitive legitimacy in the wake of the ongoing biomechanical renovation of the human. It’s not a question of ‘reducing’ old ways of making sense of things so much as a question of leaving them behind the way we’ve left so many other ‘old ways’ behind.

As it turns out, the question of what the sciences of the human will make of the humanities turns largely on the issue of intentionality. The problem, basically put, is that intentional phenomena as presently understood out-and-out contradict our present, physical understanding of nature. They are quite literally supernatural, inexplicable in natural terms. If the consensus emerging out of the new sciences of the human is that intentionality is supernatural in the pejorative sense, then the traditional domain of the humanities is in dire straits indeed. True or false, the issue of reductionism is irrelevant to this question. The falsehood of intentionalism is entirely compatible with the kind of pluralism Pigliucci advocates. This means Pigliucci’s critique of reductionism, his ‘demolition project,’ is, well, entirely irrelevant to the practical question of what’s actually going to happen to the humanities now that the sciences have scaled the walls of the human.

So in a sense, his entire defence consists of smoke and mirrors. But it wouldn’t pay to dismiss his argument summarily. There is a way of reading a defence that runs orthogonal to his stated thesis into his essay. For instance, one might say that he at least establishes the possibility of non-scientific theoretical knowledge of the human by sketching the limits of scientific cognition. As he writes of mathematical or logical ‘facts’:

take a mathematical ‘fact’, such as the demonstration of the Pythagorean theorem. Or a logical fact, such as a truth table that tells you the conditions under which particular combinations of premises yield true or false conclusions according to the rules of deduction. These two latter sorts of knowledge do resemble one another in certain ways; some philosophers regard mathematics as a type of logical system. Yet neither looks anything like a fact as it is understood in the natural sciences. Therefore, ‘unifying knowledge’ in this area looks like an empty aim: all we can say is that we have natural sciences over here and maths over there, and that the latter is often useful (for reasons that are not at all clear, by the way) to the former.

The thing he fails to mention, however, is that there’s facts and then there’s facts. Science is interested in what things are and how they work and why they appear to us the way they do. In this sense, scientific inquiry isn’t concerned with mathematical facts so much as the fact of mathematical facts. Likewise, it isn’t so much concerned with what Pigliucci in particular thinks of Britney Spears as with how people in general come to evaluate consumer goods. As a result, we find researchers using these extrascientific facts as data points in attempts to derive theories regarding mathematics and consumer choice.

In other words, Pigliucci’s attempt to evidence the ‘limits of science’ amounts to a classic bait-and-switch. The most obvious question that plagues his defence has to be why he fails to offer any of the kinds of theories he takes himself to be defending in the course of making his defence. How about deconstruction? Conventionalism? Hermeneutics? Fictionalism? Psychoanalysis? The most obvious answer is that they all but explode his case for forms of theoretical cognition outside the sciences. Thus he provides a handful of what seem to be obvious, non-scientific, first-order facts to evidence a case for second-order pluralism—albeit of a kind that isn’t relevant to the practical question of the humanities, but seems to make room for the possibility of cognitive legitimacy, at least.

(It’s worth noting that this equivocation of levels (in an article arguing the epistemic inviolability of levels, no less!) cuts sharply against his facile reproof of Krauss and Hawking’s repudiation of philosophy. Both men, he claims, “seem to miss the fact that the business of philosophy is not to solve scientific problems,” begging the question of just what kind of problems philosophy does solve. Again, examples of philosophical theoretical cognition are found wanting. Why? Likely because the only truly decisive examples involve enabling scientists to solve scientific problems!)

Passing from his consideration of extrascientific, but ultimately irrelevant (because non-theoretical) non-scientific facts, Pigliucci turns to enumerating all the things that science doesn’t know. He invokes Gödel (which tends to be an unfortunate move in these contexts) and commits the standard over-generalization of that technically specific proof of incompleteness to the issue of knowledge altogether. Then he gives us a list of examples where, he claims, ‘science isn’t enough.’ The closest he comes to the real elephant in the room, the problem of intentionality, runs as follows:

Our moral sense might well have originated in the context of social life as intelligent primates: other social primates do show behaviours consistent with the basic building blocks of morality such as fairness toward other members of the group, even when they aren’t kin. But it is a very long way from that to Aristotle’s Nicomachean Ethics, or Jeremy Bentham and John Stuart Mill’s utilitarianism. These works and concepts were possible because we are biological beings of a certain kind. Nevertheless, we need to take cultural history, psychology and philosophy seriously in order to account for them.

But as was mentioned above, the question of the cognitive legitimacy of the humanities only possesses the urgency it does now because the sciences of the human are just getting underway. Is it really such ‘a very long way’ from primates to Aristotle? Given that Aristotle was a primate, the scientific answer could very well be, ‘No, it only seems that way.’ Science has a long history of disabusing us of our sense of exceptionalism, after all. Either way, it’s hard to see how citing scientific ignorance in this regard bears on the credibility of Aristotle’s ethics, or any other non-scientific attempt to theorize morality. Perhaps the degree we need to continue relying on cultural history, psychology, and philosophy is simply the degree we don’t know what we’re talking about! The question is the degree to which science monopolizes theoretical cognition, not the degree to which it monopolizes life, and life, as Pigliucci well knows—as a writer for the Skeptical Inquirer, no less—is filled with ersatz guesswork and functional make-believe.

So, having embarked on an argument that is irrelevant to the cognitive legitimacy of the humanities, providing evidence merely that science is theoretical, then offering what comes very close to an argument from ignorance, he sums up by suggesting that his pluralist picture is indeed the very one suggested by science. As he writes:

The basic idea is to take seriously the fact that human brains evolved to solve the problems of life on the savannah during the Pleistocene, not to discover the ultimate nature of reality. From this perspective, it is delightfully surprising that we learn as much as science lets us and ponder as much as philosophy allows. All the same, we know that there are limits to the power of the human mind: just try to memorise a sequence of a million digits. Perhaps some of the disciplinary boundaries that have evolved over the centuries reflect our epistemic limitations.

The irony, for me at least, is that this observation underwrites my own reasons for doubting the existence of intentionality as theorized in the humanities–philosophy in particular. The more we learn about human cognition, the more alien to our traditional assumptions it becomes. We already possess a mountainous case for what might be called ‘ulterior functionalism,’ the claim that actual cognitive functions are almost entirely inscrutable to theoretical metacognition, which is to say, ‘philosophical reflection.’ The kind of metacognitive neglect implied by ulterior functionalism raises a number of profound questions regarding the conundrums posed by the ‘mental,’ ‘phenomenal,’ or ‘intentional.’ Thus the question I keep raising here: What role does neglect play in our attempts to solve for meaning and consciousness?

What we need to understand is that everything we learn about the actual architecture and function of our cognitive capacities amounts to knowledge of what we have always been without knowing. Blind Brain Theory provides a way to see the peculiar properties belonging to intentional phenomena as straightforward artifacts of neglect—as metacognitive illusions, in effect. Unfold the dimensions of missing information folded away by neglect, and the first person becomes entirely continuous with the third—the incompatibility between the intentional and the causal is dissolved. The empirical plausibility of Blind Brain Theory is an issue in its own right, of course, but it serves to underscore the ongoing vulnerability of the humanities, and therefore, the almost entirely rhetorical nature of Pigliucci’s ‘demolition.’ If something like the picture of metacognition proposed by Blind Brain Theory turns out to be true, then the traditional domain of the humanities is almost certainly doomed to suffer the same fate as any other prescientific theoretical domain. The bottom line is as simple as it is devastating to Pigliucci’s hasty and contrived defence of ‘who knows what.’ How can we know whether the traditional humanities will survive the cognitive revolution?

Well, we’ll have to wait and see what the science has to say.

 

Life as Perpetual Motion Machine: Adrian Johnston and the Continental Credibility Crisis

by rsbakker

In Thinking, Fast and Slow, Daniel Kahneman cites the difficulty we have distinguishing experience from memory as the reason why we retrospectively underrate our suffering in a variety of contexts. Given the same painful medical procedure, one would expect an individual suffering for twenty minutes to report a far greater amount of pain than an individual suffering for half that time or less. Such is not the case. As it turns out, duration has “no effect whatsoever on the ratings of total pain” (380). Retrospective assessments, rather, seem determined by the average of the pain’s peak and its coda.
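
To see how that peak-end arithmetic diverges from a duration-weighted tally, here is a toy sketch (the numbers are invented for illustration, not Kahneman’s data):

```python
# Pain sampled once per minute for two hypothetical procedures.
short_proc = [2, 4, 8, 7]              # 4 minutes, ends at high intensity
long_proc  = [2, 4, 8, 7, 5, 4, 3, 2]  # 8 minutes, tapers off gently

def total_pain(samples):           # what the experiencing self endures
    return sum(samples)

def remembered_pain(samples):      # peak-end average: what memory keeps
    return (max(samples) + samples[-1]) / 2

for name, p in (("short", short_proc), ("long", long_proc)):
    print(f"{name}: total {total_pain(p)}, remembered {remembered_pain(p)}")
# short: total 21, remembered 7.5
# long:  total 35, remembered 5.0
```

The longer procedure inflicts more total pain but, because it tapers to a gentle coda, is remembered as the milder one.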

Absent intellectual effort, the default is to remove the band-aid slowly.

Far from being academic, this ‘duration neglect,’ as Kahneman calls it, places the therapist in something of a bind. What should the physician’s goal be? The reduction of the pain actually experienced, or the reduction of the pain remembered? Kahneman provocatively frames the problem as a question of choosing between selves, the ‘experiencing self’ that actually suffers the pain and the ‘remembering self’ that walks out of the clinic. Which ‘self’ should the therapist serve? Kahneman sides with the latter. “Memories,” he writes, “are all we get to keep from our experience of living, and the only perspective that we can adopt as we think about our lives is therefore that of the remembering self” (381). As he continues:

“Confusing experience with the memory of it is a compelling cognitive illusion—and it is the substitution that makes us believe a past experience can be ruined. The experiencing self does not have a voice. The remembering self is sometimes wrong, but it is the one that keeps score and governs what we learn from living, and it is the one that makes decisions. What we learn from the past is to maximize the qualities of our future memories, not necessarily of our future experience. This is the tyranny of the remembering self.” 381

There’s many, many ways to parse this fascinating passage, but what I’m most interested in is the brand of tyranny Kahneman invokes here. The use is metaphoric, of course, referring to some kind of ‘power’ that remembering possesses over experience. But this ‘power over’ isn’t positive: the ‘remembering self’ is no ‘tyrant’ in the interpersonal or political sense. We aren’t talking about a power that one agent holds over another, but rather the way facts belonging to one capacity, experiencing, regularly find themselves at the mercy of another, remembering.

Insofar as the metaphor obtains at all, you could say the power involved is the power of selection. Consider the sum of your own sensorium this very moment—the nearly sub-audible thrum of walled-away urban environs, the crisp white of the screen, the clamour of meandering worry on your margins, the smell of winter drafts creeping through lived-in spaces—and think of how wan and empty it will have become when you lie in bed this evening. With every passing heartbeat, the vast bulk of experience is consigned to oblivion, stranding us with memories as insubstantial as coffee-rings on a glossy magazine.

It has to be this way, of course, for both brute biomechanical and evolutionary developmental reasons. The high-dimensionality of experience speaks to the evolutionary importance of managing ongoing environmental events. The biomechanical complexity required to generate this dimensionality, however, creates what might be called the Problem of Indisposition. Since any given moment of experience exhausts our capacity to experience, each subsequent moment of experience all but utterly occludes the moment prior. The astronomical amounts of information constitutive of momentary experience are all but lost, ‘implicit’ in the systematic skeleton of ensuing effects to be sure, but inaccessible to cognition all the same.

Remembering, in other words, is radically privative. As a form of subsequent experiencing, the machinery involved in generating the experience remembered has been retasked. Accordingly, the question of just what gets selected becomes all important. The phenomenon of duration neglect noted above merely highlights one of very many kinds of information neglected. In this instance, it seems, evolution skimped on the metacognitive machinery required to reliably track and rationally assess certain durations of pain. Remembering the peak and coda apparently packed a bigger reproductive punch.

Kahneman likens remembering to a tyrant because selectivity, understood at the level of agency, connotes power. The automaticity of this selectivity, however, suggests that abjection is actually the better metaphor, that far from being a tyrant, remembering is more a captive to the information available, more a prisoner in Plato’s Cave, than any kind of executive authority.

If any culprit deserves the moniker of ‘tyrant’ here, it has to be neglect. Why do so many individuals choose to remove the band-aid slowly? Because information regarding duration plays far less a role than information regarding intensity. Since the mechanisms responsible for remembering systematically neglect such information, that information possesses no downstream consequences for the machinery of decision-making. What we have traditionally called memory consists of a fractionate system of automata scattered throughout the brain. What little they cull from experiencing is both automatic and radically heuristic. Insofar as the metaphor of ‘tyrant’ applies at all, it applies to the various forms of neglect suffered by conscious cognition, the myriad scotomas constraining the possibilities of ‘remembering experience’—or metacognition more generally.

Kahneman’s distinction wonderfully illustrates the way the lack of information can have positive cognitive effects. Band-aids get pulled slowly because only a spare, evolutionarily strategic fraction of experiencing can be remembered. We only recall enough of experience, it seems safe to assume, to solve the kinds of problems impacting our paleolithic ancestors’ capacity to reproduce. This raises the general question of just what kinds of problems we should expect metacognition—given the limitations of its access and resources—to be able to solve.

Or put more provocatively, the question that philosophy has spent millennia attempting to evade in the form of skepticism: If we don’t possess the metacognitive capacity to track the duration of suffering, why should we expect theoretical reflection to possess the access and capacity to theoretically cognize the truth of experience otherwise? Given the sheer complexity of the brain, the information consciously accessed is almost certainly adapted to various, narrow heuristic functions. It’s easy to imagine specialized metacognitive access and processing adapting to solve specialized problems possessing reproductive benefits. But it seems hard to imagine why evolution would select for the ability to theoretically intuit experience for what it is. Even worse, theoretical reflection is an exaptation, a cultural achievement. As such, we should expect it to be a naive metacognitive consumer, taking all information at face value absent any secondary information regarding that information’s sufficiency.

In other words, not only should we expect theoretical reflection to be blind, we should also expect it to be blind to its own blindness.

It is this question of neurobiological capacity and evolutionary problem-solving that I want to bring to Adrian Johnston’s project to materially square the circle of subjectivity—or as he puts it, to secure “the possibility of a gap between, on the one hand, a detotalized, disunified plethora of material substances riddled with contingencies and conflicts and, on the other hand, the bottom-up surfacing out of these substances of the recursive, self-relating structural dynamics of cognitive, affective, and motivational subjectivity—a subjectivity fully within but nonetheless free at certain levels from material nature” (209).

I’ve considered several attempts by different Continental philosophers to deal with the challenges posed by the sciences of the mind over the past three years: Quentin Meillasoux in CAUSA SUIcide, Levi Bryant in The Ptolemaic Restoration, Martin Hagglund in Reactionary Atheism, and Slavoj Zizek in Zizek Hollywood, each of which has received thousands of views. With Meillasoux I focussed on his isolation of ‘correlation’ as a problematic ontological assumption, and the way he seemed to think he need only name it as such, and all the problems of subjectivity raised by Hume and normativity raised by Wittgenstein could just be swept under the philosophical rug. With Bryant I focussed on the problem of dogmatic ontologism, the notion that naming correlation as a problem somehow warranted a return to the good old preKantian days, where we could make ontological assertions without worrying about our capacity to make such claims. With Hagglund I raised issues with his interpretation of Derrida as an early thinker of ‘ultratranscendental materialism,’ showing how the concepts at issue were intentional through and through, and thus thoroughly incompatible with the natural scientific project. With Zizek I focussed on the way his deflationary ontology of negative subjectivity arising from some ‘gap’ in the real, aside from simply begging all the questions it purported to answer, amounted to an ontologization of what is far more parsimoniously explained as a cognitive illusion.

And, of course, I took the opportunity to demonstrate the explanatory power of the Blind Brain Theory in each case, the way each of these approaches actually exploits various metacognitive illusions to make its case.

Now, having recently completed Johnston’s Prolegomena to Any Future Materialism: The Outcome of Contemporary French Philosophy, I’ve come to realize that these thinkers* are afflicted with the same set of recurring problems, problems which must be overcome if anything approaching a compelling account of the kind Johnston sets as his goal is to be had. These might be enumerated as follows:

Naivete Problem: With the qualified exception of Zizek, these authors seem largely (and in some cases entirely) ignorant of the enormous philosophical literature dealing with the problems intentionality poses for materialism/physicalism. They also seem to have scant knowledge of the very sciences they claim to be ‘grounding.’

No Cognitive Guarantee Problem: These authors take it as given that radical self-deception is simply not a possible outcome of a mature neuroscience–that something resembling subjectivity as remembered is ‘axiomatic.’ In all fairness, this is a common presumption of those critical of the eliminativist implications of the sciences of the brain. Rose and Abi-Rached, for instance, make it the centrepiece of their attempt to defang the neuroscientific threat to social science in their Neuro: The New Brain Sciences and the Management of the Mind. (Their strategy is twofold: on the one hand, they (like some of the authors considered here) give a conveniently narrow characterization of the threat in terms of subjectivity, arguing that the findings of neuroscience in this regard are simply confirming the subject-decentering theoretical insights already motivating much of the social sciences. Then they essentially cherry-pick researchers and commentators in the field who confirm their thesis without giving dissenters a hearing.) The unsettling truth is that wholesale, radical deception regarding who and what we are is entirely possible (evolution only cares about accuracy insofar as it pays reproductive dividends), and actually already a matter of empirical fact regarding a handful of cognitive capacities.

Talk Is Cheap Problem: There is a decided tendency among these authors to presume the effectiveness of metaphysical argumentation, to not only think that ontological claims merit serious attention in the sciences, but that the threat posed is merely ideological and not material. Rehearsing old arguments against determinism (especially when it’s the Second Law of Thermodynamics that needs to be refuted) will make no difference whatsoever once the brain ceases to be a ‘grey box’ and becomes continuous with our technology.

Implausible Continuity Problem: All of these authors ignore what I call the Big Fat Pessimistic Induction: the fact that, all things being equal, we should expect science to revolutionize the human as radically as it has revolutionized every other natural domain now that the brain has become empirically tractable. They assume, rather, that the immunity the opacity of the brain had granted their tradition historically will somehow continue.

Metacognitive Reliability Problem: All of these authors overlook the potentially crippling issue of metacognitive deception, despite the mounting evidence of metacognitive unreliability. I should note that this tendency is common in Analytic Philosophy of Mind as well (but less and less so as the years pass).

Intentional Dissociation Problem: All of these authors characterize the cognitive scientific threat in the narrow terms of subjectivity rather than intentionality broadly construed, the far more encompassing rubric common to Analytic philosophy. Given the long Continental tradition of critiquing commonly held conceptions of subjectivity, the attractiveness of this approach is understandable, but no less myopic.

I think Prolegomena to Any Future Materialism: The Outcome of Contemporary French Philosophy suffers from all these problems—clearly so. What follows is not so much a review—I’ll await the final book of his trilogy for that (for a far more balanced consideration see Stephan Craig Hickman’s serial review here, here, here, and here)—as a commentary on the general approach one finds in many Continental materialisms as exemplified by Johnston. What all these authors want is some way of securing—or salvaging—some portion of the bounty of spirit absent spirit. They want intentionality absent theological fantasy, and materialism absent nihilistic horror. What I propose is a discussion of the difficulties any such project must overcome—a kind of prolegomena to Johnston’s Prolegomena—and a demonstration why he cannot hope to succeed short of embracing the very magical thinking he is so quick to deride.

Insofar as this is a blog post, part of a living, real time debate, I heartily encourage partisans of his approach to sound off. I am by no means a scholar of any of these authors, so I welcome corrections of misinterpretations. Strawmen teach few lessons, and learn none whatsoever. But I also admit to a certain curiosity given the optimistic stridency of so much of Johnston’s rhetoric. “From my perspective,” he writes in a recent interview, “these naturalists are overconfident aggressors not nearly as well-armed as they believe themselves to be. And, the anti-naturalists react to them with unwarranted fear, buying into the delusions of their foes that these enemies really do wield scientifically-solid, subject-slaying weapons.” I’m sure everyone reading this would love to see what kind of walk accompanies this talk! From my quite contrary perspective, the only way a book like this could be written is for the lack of any sustained interaction with those holding contrary views. Write for your friends long enough, and your writing becomes friendly.

In my own terms, Johnston is an explicit proponent of what might be called noocentrism, the last bastion, now that geocentrism and biocentrism have been debunked, of the intuition that we are something special. Freud, of course, famously claimed to have accomplished this overthrow, to have inflicted the third great ‘narcissistic wound,’ when he had only camouflaged the breastworks by carving intentionality along different mortices. Noocentrism represents an umbrella commitment to our metacognitive intuitions regarding the various efficacies of experience, and these are the intuitions that Johnston explicitly seeks to vindicate. He is ‘preoccupied,’ as he puts it, “with constructing an ontology of freedom” (204). Since any such ontology contradicts the prevailing understanding of the natural arising out of the sciences–how can freedom arise in a nature where everything is in-between, a cog for indifferent forces?–the challenge confronting any materialism is one of explaining subjectivity in a materially consistent manner. As he puts it in his recent Society and Space interview:

“For me, the true ultimate test of any and every materialism is whether it can account in a strictly materialist (yet non-reductive) fashion for those phenomena seemingly most resistant to such an account. Merely dismissing these phenomena (first and foremost, those associated with subjectivity) as epiphenomenal relative to a sole ontological foundation (whether as Substance, Being, Otherness, Flesh, Structure, System, Virtuality, Difference, or whatever else) fails this test and creates many more problems than it supposedly solves.”

Naturalizing consciousness and intentionality—or in Johnston’s somewhat antiquated jargon, explaining the material basis of subjectivity—is without a doubt the holy grail, not only of contemporary philosophy of mind, but of several sciences as well. And he is quite right to insist, I think, that any such naturalization that simply eliminates intentional phenomena (along the lines of Alex Rosenberg’s position, say) hasn’t actually naturalized anything at all. If consciousness and intentionality don’t exist as we intuit them, then we need some account of why we intuit them as such. Elimination, in other words, has to explain why elimination is required in the first place.

But global eliminativist materialist approaches (such as Rosenberg’s and my own) are actually very rare. In contemporary debates, philosophers and researchers tend to be eliminativists or antirealists about specific intentional phenomena (qualia, content, norms, and so on) rather than all intentional phenomena. This underscores two problems that loom large over Johnston’s account, at least as it stands in this first volume. The first has to do with what I called the Intentional Dissociation Problem above, the fact that the problem of subjectivity is simply a subset of the larger problem of intentionality. It falls far short of capturing the ‘problem space’ that Johnston purports to tackle. Some philosophers (Pete Mandik comes to mind) are eliminativists about subjectivity, yet realists about other semantic phenomena.

The second has to do with the fact that throughout the course of the book he repeatedly references reductive and eliminative materialisms as his primary rhetorical foil without actually engaging any of the positions in any meaningful way. Instead he references Catherine Malabou’s perplexing work on neuroplasticity, stating that “one need not fear that bringing biology into the picture of a materialist theory of the subject leads inexorably to a reductive materialism of a mechanistic and/or eliminative sort; such worries are utterly unwarranted, based exclusively on an unpardonable ignorance of several decades of paradigm-shifting discoveries in the life sciences” (Prolegomena, 29). Why? Apparently because epigenetics and neural plasticity “ensure the openness of vectors and logics not anticipated or dictated by the bump-and-grind efficient causality of physical particles alone” (29).

Comments like these—and one finds them scattered throughout the text—demonstrate a problematic naivete regarding his subject matter. One could point out that quantum indeterminacy actually governs the ‘determinism’ he attributes to physical particles. But the bigger problem—the truly ‘unpardonable ignorance’—is that it shows how little he seems to understand the very problem he has set out to solve. His mindset seems to be as antiquated as the sources he cites. He seems to think, for instance, that ‘mechanism’ in the brain sciences refers to something nonstochastic, ‘clockwork,’ that the spectre of Laplace is what drives the unwarranted claims of reductive/eliminative materialists. ‘Decades of research revealing indeterminacy, and still they speak of mechanisms?’

As hard as it is to believe, Johnston pretty clearly thinks the primary problem materialism poses for subjectivity is the problem of determinism. But the problem, simply put, is nothing other than the Second Law of Thermodynamics, the exceptionless irreflexivity of the natural. Ontological freedom is every bit as incompatible with the probabilistic as it is with the determined. The freedom of noise is no freedom at all.

This, without a doubt, is his single biggest argumentative oversight, the one that probably explains his wholesale dismissal of any would-be detractor such as myself. His foe here is entropy, not some anachronistic conception of clockwork determinism. Only an appreciation of this allows an appreciation of the difficulty of the task Johnston has set himself. Forget the thousands of years of tradition, the lifetime of familiarity, the system of concepts anchored, forget that Johnston is arguing for the most beloved thing—your exceptionality—set aside all this, and what remains, make no mistake, is a perpetual motion machine, something belonging to reality but obeying laws of its own.

So how does one theoretically rationalize a perpetual motion machine?

The metaphor is preposterous, of course, even though it remains analogous in the most important respect. Johnston literally believes it’s possible to “be a partisan of a really and indissolubly free subject while simultaneously and without incoherence or self-contradiction remaining entirely faithful to the uncompromising atheism and immanentism of the combative materialist tradition” (176). He thinks that certain real, physical systems (you and me, as luck would have it) do not obey physical law, at least not the way every single system effectively explained through the history of natural science obeys physical law.

What makes the metaphor preposterous, however, is the apparent immediacy of subjectivity, the way it strikes us as a source of some kind upon reflection, hemmed not by astronomical neural complexities, but by rules, goals, rationality. In a basic sense, what could be more obvious? This is what we experience!

Or… is it just what we remember?

And here’s the rub. The problem that Johnston has set himself to solve is a dastardly one indeed, far, far more difficult than he seems to imagine. Even with the dazzling assurance of experience, a perpetual motion machine is pretty damn hard thing to explain. The fact that most everyone is dazzled by subjectivity in its myriad guises doesn’t change the fact that they are, quite explicitly, betting on a perpetual motion machine. There’s a reason, after all, why everyone but everyone who’s attempted what Johnston has set out to achieve has failed. “Empty-handed adversaries,” as Johnston claims in the same interview, “do not deserve to be feared.” But if they’re empty-handed, then they must know kung-fu, or something lethal, because so far they’ve managed to kill every single theory such as his!

But when you start interrogating that ‘dazzling assurance,’ when you consider just how much we remember, things become even more difficult for Johnston. Because the fact is, we really don’t remember all that much. Certain things escape memory simply because they escape experience altogether. Our brains, for instance, have no more access to the causal complexities of their own function than they do to those of others, so they rely on powerful, yet imperfect systems, ‘fast and frugal heuristics,’ to solve (explain, predict, and manipulate) themselves and others. When abnormalities occur in these systems, such as those belonging, say, to autism spectrum disorder, our capacity to solve is impaired.
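Since ‘fast and frugal heuristics’ carry real argumentative weight here, a toy example may help. The sketch below is loosely modeled on Gigerenzer’s recognition heuristic (asked which of two cities is larger, infer that it’s the one you recognize); the city names and the exposure history are hypothetical, invented purely for illustration.

```python
# A toy fast-and-frugal heuristic: decide on a single cheap cue
# (recognition) and ignore everything else. The recognition set is a
# stand-in for an individual's exposure history; all names are made up.

recognized = {"Metropolis", "Gotham"}

def larger_city(city_a, city_b):
    """Recognition heuristic: pick the recognized city, else guess."""
    a_known, b_known = city_a in recognized, city_b in recognized
    if a_known and not b_known:
        return city_a
    if b_known and not a_known:
        return city_b
    return city_a  # recognition is silent; effectively a coin flip

print(larger_city("Metropolis", "Smallville"))  # Metropolis
```

The cue is cheap and, in the ecology it was adapted to, reliable, since recognition covaries with size. Transplant it into an environment where fame and size come apart and it fails systematically, and, crucially, it emits no signal that it has failed. That silence is precisely the neglect at issue.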

As the history of philosophy attests, we seem to experience next to nothing regarding the actual function of these systems, or at least nothing we can remember in the course of pondering our various forms of intentional problem solving. All we seem to intuit are a series of problem-solving modes that we simply cannot square with the problem-solving modes we use to engineer and understand mechanical systems. And, most importantly, we seem to experience (or remember) nothing of just how little we experience (or remember). And so the armchair perpetually remains a live option.

I say ‘most importantly’ because this means remembering doesn’t simply overlook its incapacities; it neglects them. When it comes to experience, we remember everything there is to be remembered, always. We rarely have any inkling of what’s bent, bleached, or lost. What is lost to the system does not exist for the system, even as something lost.

Add neglect and suddenly a good number of intentional peculiarities begin to make frightening sense. Why, for instance, should we be surprised that problem solving modes adapted to solve complex causal systems absent causal information cannot themselves make sense of causal information? We are mechanically embedded in our environments in such a way that we cannot cognize ourselves as so embedded, and so are forced to cognize ourselves otherwise, acausally, relying on heuristics that theoretical reflection transforms into rules, goals, and reasons, hazy obscurities at the limits of discrimination.

We are astronomically complicated causal systems that cannot remember themselves as such, amnesiac machines that take themselves for perpetual motion machines for the profundity of their forgetting. At any given moment, what we remember is all there is; there is nothing else to blame, no neuromechanistic background we might use to place our thoughts and experiences in their actual functional context, namely, the machinery that bullets and spirochetes and beta-amyloid plaques can destroy. We do not simply lack the access and the resources to intuit ourselves for what we are (something), we lack the resources to intuit this lack of resources. Thus the myth of perpetual motion, our conviction in what Johnston calls the “self-determining spontaneity of transcendental subjects.”

The limits of remembering, in other words, provide an elegant, entirely naturalistic, explanation for our metacognitive intuitions of spontaneity, the almost inescapable sense that thought has to represent some kind of fundamental discontinuity in being. Since we cannot cognize the actual activity of cognition, that activity—the function of flesh and blood neural circuits that would seize were you to suffer a midcerebral arterial stroke this instant—does not exist for metacognition. All the informational dimensions of this medial functionality, the dimensions of the material, vanish into oblivion, stranding us with a now that always seems to be the same now, despite its manifest difference, a life that is always in the mysterious process of just beginning.

But Johnston doesn’t buy this story. For him, we actually do remember everything we need to remember to theoretically fathom experience. For him, the fact of subjectivity is nothing less than an “axiomatic intuition” (204), as dazzling as dazzling can be. He never explains how this magic might be possible, how any brain could possibly possess the access and resources to fathom its structure and dynamics in anything but radically privative ways, but then he’s not even aware this is a problem (or more likely, he assumes Freud and Lacan have already solved this problem for him). For him, self-determining spontaneity—perpetual motion—is simply a positive fact of what we are. Everything is remembered that needs to be remembered.

The problem, he’s convinced, doesn’t lie with us. So in order to pass his own test, to craft a materialism absent cryptotheological elements that nevertheless explains (as opposed to explains away) all the perplexing phenomena of intentionality, he needs some different account of nature.

He’s not alone in this regard. The vast majority of theorists who tackle the many angles of this problem are intentional realists of some description. But for many, if not most of them, the tactic is to posit empirical ignorance: though we presently cannot puzzle through the conundrums of intentional phenomena, proponents of so-called ‘spooky emergence’ contend, advances in cognitive neuroscience (and/or physics) will somehow vindicate our remembering. Consciousness and intentionality, they believe, are emergent phenomena, novel physical properties pertaining to as yet unknown natural mechanisms.

Johnston also appropriates the term ‘emergentism’ to describe his project, but it’s hard to see it as much more than a ‘cool by association’ ploy. Emergentism provides a way for physicalists (materialists) to redeem something ‘perpetual enough’ short of committing to ontological pluralism. Emergentists, in other words, are naturalists, convinced that “philosophy can and should limit itself to a deontologized epistemology with nothing more than, at best, a complex conception of the cognizing mental apparatus” (204).

This ‘article of faith,’ however, is one that Johnston explicitly rejects, claiming that “thought cannot indefinitely defer fulfilling its duty to build a realist and materialist ontology” (204). So be warned, no matter how much he helps himself to the term, Johnston is no ‘emergentist’ in the standard sense. He’s an avowed ontologist, as he has to be, given the Zizekian frame he uses to mount his theoretical chassis. “[A] theory of the autonomous negativity of self-relating subjectivity always is accompanied, at a minimum implicitly, by the shadow of a picture of being (as the ground of such subjectivity) that must be made explicit sooner or later” (204). Elsewhere, he writes, “I am tempted to characterize my transcendental materialism as an emergent dual-aspect monism, albeit with the significant qualification that these ‘aspects’ and their eradicable divisions (such as mind and matter, the asubjective and subjectivity, and the natural and the more-than-natural) enjoy the heft of actual existence” (180), that is, he’s a kind of dual-aspect monist so long as the dualities are not aspectual!

Insofar as perpetual motion machines (like autonomous subjects) pretty clearly violate nature as science presently conceives it, one might say that Johnston’s ontological emergentism is honest in a manner that naturalistic emergentism is not. As an eliminative naturalist who finds the notion of systems that violate the laws of physics arising as a consequence of those laws ‘spooky,’ I’m inclined to think so. But in avoiding one credibility conundrum he has simply inherited another, namely, our manifest inability to arbitrate ontological claim-making.

Johnston himself recognizes this problem of ontological credibility, insofar as he makes it the basis of his critiques of Badiou and Meillassoux, who suffer, he argues, “from a Heideggerean hangover, specifically, an acceptance unacceptable for (dialectical) materialism of the veracity of ontological difference, or a clear-cut distinction between the ontological and the ontic” (170). ‘Genuine materialism,’ as he continues, “does not grant anyone the low-effort luxury of fleeing into the uncluttered, fact-free ether of ‘fundamental ontology’ serenely separated from the historically shifting stakes of ontic disciplines” (171). And how could it, now that the machinery of human cognition itself lies on the examination table? He continues, “Although a materialist philosophy cannot be literally falsifiable as are Popperian sciences, it should be contestable as receptive, responsive, and responsible vis-a-vis the sciences” (171).

This, for me, is the penultimate line of the book, the thread from which the credibility of Johnston’s whole project hangs. As Johnston poses the dilemma:

“… the quarrels among the prior rationalist philosophers about being an sich are no more worth taking philosophically seriously than silly squabbles between sci-fi writers about whose concocted fantasy-world is truer or somehow more ‘superior’ than the others; such quarrels are nothing more than fruitless comparisons between equally hallucinatory apples and oranges, again resembling the sad spectacle of a bunch of pulp fiction novelists bickering over the correctness-without-criteria of each others’ fabricated imaginings and illusions.” 170

And yet nowhere could I find any explanation of how his own ontology manages to avoid this ‘fantasy world trap,’ to be ‘receptive’ or ‘responsive’ or ‘responsible’ to any of the sciences—to be anything other than another fundamental ontology, albeit one that rhetorically approves of the natural scientific project. The painful, perhaps even hilarious fact of the matter is that Johnston’s picture of intentionality rising from the cracks and gaps of an intrinsically contradictory reality happens to be the very ontological trope I use to structure the fantasy world of The Second Apocalypse!

There can be little doubt that he believes his picture somehow is receptive, responsive, and responsible, thinking, as he does, that his account

“… will not amount merely to compelling philosophy and psychoanalysis, in a lopsided, one-way movement, to adapt and conform to the current state of the empirical, experimental sciences, with the latter and their images of nature left unchanged in the bargain. Merging philosophy and psychoanalysis with the sciences promises to force profound changes, in a two-way movement, within the latter at least as much as within the former.” 179

Given the way science has ideologically and materially overrun every single domain it has managed to colonize historically, this amounts to a promise to force a conditional surrender with words—unless, that is, he has some gobsmacking way to empirically motivate (as opposed to verify) his peculiar brand of ontological emergentism.

But the closest he comes to genuinely explaining the difference between his ‘good’ ontologism and the ‘bad’ ontologism of those he critiques comes near the end of the text, where he espouses what might be called a qualified Darwinianism, one where “the chasm dividing unnatural humanity from natural animality is … not a top-down imposition inexplicably descending from the enigmatic heights of an always-already there ‘Holy Spirit’ … but, instead a ‘gap’ signalling a transcendence-in-immanence” (178). To advert to Dennettian terms, one might suggest that Johnston sees the bad ontologism of Badiou and Meillassoux as offering ‘skyhooks,’ unexplained explainers set entirely outside the blind irreflexivity of nature. His own good ontologism, on the other hand, he conceives phylogenetically, which is to say more in terms of what Dennett would call ‘cranes,’ a complicating continuity of natural processes and mechanisms culminating in ‘virtual machines’ that we then mistake for skyhooks.

Or perhaps we should label them ‘crane-hooks,’ insofar as Johnston envisions a ‘gap’ or ‘contradiction’ written into the very fundamental structure of existence, a wedge that bootstraps subjectivity as remembered…

A perpetual motion machine.

The charitable assumption to make at this point is that he’s saving this bombshell for the ensuing text. But given the egregious way he mischaracterizes the difficulties of his project at the beginning of the text, it’s hard to believe he has much in the way of combustible material. As we saw, he flat out conflates the concrete mechanistic threat—the way the complexities of technology are transforming the complexities of life into more technology—with the abstract philosophical problem of determinism. Creeping depersonalization–be it the medicalization of individuals in numerous institutional (especially educational) contexts, or the ‘nudge’ tactics ubiquitously employed throughout commercial society, or institutional reorganization based on data mining techniques–is nothing if not an obvious social phenomenon. When does it stop? Is there really some essential ‘gap’ between you and all the buzzing, rumbling systems about you, the negentropic machinery of life, the endless lotteries that comprise evolution, the countless matter conversion engines that are stars? Does mechanism, engineered or described, eventually bump into the edge of mere nature, bounce from some redemptive contradiction in the fabric of being? One that just happens to be us?

Are we the perpetual motion machine we’ve sought in vain for millennia?

The fact is, one doesn’t have to look far to conclude that Johnston’s ontologism is just more bad ontology, the same old empty cans strung in a different configuration. After all, he takes the dialectical nature of his materialism quite seriously. As he writes:

“… naturalizing human being (i.e., not allowing humans to stand above-and-beyond the natural world in some immaterial, metaphysical zone) correlatively entails envisioning nature as, at least in certain instances, being divided against itself. An unreserved naturalization of humanity must result in a defamiliarization and reworking of those most foundational and rudimentary proto-philosophical images contributing to any picture of material nature. The new, fully secularized materialism (inspired in part by Freudian-Lacanian psychoanalysis) to be developed and defended in Prolegomena to Any Future Materialism is directly linked to this notion of nature as the self-shattering, internally conflicted existence of a detotalized material immanence.” 19-20

What all this means is that nature, for Johnston, is intrinsically contradictory. Now contradictions are at least three things: first, they logically entail everything; second, they’re analytically difficult to think; and third, they’re conceptually semantic, which is to say, intentional through and through. Setting aside the way the first two considerations raise the spectres of obscurantism and sophistry (where better hide something stolen?), the third should set the klaxons wailing for even those possessing paraconsistent sympathies. Why? Simply because saying that reality is fundamentally contradictory amounts to saying that reality is fundamentally intentional. And this means that what we have here, in effect, is pretty clearly a kind of anthropomorphism, the primary difference being, jargon aside, that it’s a different kind of anthropos that is being externalized, namely, the fragmented, decentred, and oh-so-dreary ‘postmodern subject.’

I don’t care how inured to a discourse’s foibles you become, this has to be a tremendous problem. Johnston writes, “a materialist theory of the subject, in order to adhere to one of the principal tenets of any truly materialist materialism (i.e., the ontological axiom according to which matter is the sole ground), must be able to explain how subjectivity emerges out of materiality—and, correlative to this, how materiality must be configured in and of itself so that such an emergence is a real possibility” (27). Now empirically speaking, we have no clue ‘how materiality must be configured’ because we do not, as yet, understand the mechanisms underwriting consciousness and intentionality. Johnston, of course, rhetorically dismisses this ongoing, ever advancing empirical project as an obvious nonstarter. He has determined, rather, that the only way subjectivity can be naturally understood is if we come to see that nature itself is profoundly subjective…

I can almost hear Spinoza groaning from his grave on the Spui.

If the contradiction of the human can only be ‘explained’ by recourse to some contradiction intrinsic to the entire universe, then why not simply admit that the contradiction of the human cannot be explained? Just declare yourself a mysterian of some kind–I dunno. Johnston devotes considerable space to critiquing Meillassoux for using ‘hyperchaos’ as an empty metaphysical gimmick, a post hoc way to rationalize the nonmechanistic efficacy of intentional phenomena. And yet it’s hard to see how Johnston gives his reader even this much, insofar as he’s simply taken the enigma of intentionality and painted it across the cosmos—literally so!

Johnston references the ‘sad spectacle of a bunch of pulp fiction novelists’ arguing their worlds (170), but as someone who’s actually participated in that (actually quite hilarious) spectacle, I can assure everyone that we, unlike the sad spectacle of Continental materialists arguing their worlds, know we’re arguing fictions. What makes such spectacles sad is the presumption to a cognitive authority that simply does not exist. Arguing the intrinsically dialectical nature of materiality is on a par with arguing intelligent design, save that the intuitions motivating intelligent design are more immediate (they require nowhere near as much specialized training to appreciate), and that its proponents have done a tremendous amount of work to make their position appear receptive, responsive, and responsible to the sciences they would, in the spirit of share-and-share alike, ‘complement with a deeper understanding.’

A contradictory materiality is an anthropomorphic materiality. It provides redemption, not understanding: some decentred-me-friendly world that science has been unable to find. In his attempt to materially square the circle of subjectivity, Johnston invents a stripped down, intellectualized fantasy world, and then embarks on a series of ‘fruitless comparisons between equally hallucinatory apples and oranges’ (170). And how could it be any other way when all of these pulp philosophy thinkers are trapped arguing memories?

Vivid ones to be sure, but memories all the same.

The vividness, in fact, is a large part of the whole bloody problem. It means that no matter how empty our metacognitive intuitions regarding experience are, they generally strike us as sufficient: What, for instance, could be more obvious than our normative understanding of rules? But there’s powerful evidence suggesting our feeling of willing is only contingently connected to our actions (a matter of interpretation). There’s irrefutable evidence that our episodic memory is not veridical. Likewise, there is powerful evidence suggesting our explanations of our behaviour are only contingently related to our actions (a matter of interpretation). Even if you dispute the findings (with laboratory results, one would hope), or think that psychoanalysis is somehow vindicated by these findings (rather than rendered empirically irrelevant), the fact remains that none of the old assumptions can be trusted.

Do you have any metacognitive sense of the symphony of subpersonal heuristic systems operating inside your skull this very instant, the kinds of problems they’ve adapted to solve versus the kinds of problems that can only generate impasse and confusion? Of course not. The titanic investment in time and resources required to isolate what little we have isolated wouldn’t have been required otherwise. We are almost entirely blind to what we are and what we do. But because we are blind to that blindness, we confuse what little we do see with everything to be seen. We therefore become the ‘object’ that cannot be an ‘object,’ the thing that cannot be intuitively cognized in time and space, that strikes us with the immediacy of this very moment, that appears to somehow stand outside a nature that is all-encompassing otherwise.

The system outside the picture, somehow belonging and not belonging…

Or as I once called it, the ‘occluded frame.’

And this just follows from our mechanical nature. For a myriad of reasons, any system originally adapted to systematically engage environmental systems will be structurally incapable of systematically engaging itself in the same manner. So when it develops the capacity to ask, as we have developed the capacity to ask, ‘What am I?’ it will have grounds to answer, ‘Of this world, and not of this world.’

To say, precisely because it is a mechanism, ‘I am contradiction.’

As with the crude thumbnail given above, the Blind Brain Theory attempts to naturalistically explain away the peculiarities of intentionality and phenomenality in terms of neglect. Since we cannot intuit our profound continuity with our environments, we intuit ourselves otherwise, as profoundly discontinuous with our environments. This discontinuity, of course, is the cornerstone of the problem of understanding what we are. Before, when the brain remained a black box, we could take it for granted, we could leverage our ignorance in ways that catered to our conceits, especially our perennial desire to be the great exception to the natural. So long as the box remained sealed, we could speak of beetles without fear of contradiction.

Now that the box has been cracked open with nary a beetle to be found, all those speculative discourses reliant upon our historical ignorance find themselves scrambling. They know the pattern, even if they are loath to speak of it or, like Johnston, prone to denial. Nevertheless, science is nothing if not imperial and industrial. It displaces aboriginal discourses, delegitimizes them in the course of revolutionizing any given domain. Humans, meanwhile, are hardwired to rationalize their interests. When their claims to status and authority are threatened, the moral and intellectual deficiencies of their adversary simply seem obvious. So it should come as no surprise that specialists in those discourses are finally rousing themselves from their ingroup slumber to defend what they must consider manifest authority and hard-earned privileges.

But they face a profound dilemma when it comes to prosecuting their case against science—a dilemma not one of these Continentalists has yet acknowledged. Before, in the good old black box days, they could rely on simple pejoratives like ‘positivism’ and ‘scientism’ to do all the heavy lifting, simply because science reliably fell silent when it came to issues within their domain. The bind they find themselves in now, however, could scarce be more devious. The most obvious problem lies in the revolutionary revision of their subject matter—the thinking human. But the subject matter of the human is also the subject of the matter, the activity that makes the understanding of any subject matter possible. Continentalists, of course, know this, because it provides the basis for their ontological priority claims. They are describing, so they think, what makes science possible. This is what grants them diplomatic transcendental immunity when they take up residence in scientific domains. But Johnston isolates the dilemma—his dilemma—himself when he points out the empty nature of the Ontological Difference.

Foucault actually provides the most striking image of this that I know of with his analysis of the ‘empirico-transcendental doublet called man’ in The Order of Things. What is transpiring today can be seen as a battle for the soul of the darkness that comes before thought. Is it ontological as so much of philosophy insists? Or is it ontic as science seems to be in the process of discovering? So long as our ontic conditions remained informatically impoverished, so long as the brain remained a black box, then the dazzling vividness of our remembering could easily overcome our abstract, mechanistic qualms. We could rely on the apparent semantic density of ‘lived life’ or ‘conditions of possibility’ or ‘language games’ or ‘epistemes’ or so on (and so on) to silence the rumble of an omnivorous science. We could dwell in the false peace of trench warfare, a stalemate between two general, apparently antithetical claims to one truth. As Foucault writes:

“… either this true discourse finds its foundation and model in the empirical truth whose genesis in nature and in history it retraces, so that one has an analysis of the positivist type (the truth of the object determines the truth of the discourse that describes its foundation); or the true discourse anticipates the truth whose nature and history it defines; it sketches it out in advance and foments it from a distance, so that one has a discourse of the eschatological type (the truth of the philosophical discourse constitutes the truth in formation).” 320

Foucault, of course, has stacked the deck in this characterization of epistemological modes—simply posing the (historically contingent) problem of the human in terms of an ‘empirico-transcendental doublet’ is to concede authority to the transcendental—but he was nevertheless astute–or at least evocative–in his assessment of the form of the problem (as seen from within the subject/object heuristic). Again, as he writes:

“The true contestation of positivism and eschatology does not lie, therefore, in a return to actual experience (which rather, in fact, provides them with confirmation by giving them roots); but if such a contestation could be made, it would be from the starting-point of a question which may well seem aberrant, so opposed is it to what has rendered the whole of our thought historically possible. This question would be: Does man really exist?” 322

A question that was both prescient in his day and premature, given that the empirical remained, for most purposes, locked out of the black box of the human. For all his historicism, Foucault failed to look at this dilemma historically, to realize (as Adorno arguably did) that short of some form of reason capable of contesting scientific claims on the human, the domain of the human was doomed to be overrun by scientific reason, and that discourses such as his would eventually be reduced to the status of alchemy or astrology or religion.

And herein lies the rub for Johnston. He thinks the key to a viable Continental materialism turns on getting the ontological nature of the what right, when the problem resides in the how. He says as much himself: anybody can cook up and argue a fantasy world. In my own lectures on fantasy, the most fictional of fictions, I always stress how the anthropomorphic ‘secondary worlds’ depicted could only be counted as ‘fantastic’ given the cognitive dominion of science. This, I think, is the real anxiety lurking beneath his work (despite all his embarrassing claims about ‘empty handed foes’). The only thing preventing the obvious identification of his secondary worlds as fantastic was the scientific inscrutability of the human. Now that the human is becoming empirically scrutable across myriad dimensions, now that the informatic floodgates have been cranked open—now that his claims have a baseline of comparison—the inexorable processes that rendered the anthropomorphic fantastic across external nature are beginning to render internal meaning fantastic as well.

Why do pharmaceuticals impact us? Man is a machine. Why do cochlear implants function? Man is a machine. Why do head injuries so profoundly reorganize experience? Man is a machine. The Problem of Mechanism is material first and only secondarily philosophical. Given what I know about the human capacity for self-deception (having followed the science for years now), I have no doubt that the vast majority of people will find refuge in ‘mere words,’ philosophical or theological rationalization of this or that redeeming ‘axiomatic posit.’ This is what makes the Singularity so bloody crucial to these kinds of debates (and what puts thinkers like David Roden so tragically far ahead of their peers). When we become indistinguishable from our machinery, or when our machines make kindergarten scribbles of our greatest works of genius, will we persist in insisting on our ontological exceptionality then?

Or will the ‘human’ merely refer to some eyeless, larval stage? Will noocentrism be seen as the last of the three great Centripetal Conceits?

Short of discovering some Messianic form of reason—a form of cognition capable of overpowering a scientific cognition that can cure blindness and vaporize cities—attempts to argue Messianic realities a la Continental materialism are doomed to fail before they even begin. Both the how and the what of the traditional humanities are under siege. As it stands, the profundity of this attack can still be partially hidden, so long as one’s audience wants to be reassured and has no real grasp of the process. A good number of high profile researchers are themselves apologists for the humanistic status quo, so one can, as defenders of various religious beliefs are accustomed, pluck many heartening quotes from the enemy’s own mouth. But since it is the rising tide of black-box information that has generated this legitimacy crisis, it seems more than a little plausible to presume that it will deepen and deepen, until finally it yawns abyssal, no matter how many well-heeled words are mustered to do battle against it.

No matter how many Johnstons pawn their cryptotheological perpetual motion machines.

Our only way to cognize our experiencing is via our remembering. The thinner this remembering turns out to be—and it seems to be very thin—the more we should expect to be dismayed and confounded by the sciences of the brain. At the same time we should expect a burgeoning market for apologia, for rationalizations that allow for the dismissal and domestication of the threats posed. Careers will be made, celebrated ones, for those able to concoct the most appealing and slippery brands of theoretical snake-oil. And meanwhile the science will trundle on, the incompatible findings will accumulate, and those of us too suspicious to believe in happy endings will be reduced to arguing against our hopes, and for the honest appraisal of the horror that confronts us all.

Because the bandage of our traditional self-conception will be torn away quicker than you think.

.

* POSTSCRIPT (17/01/2014): Levi Bryant, it should be noted, is an exception in several respects, and it was remiss of me to include him without qualification. A concise overview of his position can be found here.

Ancient and Modern Enlightenment: from Noosphere to Technosphere (by Ben Cain)

by rsbakker

Enlightenment is elite cognition, the seeing past collective error and illusion to a hidden reality. But the ancient idea of enlightenment differs greatly from the modern one, and there may be a further shift in the postmodern era. I’ll try to shed some light on enlightenment by pursuing these comparisons.

.

Ancient Enlightenment: Monism and Personification

Enlightenment in the ancient world was made possible by a falling away from our mythopoeic, nomadic prehistory. In that Paleolithic period, symbolized by the wild Enkidu in the Epic of Gilgamesh and by the biblical Adam in Eden, there was no enlightenment since everything was thoroughly personified and so nothing could have been perceived as unfamiliar or alien to the masses. The world was experienced as a noosphere, filled with mentality. Only after the rise of sedentary civilization in the Neolithic Era, when farming replaced nomadic hunting around 10,000 BCE, which allowed for much larger populations, was there a loss of that enchanted mode of experience, which actually depended on a sort of blissful collective ignorance. As a population increases, the so-called Iron Law of Oligarchy takes hold, which means that social power must be concentrated to avoid civilizational collapse. Dominance hierarchies are established and those in the lower classes become envious of the stronger and more privileged members, who are sure to display their greater wealth and access to women with symbols of their higher status. By doing so, each social class learns its boundaries so that the social structure won’t be overridden, which would invite anarchy.

As Rousseau argued, civilization was the precondition of what we might call the sin of egoism. Contrary to Rousseau, prehistoric life wasn’t utopian; at least, objectively, human life in the Paleolithic Era was likely quite savage. But the ancients seemed to have an easier time perceiving the world in magical terms, judging from the evidence of their religions and extrapolating from what we know of children’s experience, given their similar dearth of content to occupy their collective memory. Thus, even as they killed each other over trifles, the prehistoric people would have interpreted such horror as profoundly meaningful. In any case, I think Rousseau is right that civilization made possible a falling away from a kind of intrinsic innocence. Specifically, the increased social specialization led to an epistemic inequality. As food was stored and more and more people lived together, there was greater need for practical knowledge in such areas as architecture, medicine, sanitation, and warfare. The elites became decadent and alienated from nature, since they found themselves free to indulge their appetites with artificial diversions, as specialists took care of the necessities of survival such as the harvesting of food or the defense of the borders. These elites codified the myths that expressed the population’s mores, but while the uneducated majority clung to their naïve, anthropocentric traditions, the cynical and self-absorbed elites more likely regarded the folk tales as superstitions.

Here, then, was the origin of enlightenment as the opposite of wholesale ignorance—and this was a normative dichotomy. Enlightenment was good and its opposite, mental darkness, was bad. Whereas prior to civilization everyone was enlightened, in a sense, or at least everyone deferred to the shaman’s interpretation of how the spiritual and material worlds are intermixed, civilized people came to believe there’s a secret perspective which alone imparts the ultimate truth, leaving the majority in relative ignorance. As for the content of the enlightened worldview in the ancient world, this was informed by both the egoism and the cynicism that distinguished the hierarchical civilization from the prehistoric past. The content thus had two elements: monism and personification. On the one hand, reality was thought to be a unity, whereas the world appeared to be a multiplicity. Enlightenment was the ability to see past the illusion of change, to the underlying timeless interconnection between all events. Again, in the mythopoeic world, there was no distinction between reality and appearance, because mental projections were given equal weight with the material unfolding of events. The world was a magical place. But the enlightened person had to recover a distorted memory of that childlike, mythopoeic vision, as it were, by theorizing a unity beyond the disenchanted multiplicity that confronted the civilized ancients.

On the other hand, ultimate reality was generally personified. So the absolute unity was called God, equated with the self, and often compared to the particular human who actually ruled the land. That is, the civilizational structure was projected onto the spirit world, and the gods were used as symbols to reassure the ancients that their social order was just. There was such personification even in Buddhism, specifically in the Mahayana variety, according to which Bodhisattvas are worshipped and Buddha nature is thought to take not just an inconceivable and thus impersonal form, but ghostly and celestial as well as physical ones.

Ancient enlightenment thus had to reconcile the urge to personify, which was a remnant of the mythopoeic experience that was exacerbated by the advent of egoism even among the masses, and which the elites came to use for political purposes, with the world’s alien, indifferent oneness. That theoretical oneness expressed especially the elites’ growing alienation from nature and their nostalgia for the presumed innocence of the earlier, nomadic period. Monism made egoism out to be preconditioned by ignorance, since if the world were really an ultimate unity, the apparent self’s independence would be an illusion. But because egoism had numerous social and economic causes, the enlightened worldview retained some anthropomorphic projections onto the unity, to rationalize the nature of the civilized individual. There were degrees of enlightenment, so that one or the other factor, impersonal metaphysical unity or personification, predominated. For example, in the Eastern religions, the anthropomorphisms were stripped away as the enlightened person was thought to experience a transcendent unity, in a purified state of consciousness. Alternatively, the monotheistic Western traditions generally took a personal deity to be the highest principle.

.

Modern Enlightenment: Objectivity and Artificialization

The next epochal change was the birth of modern civilization in the European Renaissance and Scientific Revolution, followed by the Enlightenment and the Industrial Revolution. This transition was marked by profound advances in investigative techniques, which presented the educated upper classes with an altogether impersonal world. Instead of being horrified by this new knowledge, modernists relished the opportunity to conquer a material world that has no prior rights, or else they sought refuge in the halfway house of deism. In any case, modernists were forced to reconceptualize the idea of enlightenment. Whereas the ancient kind posited a metaphysical unity that was somehow both transcendent and personal, modernists eventually eliminated personhood altogether, not just in metaphysics but in psychology. And so modern enlightenment is an appreciation of the implications of thoroughgoing metaphysical naturalism. The real world is still a hidden unity and scientists seek to uncover the causal pattern that establishes that unity. Thus, the dichotomy between the reality of the hidden spirit world and the illusion of mundane plurality in the spatiotemporal field of opposites became the split between a rational understanding of nature’s impersonality, as confirmed by the impartiality of cause and effect, and the naïve personification of anything, including ultimate reality or the human self. Enlightened modernists are materialists who think that mind is an illusion and that fundamental reality is bound to be alien to our sensibilities.

However, the conception of enlightenment as a matter of rationality, set off against the darkness of superstition, can’t hold, because rationality is a personal matter which takes for granted the illusion of the personal self. The modern myth of enlightenment as merely the courage to follow the logic and the evidence where they lead can’t be the whole story of the great transition to the modern period. Something else must have happened, not just a rise of rational neutrality, if rationality itself is merely peripheral. Instead of seeing modern enlightenment in terms of the symbol of the Light of Reason, and thus as a mental phenomenon, we should see it as technological: modernists exited the Dark Age through technological advances which, as with the commercial use of electricity, literally made the world brighter. More broadly, modern enlightenment is the expansion of the “Light” of Artificiality, which makes for a wealth of historical data points. After all, what makes a dark age dark is the lack of lasting evidence of the culture’s identity, due to massive illiteracy and the absence of durable technologies that tell the tale. All of that changed with the printing press and the computer, for example. A Bright Age, then, is bright with cultural information, and the light rays should be thought of as being transmitted especially to future historians.

Commercial light bulbs were patented in the late 19th century, although scientists had studied electricity since as early as 1600 CE. The Age of Enlightenment is primarily an 18th-century period, so the world didn’t literally become much brighter during the modern Enlightenment. However, the paradigmatic rationality of Enlightenment intellectuals, especially that of Isaac Newton, led directly to the Industrial Revolution of the late 18th and early 19th centuries, which eventually yielded the light bulb. So we should look at modern enlightenment as beginning with the myth of rationality and giving way to wonder at the undeniable reality of recent technological advance. First came the light of Reason; then scientists realized that personhood and thus reason are illusory. But all along, the modern process was underway that replaced the darkness of nature with the light of artificiality (with technological incarnations of culture which endure and testify to our historical identity). Thus, modern enlightenment is only inchoately the dichotomy between neutral (non-personifying) reason and ignorance; the real distinction is between natural, pristine reality, which is dark and monstrous precisely because of its impersonality, and the light we bring to the world by impressing our stamp into it—not subjectively through mere theological interpretation or magical supposition, as in the mythopoeic period, but through the inexorable, objective spread of modern technology.

What’s monumental about modernity isn’t that some white male Europeans learned to think more rigorously, thanks to the scientific methods they invented. Of course, there are such methods, but modern enlightenment shouldn’t be personalized. When you characterize the new kind of enlightenment in that way, you’re left with incoherence, since naturalism won’t support naïve personification. Instead, modern enlightenment must be thought of as a great widening of perspective, so that instead of projecting our ego onto indifferent nature, we eliminate our ego through existential encounters with nature’s monstrosity, which humiliate us and do away with our pretensions. Thus vacated, the real world is free to flow through us, as it were. In this case, the glory goes not to the great scientists, however the exoteric history of modernity is told; the scientific methods, for example, must be part of nature’s self-overcoming on our planet, due to a shift from biological processes to artificial ones.

Scientific methods of thought are algorithms which presage the functions of high technology, as in the computer. In other words, before mass technology there was massive regimentation of intellectual life, whereas prior to the Scientific Revolution, social regimentation was confined to the army, government, farming, and the like, while the business of discovering the nature of reality was still a free-wheeling affair. Ancient philosophy was mostly an artistic kind of speculation, although there are protoscientific aspects of ancient Greek and Indian philosophies. The Presocratics, for example, followed the logic of their hypotheses, however counterintuitive those hypotheses may have been. But what made the Scientific Revolution so special, objectively speaking, was a social transformation. Instead of our being ruled mainly by biological norms, such as the instinct of preserving the genes through sexual reproduction, which were thinly rationalized by the art of myth-making, a new dynamic was introduced: what Jacques Ellul called the necessity of efficiency as a matter of technique.
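
To see what it means to call a method of thought an algorithm, here’s a minimal sketch in Python, entirely my own invention rather than anything from Ellul: a loop that proposes candidate hypotheses, tests each against toy observations, and retains whichever survives. The data and the grid of candidates are contrived for the illustration.

```python
# A sketch of a "method of thought" as a mechanical procedure:
# propose candidate hypotheses, test them against observations,
# retain whichever survives.  The toy data are invented: y = 2x + 1.

observations = [(0, 1), (1, 3), (2, 5), (3, 7)]  # (x, y) pairs

def error(slope, intercept):
    """The 'test' step: sum of squared prediction errors against the data."""
    return sum((slope * x + intercept - y) ** 2 for x, y in observations)

# The 'propose' and 'retain' steps: enumerate a grid of candidate lines
# and keep the one that best survives testing.
best = min(
    ((s, i) for s in range(-5, 6) for i in range(-5, 6)),
    key=lambda h: error(*h),
)
print(f"surviving hypothesis: y = {best[0]}x + {best[1]}")  # y = 2x + 1
```

Nothing in the loop appeals to meaning or insight; it’s regimented technique all the way down, which is the sense in which the impersonal can be said to flow through the method.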

All species employ techniques, because they’re adapted to their environment, but the Scientific Revolution was the birth of an impersonal, regimented subculture of cognitive elites, one that’s modeled more and more on the machines made possible by that cognitive labour. In place of personification, mystification, or artistic speculation, there’s surrender to rational technique, to algorithms, and to the other scientific methods (public and repeatable testing of hypotheses, mathematical precision, and so on). It’s as though in depersonalizing ourselves, thanks to skepticism, the disempowerment of the Catholic Church, and so forth, we allowed nature’s impersonality to flow more easily through our social structures. Whereas hitherto, our bodies were governed by evolutionary norms and our minds were consumed by myths and illusions of personhood, which we projected onto nature so that we became doubly deluded, modernists abandoned personification, which freed the mind to mimic what the rest of the universe is doing, namely to flow in what I call an undead (impersonal but not inert) fashion.

We still personify techniques when we think of them teleologically, as having a mentally represented goal. However, even if there’s no divine mind desiring nature to end in some way, natural processes do have ends, which is just to say that there are natural processes, as such, or changes that have initial conditions, transitional periods, and probable points of termination. The more we understand nature, the wider our field of vision until we think of everything as a cosmic whole having a beginning (the Big Bang), a middle (evolution and complexification in space and time), and an end state, such as the universe’s heat death. What we call the scientific methods, then, or the more efficient modern techniques of rational thought, are really—according to the enlightened modernist—an inflowing of some underlying natural process besides biological evolution, one which begins with ultra-rational cognition and continues with the elimination of the noosphere and with the transformation of the biosphere into the technosphere.

.

Counter-Enlightenment and the Return of Mythopoeic Reverie

As long as we’re depersonalizing enlightenment, we should note the Counter-Enlightenment period which leads from the Romantics and other early critics of modern hyper-rationality to postmodern relativism and general jadedness. I won’t attempt to adjudicate this debate here, but I want to close by reflecting on whether the Counter-Enlightenment should be interpreted as an omen indicating that modern enlightenment will itself be transformed. Again, if we ignore the psychological and social levels of inquiry, since an enlightened modernist must regard them as misleading, we can look at historical developments as stages of some larger process. Natural selection explains the design of living bodies, but not the cultural shifts between elite forms of cognition. From mythopoeic animism, to the middle ground of ancient mystical theism, to modern naturalism, there’s a clear elimination of personhood from grand theories. Moreover, there’s exponential progress in technical innovation, as modernists have come to divorce rationality from artistic interpretation. Rather than seeing herself as similar to a shaman, in being a wise person, healer, or hero for venturing into the unknown, an enlightened modernist is more likely to think of herself as a glorified calculator. Modern cognition is hyper-rational in that logic for us is demythologized, and the sciences are separate from the arts and from the humanities, which means that scientific cognition is inhuman (objective and neutral). Science is thus the indwelling of natural mechanisms, due to a breakdown in resistance from religious delusions, resulting in the perfection of the artificial world. Modern geniuses are distorted mirrors held up to undead nature, the reflected image being a technological bastardization of the monstrous original.

And yet we may be witnessing here a cycle rather than a linear progression. Technology may allow us to recover the mythopoeic union of object and subject, so that modern objectivity overcomes itself through its technological progeny. After all, the artificial world caters to our whims and so exacerbates egoism and the urge to personify. Whereas modern enlightenment began with a vision of a lifeless, mechanical universe, the postmodern kind is much less arid and austere. This is because postmodernists are immersed in an artificial world which turns fantasies into realities on a minute-by-minute basis, thus perhaps fulfilling the promise of mythopoeic speculation. For example, if you’re hungry, you may ask your smartphone where the nearest restaurant is and that phone will speak to you; next, you’ll follow the signs in your car which adjusts to your preferences in a hundred ways, and you’ll arrive at the restaurant and be served without having to hunt or cook the animal yourself. The prehistoric fantasy was that nature is alive. Modernists discovered that everything is at best undead and certainly devoid of purpose or of mental, as opposed to biological, life. But perhaps postmodernists are realizing that the world was undead whereas it’s now being imbued with purpose and brought to nonbiological life by us through technology. Instead of mythologizing the world, we postmodernists artificialize it, and whereas natural mechanisms train us to be animals following evolutionary rhythms, artificial mechanisms may train us to be something else entirely, such as infantilized consumers that recapture the prehistoric sense of being at the world’s all-important center, thanks to our history of taming the hostile wilderness.

Scientism and the Artistic Side of Knowledge (by Benjamin Cain)

by rsbakker

.

How should someone who accepts the scientific picture think of the relation between the arts and the sciences? By “scientific picture” I mean the content of scientific theories, of course, but also the scientific methods of explanation and the questions that can be answered by those methods. One option, which I’ll call “scientism,” is to say that scientific explanations are the only stories worth telling, that if a statement can’t be tested or translated into precise, mathematical language, the statement should have no part in our view of what’s real. I’ll call a defender of scientism a scientific absolutist, since this defender says the scientific picture of reality is complete in that it exhausts everything we should say about the world; plus, “scientific imperialist,” which is sometimes used here, is pejorative and “scientist” is taken. Scientism is opposed to what I’ll call “pluralism,” to the view that scientific methods aren’t the only worthwhile ways of talking about the real world.

.

Is Scientism Coherent?

There’s some reason to think that scientism isn’t a stable option, after all. The question is how exactly the scientistic thesis should be formulated. Let’s assume, for example, that the scientific picture includes Scott Bakker’s Blind Brain Theory or at least some theory in cognitive science that fulfills our worst fear about the conflict between what scientists say we are and what we intuitively, traditionally assume we are. In particular, let’s assume that the folk ideas of meaning and values are incompatible with science. That is to say, symbols don’t relate to the world in the way we naively think they do and nothing is really good or bad. On the contrary, let’s assume that cognitive scientists will soon be able to explain precisely how these folk illusions arise, in terms of biochemical processes. And we can even assume, then, that that knowledge will be disseminated in the business community, enabling the elites to exploit those processes as far as the law will allow. Just as scientists have no need of the God hypothesis, there will be no scientific reason to speak of the meaning of symbols, the truth of statements, or the value of anything. These folk ways of speaking will be deflated. To be sure, they might persist, just as there are still theists long after the dawn of the Age of Reason, but the folk concepts won’t add to the scientific picture of reality, they’ll make no sense within that picture, and they’ll be undercut by the scientific explanation of their appearance.

Notice that were the scientific way of speaking of the folk concepts to presuppose those concepts, scientism would undercut itself more than anything else. By “presuppose” here I mean to assume as part of scientism’s story of what’s going on. A scientific absolutist can grant that so-called meanings and values exist (as well as consciousness, freewill, and the other elements of the folk view of us), but the absolutist can’t endorse the folk way of speaking of these things. (In philosophy of language jargon, the absolutist can grant the extension but not the intension of “meaning,” “value,” and so on, which is to say that she can grant that those words apply to something, without subscribing to the way those words picture that thing.) So instead of saying that a symbol’s meaning is its representational relationship to what the symbol’s about, the absolutist might say that that relationship is an illusion caused by the brain’s ability only to caricature its real, neurological processes when the brain resorts to intuition or to any discourse that posits something other than a field of causally interacting material bodies.

But I think it’s difficult to sustain a counterintuitive way of speaking of folk concepts. For example, how would the absolutist define “illusion”? The intuitive, folk way would be to say that an illusion is a part of reality that a creature’s naturally led to misunderstand. Thus, when a stick in water appears bent, the appearance is an illusion because the stick is really straight and so there’s a mismatch between the perception and the reality. Now it’s just that sort of alleged mismatch which the absolutist is trying to call an illusion, which is to say that the absolutist needs a causal, counterintuitive idea of illusion to explain away the representational and normative folk understanding of the difference between reality and illusory (erroneous, misleading) appearance. So while the folk psychologist has the (seemingly unscientific) concepts of meaning, truth, and value at her disposal, the absolutist dispenses with those concepts, perhaps by redefining the relation between reality and illusion. So what would that redefinition entail?

Presumably, the absolutist would be able to explain how an illusion arises in causal terms. But even if we know exactly how the appearance of the bent stick in water is caused, in terms of how the brain processes the light that refracts as it passes through the water, do we thereby know everything there is to know about the effect of that process, that is, the appearance of that stick? Suppose our scientific knowledge of that process enabled us to predict how that appearance in turn would affect the creature that labours under it. Would this complete account of where an illusion lies in the causal nexus tell us what an illusion is? Whether that causal account would satisfy our curiosity or exhaust everything there is to say about the reality of illusions is a separate issue, to which I’ll soon turn. My worry at present is just about whether the scientific absolutist needs more than a causal understanding of illusions to formulate scientism as the thesis that the scientific picture of reality is the complete picture. In particular, if all we’re entitled to say about illusions is that they’re caused in a certain way and that they have certain effects, we certainly can’t infer that illusions are bad or therefore that a story which speaks favourably of illusions is necessarily worse than any other story.

Of course, as defined, the scientific theory of us would have no business speaking of the difference between “better or worse” or indeed of the act of “speaking of something” in the first place–at least, not if the theory were to employ those notions as they’re intuitively understood. The scientific picture would either eliminate those normative and semantic concepts or replace them with radically different ones. But can the scientific absolutist afford to be so radical? How can we formulate the exclusiveness of the scientific picture–indeed its completeness or its superiority to the intuitive one–without falling back on the normative and semantic notions? From the scientific viewpoint which presents only impersonal causes and effects, nothing would be superior to anything else nor would anything be complete in the sense of being an adequate representation. So is scientism itself a necessarily intuitive idea? How can we speak of the threat that science poses to the commonsense view of ourselves, once we accept the scientific picture which dispenses with the very notions that seem instrumental in making the relevant distinctions?

Well, we can start by looking at history and appreciating that there’s certainly been some conflict between our naive worldview and the one that scientists have developed. We can then induce that because scientific progress has left behind plenty of wreckage in the form of abandoned intuitions about how the world works, eventually no such intuitions will be left standing; that is, there will be nothing left merely for us to intuit, because the complete scientific story of what causes what will be at our fingertips–assuming our species survives long enough to complete the scientific picture, of course. In that case, the scientific account will eventually be the only one that’s actually used. Notice how this formulation avoids the normative talk of science’s superiority. Instead of saying that the scientific picture is a better representation than the prescientific one, a comparison to which the scientific absolutist isn’t entitled, we can say that as a matter of sheer causality, one way of talking will endure while all others will be left behind. This is to say only that science will persist in the natural process in which we engage with the world, whereas nonscientific narratives will not last as long. Crudely put: in a pissing contest, science wins.

Now if that’s all scientism amounts to, I see no illicit presuppositions in it, no hidden appeals to prescientific notions that are no part of the scientific picture. However, we’re not out of the woods yet, since now we should wonder whether that scientific picture of reality would be complete. If we know that all prescientific accounts would eventually be abandoned as a result of the unfolding of a natural process, do we thereby know why that would happen? We’d know that one thing would lead to another and some material bodies in the universe (naively thought of as persons) would stop engaging in some form of behaviour (talking about meaning and value), but would this be a complete theory of what’s going on in that part of the world?

Of course, our intuitions scream “No!” because we’ve evolved the instinct of seeing psychological and social patterns wherever we look, and thus, given just that dry causal story of science’s ultimate victory, we’d beg to be told why those future people would choose to favour the scientific picture to the exclusion of all other viewpoints. And then Pandora’s Box would be opened and out would fly all the intuitive concepts: would science prove to be better than all other viewpoints according to some epistemic or aesthetic ideals (the values of truth, simplicity, elegance, fruitfulness, and so on)? Would science be superior in pragmatic terms, empowering people more than any other viewpoint and more efficiently satisfying their desires? But those questions about the reasons why a process turns out as it does call for answers framed by the intuitive concepts. Therefore, someone looking just at the scientific picture would be as dismissive of those questions as she’d be of those concepts. If you think it’s hard to imagine how anyone could be so dismissive rather than seeing the point of asking the epistemic or folk psychological questions, you doubt scientism and may be in the grip of the illusion our species is destined to see past. In any case, the scientific absolutist must maintain that any semantic, normative, or pragmatic reason why science would outlast intuition is as excluded from the scientific theory as is any other prescientific notion.

So if scientism is the contention that science potentially provides us with the complete theory of the real world, we should interpret this as saying not just that science is the way of completing the causal account of the world, but that the causal account is the final one offered by a sufficiently intelligent and long-lived species, since any such species is part of a natural process that compels it to abandon intuition in favour of science.

The completeness of the scientific theory therefore isn’t a matter of semantic adequacy or of normative superiority, but of finality and endurance, which are matters merely of measurement. Take any sequence of causes and effects and you have the potential to measure which properties last longer than others. For example, have a look at those sped-up videos of people walking through a city, so that the individuals are blurred together, allowing the viewer to pick up on patterns such as whether brown hair is more prevalent than blonde in that region or how often people stop at a certain spot. This must be the sort of comparison that’s left to the scientific absolutist when she says that science threatens not just theism but all our commonsense notions, including our notions of meaning and morality. What she must mean isn’t that science is closest to the Truth or even that science is more useful than commonsense. If she appeals to those intuitions as part of what the scientific picture alone will compel us to say, she contradicts what she says about the counterintuitiveness of science. No, scientism as I’ve defined it implies only that the denuded, amoral, meaningless scientific picture is our destiny, our final portal on the world because of the natural process we’ve been part of all along.

And so the horror of scientific progress is that through science, nature inexorably dehumanizes us, stripping us of our cherished intuitions so that we’re blinded to illusions and we come to think like a computer that’s capable only of calculations and quantitative measurements, not of qualitative judgments. This is the old romantic complaint about science, except that instead of saying that science robs the world of its beauty, the scientistic point is that the real world is neither beautiful nor ugly and that that world will force us to behold that neutrality. Our illusions that we prop up with intuitions and cognitive biases are fantasies we distract ourselves with even as nature’s impersonality is all around us. Not even our personal identity will preserve us from that dread vision of the undead god, which is the mindlessly evolving natural plenum, the field of colliding material bodies, if our personhood too is an illusion of which science will relieve us.

Finally, there’s one more objection along these lines. As I said, the scientific picture includes the content of scientific theories but also the practice of science itself that produces them. After all, the point of scientism isn’t just that people will possibly have a complete understanding of nature, but that science alone makes that understanding likely. But at least as understood intuitively, scientific methods involve epistemic, aesthetic, and pragmatic standards that scientists want their theories to meet. So while we presently indulge in the prescientific talk of normativity, the suspicion is that science tends to conflict with our intuitions. And yet if science is the only kind of knowledge, how will scientists understand their scientific practice scientifically, if such methods appear normative? For the statement of scientism to be coherent, that appearance of how science itself works would likewise have to be illusory and so science would have to be part of a natural process that can be understood in purely causal, value-neutral terms.

.

Intuition and Analogy: The Artistic Side of Knowledge

So much, then, for the coherence of scientism. One doubt that now presents itself is whether we should think it likely that science will be the last narrative standing, given that science is a product of brains like ours, brains of mammals that evidently enjoy and perhaps even need their illusions to survive long enough to complete the scientific picture. If we are as cognitive scientists describe, our rationality is quite imperfect; we’ve evolved numerous mental shortcuts, called “heuristics,” which produce cognitive biases, and these are innate, so they persist even though we’ve evidently learned to circumvent them with scientific methods. This is to say that the induction outlined above may be flawed. Yes, science has refuted a great many of our intuitive speculations, but this doesn’t mean science has made us more rational on the whole. One by one, our speculations are exposed as fallacious or delusory, but what if our capacity for such speculation is inexhaustible? What if the conflict between science and intuition isn’t zero-sum? Just because science advances, that needn’t mean intuition retreats. Indeed, most people are still religious even after the Scientific Revolution and the spread of the internet and communications technologies. Perhaps the growth of Islam is only a temporary backlash, but the larger point remains that although most people today are at least potentially better informed than the majority in any other period, that doesn’t mean we’re less attached to our delusions. Indeed, our delusions are still manifold, be they religious, political, cultural, or personal (self-directed).

Notice the reason this doubt is relevant to scientism as defined: to ensure that scientism is coherent, I’ve had to reduce that thesis to a prediction about how we would all be talking, were our species to survive long enough to complete science. This is strictly a matter of probability, of what will likely be left at the end of a causal chain, ceteris paribus. We’ve had to eliminate folk notions of the truth or usefulness of science since although we may presently indulge in such illusions, the scientific absolutist is interested in science’s ultimate relation to rival conceptions of the world and in the final analysis, were only science left standing, the suspicion is that the finished scientific picture will provide no grounds for intuiting science’s superiority, since those naturalists will understand everything in terms of pure causality. Therefore, scientism is weak if its prediction is unlikely, and we have plenty of reasons to doubt that prediction–indeed, sterling scientific reasons. As I said, we’ve evolved mental modules that compel us to read psychological and social patterns into data, thus compelling us to survive by working together in groups. This is why we personify our surroundings and why we see ghosts, goblins, and gods around every corner. Mind you, we also have evidence in favour of scientism, including the fact that modern science is still a relatively young discipline and there’s also the transhuman prospect of using technology to alter our brains or genes, so that we’ll come to prefer the scientific picture to the illusions.

This point about transhumanism raises another problem with scientism, though, which is that the prediction is less interesting if it posits a future that’s radically different from the present, because the prediction might as well then invoke a miracle. The induction motivating the fear that science conflicts with our commonsense self-image says that because science–as commonly understood–has steadily undermined so many intuitions, science will eventually undermine them all. There’s no longer any such induction if “science” in the conclusion refers to posthuman science which is dissimilar from the present-day kind, since induction rests on our confidence that the future will be like the past. (That’s why miracles are improbable, according to the philosopher David Hume.) So the scientific absolutist must assume that the science responsible for completing the counterintuitive picture of the world will work like present-day science.

That may imply that we should have at least an inkling of how present-day science can be understood in strictly causal terms, without positing the ideals that motivate research and experimentation. We now understand scientific methods–both at the individual and social levels–in terms of certain epistemic, aesthetic, and pragmatic values that govern certain processes. For example, we think science is eminently rational and this calls to mind a normative view of logical rules we think we ought to follow. Perhaps this view of rationality is illusory and what’s really going on is that our concept of a rule is just a low-resolution caricature of actual neural processes. Perhaps, but I think the absolutist has a burden of proof here to show that the intuitive picture of science doesn’t add to our present understanding of science. The absolutist can’t appeal to a gulf between present and finished science, because that spoils the induction which is a key piece of evidence for the scientistic prediction that science and not any of the normative arts tells us all there is to know about reality. This means the absolutist must show that there’s currently no benefit to thinking of science in normative terms, that this way of thinking really is just an idle, illusory byproduct.

Frankly, what gives me pause here is the persistence with which the intuitive notions crop up even in the scientific picture, and to anticipate a bit, I see this pattern in roughly Kantian terms. For example, the scientistic fear is that science opposes our intuitions, so that our normative view of reason is belied by the cognitive scientific account of so-called neurofunctions, which are naturally selected processes in the brain. I am very suspicious of the biological talk of functions, since I think Darwin showed why the appearance of teleology in organic processes is illusory. And yet the talk persists; indeed, it’s irresistible. But without an intelligent designer, the cryptoteleological talk of biofunctions is misleading. Natural selection means only that environmental conditions don’t kill off the hosts of certain genes, so that those genes keep replicating body types with certain traits that enable them to cope with those conditions. That’s the real causal story in evolutionary biology, so we needn’t appeal to the metaphor of functionality when speaking of neural processes, such as those the brain can’t intuit well and so can only drastically simplify, short of doing science. The metaphor compares the relation between a person and a human-designed artifact to that between God and all his creatures. Now, that metaphor may be undermined by Darwin, but it does speak to a ubiquitous practice we have of using metaphors for the cognitive purpose of exploiting our grasp of the familiar.

This is why even scientific theories are littered with metaphors. Just as the talk of biofunctions is anthropocentric, so too is the talk of mechanisms. The metaphor of the mechanism derives from the deistic assumption that nature is a deterministic machine built by God. Quantum mechanics has undermined the deterministic view of causality and thus the clockwork metaphor, just as natural selection has undermined teleological functionalism. And with the loss of determinism, our concept of causality might have to change. Most physicists think of causality in Platonic terms, as reflecting a timeless, mathematical order, as though laws of nature were spelled out in a Book of Nature. Without a lawgiver, the very notion of a law of nature too becomes an anthropocentric metaphor, an outdated comparison of natural laws with social ones. Perhaps physical laws aren’t timeless but evolve, as the physicist Lee Smolin theorizes. In any case, his picture would require yet another metaphorical stretch of the imagination, a comparison of the evolution of life on our planet with the evolution of universes in a multiverse. Even the concept of a heuristic in cognitive science is a metaphor from computing that, when applied to modules in the brain, mixes up natural and social laws. A heuristic is a programmed rule of thumb or educated guess, which acts as a fall-back plan so that the computer doesn’t have to follow every step of an algorithm when searching for a solution in a poorly understood domain. Certain neural processes are at best similar to heuristics in intelligently-programmed computers.
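
Since the computing sense of “heuristic” is doing real work here, a minimal sketch may help; the example and its contrived coin denominations are my own, not drawn from the cognitive science literature. A greedy rule of thumb answers quickly by skipping the exhaustive search, and on most inputs it happens to be right:

```python
from itertools import combinations_with_replacement

COINS = [4, 3, 1]  # denominations contrived so the rule of thumb can fail

def greedy_change(amount):
    """Heuristic: always grab the largest coin that still fits -- a fall-back
    rule of thumb that avoids searching every combination."""
    picked = []
    for coin in COINS:
        while amount >= coin:
            amount -= coin
            picked.append(coin)
    return picked

def exhaustive_change(amount):
    """Full algorithm: try every combination, smallest first, and return the
    first (hence fewest-coin) one that adds up."""
    for size in range(1, amount + 1):
        for combo in combinations_with_replacement(COINS, size):
            if sum(combo) == amount:
                return list(combo)
    return []

print(greedy_change(6))      # [4, 1, 1] -- the quick guess needs three coins
print(exhaustive_change(6))  # [3, 3]   -- the exhaustive search finds two
```

The shortcut is fast and usually adequate, but on contrived inputs it misses the best answer, which is roughly what the literature on cognitive biases alleges of our innate mental shortcuts.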

A scientific absolutist will want to remark at this point that these metaphors all betray intuitions that science will eventually overrun, but I think this misses the point. Again, our experience is that the metaphors and intuitions disappear only serially, one by one, but the reservoir of intuition seems bottomless. Moreover, we oversimplify matters when we contrast science with commonsense intuition, since metaphors which build on commonsense are found in science itself. Now, the absolutist can say that intuitions keep popping up in our theories because we evolved to fall prey to the cognitive illusion of projecting our naive self-image onto the unfamiliar. But this raises the question of just what knowledge is supposed to be such that science is the only source of it, according to scientism.

There seem to be at least two sides of knowledge. There’s the quantification side, the ability to measure a phenomenon, to describe it with great precision, which may allow us to predict how the phenomenon will change. If we can predict what something will do under certain conditions, that’s often a sign we understand the thing, but there’s a second side of knowledge which is harder to put into words because it’s just understanding itself. Measuring and predicting how a system works isn’t the same as identifying what the system is in reality. We might encounter an extraterrestrial artifact and be able to predict what it will do if we push one button rather than another, after sufficient trial and error with the object, but we might still not know what the artifact was intended to do. What is it, then, to understand something, to know what it really is? More to the point, does a complete causal account of nature suffice for understanding?

If we can predict everything that will happen in the world, because we have a complete induction based on past experience of which observations followed which other ones as natural processes unfolded, do we understand what’s happening? The computer in philosopher John Searle’s Chinese Room argument can appear to speak a language by following algorithms for displaying certain messages when shown certain other ones, but the computer doesn’t speak the language at the semantic level. The computer blindly follows the algorithms while lacking the concepts that a natural language speaker would typically associate with the language’s vocabulary. The computer can calculate which messages should follow other ones, but it doesn’t understand what’s being said. Moreover, quantum mechanics furnishes us with an impeccable example of how measurement and prediction aren’t the same as understanding. Physicists can measure with great accuracy what happens at the subatomic level, but they have barely any idea what’s really going on there; they interpret the results of the experiments with a number of models (the multiverse, quantum logic, Copenhagen interpretation, and so on), and if any of these models offers a hope that physicists can understand the bizarre findings, that’s because the model translates the exotic mathematical statements into intuitive, natural language.
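
Searle’s scenario can be made painfully concrete in a few lines of code; the rule book and phrases below are my own invention, and a real dialogue system would be vastly more elaborate, but the principle is the same: matching the shapes of symbols to the shapes of other symbols requires none of the concepts involved.

```python
# A toy rendering of the Chinese Room: symbol-shuffling without semantics.
# The rule book and the phrases are invented for illustration.

RULE_BOOK = {
    "ni hao": "ni hao ma?",       # a greeting begets a greeting
    "wo hen hao": "tai hao le!",  # a pleasantry begets a pleasantry
}

def room(message: str) -> str:
    """Match the shape of the input to the shape of an output, as the rule
    book dictates.  Nothing here knows what a greeting is."""
    return RULE_BOOK.get(message, "qing zai shuo yi bian")  # "please repeat"

print(room("ni hao"))      # looks like conversation from the outside...
print(room("wo hen hao"))  # ...but is calculation over uninterpreted symbols
```

From the outside the replies can pass for conversation; on the inside there is only lookup, which is just the gap between prediction and understanding at issue here.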

I think, then, that if the complete scientific picture were to include only a map of all natural processes, without any intuitive metaphors to classify the patterns, the picture would enable scientists to predict and to control processes but not to understand them. The scientific absolutist will say that any so-called understanding supplied by semantics, by usefully categorizing phenomena according to certain cognitive criteria, is yet more illusion which science is bound to overcome. But now we arrive at a mere definitional matter, because this so-called illusion is the way that mammals like us tend to perceive things as a basis for understanding them. The illusion in which a stick appears bent in water isn’t exactly like the illusion of using metaphors to personify alien phenomena, because there’s no interesting variety in the way all our brains process the light exiting from the water, whereas there’s a rich variety in the way we use metaphors and other analogies. This is why translating an advanced use of language is so difficult: mental associations link together in a network of subtle connotations that encompasses a whole cultural way of interpreting the world, and you’re either in a culture or you’re not. Science is more universal and yet is sufficiently subjective that the metaphors in scientific theories express human experience.

So the absolutist is free to define “knowledge” in a way that excludes the work done by intuitions and analogies, but I think this violates the above principle about the need to preserve the scientistic induction by not appealing to posthumanity. There is, after all, a counter-induction, which is that because new intuitions always arise to replace the old ones (even in science), our knowledge will never be intuition-free. The Kantian point about knowledge, then, is that if we tend to anthropomorphize things, to understand them by extending our intuition-based caricature of a self-image onto less familiar parts of the world, using analogies to bridge the emotional gap and to make us comfortable with the alien Other, this tendency is a cognitive faculty in its own right. Our capacities for intuition, speculation, and anthropomorphic prejudgment are filters through which we interpret the world. You can call these interpretations laughable illusions, if you like, but that would be like calling even the finished scientific picture an illusion because that picture is something offered merely by a creature that does what it does rather than doing something else. We mammals do what we do, and that includes cognizing the world at emotional and analogical levels.

Where you have metaphors you have art and thus aesthetic ideals, and this is a serious problem for the scientific absolutist. Recall that scientism is the prediction that science will eclipse the arts when it comes to telling us about the real world. The assumption is that the arts, including philosophy and religion, deal with meaning and values, and so the absolutist infers that knowledge doesn’t require meaning or value. Again, perhaps if you confine knowledge to the abilities to measure, predict, and control, knowledge can be meaningless and amoral, and so fit for dehumanized automata, such as the drones that might make for efficient workers in a crony capitalistic dystopia. But knowledge as it’s been produced all around the world, including in Europe during the Scientific Revolution and since the very beginning of our rational endeavours, has been classified and so understood in intuitive, emotionally comforting terms. We understand things by humanizing them, by looking for patterns and seeing ourselves in those patterns, so that we can feel we’re not so alone after all, that we’re somehow similar to everything that’s knowable by us. Our theories contain metaphors that express intuitive leaps of imagination identifying such similarities, and so our theories are stories we tell. That’s why scientists and mathematicians insist that elegance in theory-building counts in their fields. We’re mammals that enjoy telling each other stories, and this is where scientists benefit from an artistic sensibility, developed in the humanities, to create new theories, new leaps of imagination, and to evaluate which story establishes the best paradigm in revolutionary times. Knowledge isn’t just bean-counting, after all. Reason has an artistic, emotional side; the European Renaissance in the arts set the stage for the modern Scientific Revolution. We’re driven to understand the world in the first place, long after our evolutionary fitness has been secured, because we’re irrationally curious or greedy or we want to minimize our existential angst.

What about the scientific, mechanistic picture which seems to mock our manifest image, our intuitive view of ourselves as rational, free, conscious persons? Here we need to distinguish between elimination and reduction. If science shows that our personal qualities don’t really exist at all, we’ll be in big trouble, but this is next to impossible. Even at the end of Cartesian doubt, if we imagine we’re really brains in vats being tricked by a demon, we’re content to be pragmatic in assuming that such metaphysical reality doesn’t matter, because we live in the apparent world in which we’re rational, free, conscious people. Science has undermined our superstitious prejudices not so much by showing that what our ancestors were talking about was nothing at all, but by explaining the phenomena in more useful ways. Instead of thinking your house is haunted, think of shifts in the earth that make the walls creak. Instead of thinking the sun goes around the Earth, think of it the other way around. Instead of God, have a Big Bang quantum fluctuation. And instead of an immaterial spirit created by God, we have a complex natural history of causes and effects that evolved our brain which gives us certain abilities such as limited rationality, freedom, and consciousness. A mechanistic story about how intuitions form doesn’t eliminate intuitions from the face of the earth; instead, the story redescribes the origin of something that our ancestors spoke of in terms of simple dualism. Indeed, if nature were to build a person, we should expect there’d be a causal story about the process of evolution, but that story wouldn’t supplant the philosophical or religious one that just takes the natural origin for granted as at least a stage in some larger process, a process perceived by mammals that prefer the comfort of illusions.

.

The Remaining Horrors of Scientific Progress

What I’ve just said in the last section amounts to a defense of pluralism. Still, even a pluralist should fear horrors of scientific progress, besides the potential for technological blowback. First, this pluralism shows at best that the scientistic prediction is improbable, but there’s a readily-understood way for the mechanistic picture to be the last one standing, after all. We might forget the artistic side of our thinking. As Nietzsche said, metaphors become concretized over time so that they lose their freshness and they’re eventually taken as literal rather than figurative. We forget the comparisons that gave rise to the metaphors that are implicit in the meaning of our words, and so we think our natural language gives us a transparent window on the world, whereas that language expresses human biases at every turn. Scientists prefer artificial languages that aren’t so burdened by parochial experiences, so even if their theories remain metaphorical, perhaps we’ll stop speaking natural languages, influenced as we are by the computers we interact with more and more. Maybe our imagination, emotion, intuition, and creativity will atrophy as our habits continue to be shaped by our artificial environments. Then again, we’d be looking not so much at a scientific revelation of what we’ve always really been, but at a transformation of human nature for the worse.

Second, although the mechanistic picture needn’t conflict with the intuitive one, since the former can explain how the latter emerges, the two may nevertheless conflict in certain instances. Not all values, ideals, meanings, and metaphors are equal, and so there’s a need for them to cohere with science’s causal picture of how things work. Thus, science continues to challenge our lazy, obsolete intuitions which aren’t so much falsified by the causal theory, but rendered counterproductive and uninspiring in philosophical or religious terms. Ancient myths of supernatural, personal dimensions and vain conceits of our centrality to the world are exceedingly hard to maintain alongside the scientific picture. I don’t think this means we should settle just for the scientific, causal point of view; instead, we should create better myths to satisfy our artistic side.

Brassier’s Divided Soul

by rsbakker

Aphorism of the Day: If science is the Priest and nature is the Holy Spirit, then you, my unfortunate friend, are Linda Blair.

.

And Jesus asked him, “What is your name?” He replied, “My name is Legion, for we are many.”  – Mark 5:9

.

For decades now the Cartesian subject–whole, autonomous and diaphanous–has been the whipping-boy of innumerable critiques turning on the difficulties that beset our intuitive assumptions of metacognitive sufficiency. A great many continental philosophers and theorists more generally consider it the canonical ‘Problematic Ontological Assumption,’ the conceptual ‘wrong turn’ underwriting any number of theoretical confusions and social injustices. Thinkers across the humanities regularly dismiss whole theoretical traditions on the basis of some perceived commitment to Cartesian subjectivity.

My longtime complaint with this approach lies in its opportunism. I entirely agree that the ‘person’ as we intuit it is ‘illusory’ (understood in some post-intentional sense). What I’ve never been able to understand, especially given post-structuralism’s explicit commitment to radical contextualism, is the systematic failure to think through the systematic consequences of this claim. To put the matter bluntly: if Descartes’ metacognitive subject is ‘broken,’ an insufficient fragment confused for a sufficient whole, then how do we know that everything subjective isn’t likewise broken?

The real challenge, as the ‘scientistic’ eliminativism of someone like Alex Rosenberg makes clear, is not so much one of preserving sufficient subjectivity as it is one of preserving sufficient intentionality more generally. The reason the continental tradition first lost faith with the Cartesian and Kantian attempts to hang the possibility of intentional cognition from a subjective hook is easy enough to see from a cognitive scientific standpoint. Nietzsche’s ‘It thinks’ is more than pithy, just as his invocation of the physiological is more than metaphorical. The more we learn about what we actually do, let alone how we are made, the more fractionate the natural picture–or what Sellars famously called the ‘scientific image’–of the human becomes. We, quite simply, are legion. The sufficient subject, in other words, is easily broken because it is the most egregious illusion.

But it is by no means the only one. The entire bestiary of the ‘subjective’ is on the examination table, and there’s no turning back. The diabolical possibility has become fact.

Zipper Back

Let’s call this the ‘Intentional Dissociation Problem,’ the problem of jettisoning the traditional metacognitive subject (person, mind, consciousness, being-in-the-world) while retaining some kind of traditional metacognitive intentionality–the sense-making architecture of the ‘life-world’–that goes with it. The stakes of this problem are such, I would argue, that you can literally use it to divide our philosophical present from our past. In a sense, one can forgive the naivete of the 20th century critique of the subject simply because (with the marvellous exception of Nietzsche) it had no inkling of the mad cognitive scientific findings confronting us. What is willful ignorance or bad faith for us was simply innocence for our teachers.

It is Wittgenstein, perhaps not surprisingly, who gives us the most elegant rendition of the problem, when he notes, almost in passing (see Tractatus, 5.542), the way so-called propositional attitudes such as desires and beliefs only make sense when attributed to whole persons as opposed to subpersonal composites. Say that Scott believes p, desires p, enacts p, and is held responsible for believing, desiring, and enacting. One night he murders his neighbour Rupert, shouting that he believes him a threat to his family and desires to keep his family safe. Scott is, one would presume, obviously guilty. But afterward, Scott declares he remembers only dreaming of the murder, and that while awake he has only loved and respected Rupert, and couldn’t imagine committing such a heinous act. Subsequent research reveals that Scott suffers from somnambulism, the kind associated with ‘homicidal sleepwalking’ in particular, such that his brain continually tries to jump from slow-wave sleep to wakefulness, and often finds itself trapped between with various subpersonal mechanisms running on ‘wake mode’ while others remain in ‘sleep mode.’ ‘Whole Scott’ suddenly becomes ‘composite Scott,’ an entity that clearly should not be held responsible for the murder of his neighbour Rupert. Thankfully, our legal system is progressive enough to take the science into account and see justice is done.

The problem, however, is that we are fast approaching the day where any scenario where Scott murders Rupert could be parsed in subpersonal terms and diagnosed as a kind of ‘malfunction.’ If you have any recent experience teaching public school you are literally living this process of ‘subpersonalization’ on a daily basis, where more and more the kinds of character judgements that you would thoughtlessly make even a decade or so ago are becoming inappropriate. Try calling a kid with ADHD ‘lazy and irresponsible,’ and you have identified yourself as lazy and irresponsible. High profile thinkers like Dennett and Pinker have the troubling tendency of falling back on question-begging pragmatic tropes when considering this ‘spectre of creeping exculpation’ (as Dennett famously terms it in Freedom Evolves). In How the Mind Works, for instance, Pinker claims “that science and ethics are two self-contained systems played out among the same entities in the world, just as poker and bridge are different games played with the same fifty-two-card deck” (55)–even though the problem is precisely that these two systems are anything but ‘self-contained.’ Certainly it once seemed this way, but only so long as science remained stymied by the material complexities of the soul. Now we find ourselves confronted by an accelerating galaxy of real world examples where we think we’re playing personal bridge, only to find ourselves trumped by an ever-expanding repertoire of subpersonal poker hands.

The Intentional Dissociation Problem, in other words, is not some mere ‘philosophical abstraction’; it is part and parcel of an implacable science-and-capital driven process of fundamental subpersonalization that is engulfing society as we speak. Any philosophy that ignores it, or worse yet, pretends to have found a way around it, is Laputan in the most damning sense. (It testifies, I think, to the way contemporary ‘higher education’ has bureaucratized the tyranny of the past, that at such a time a call to arms has to be made at all… Or maybe I’m just channelling my inner Jeremiah–again!)

In continental circles, the distinction of recognizing both the subtlety and the severity of the Intentional Dissociation Problem belongs to Ray Brassier, one of but a handful of contemporary thinkers I know of who’ve managed to turn their back on the apologetic impulse and commit themselves to following reason no matter where it leads–to thinking through the implications of an institutionalized science truly indifferent to human aspiration, let alone conceit. In his recent “The View from Nowhere,” Brassier takes as his task precisely the question of whether rationality, understood in the Sellarsian sense as the ‘game of giving and asking for reasons,’ can survive the neuroscientific dismantling of the ontological self as theorized in Thomas Metzinger’s magisterial Being No One.

The bulk of the article is devoted to defending Metzinger’s neurobiological theory of selfhood as a kind of subreptive representational device (the Phenomenal Self Model, or PSM) from the critiques of Jurgen Habermas and Dan Zahavi, both of whom are intent on arguing the priority of the transcendental over the merely empirical–asserting, in other words, that playing normative (Habermas) or phenomenological (Zahavi) bridge is the condition of playing neuroscientific poker. But what Brassier is actually intent on showing is how the Sellarsian account of rationality is thoroughly consistent with ‘being no one.’

As he writes:

Does the institution of rationality necessitate the canonization of selfhood? Not if we learn to distinguish the normative realm of subjective rationality from the phenomenological domain of conscious experience. To acknowledge a constitutive link between subjectivity and rationality is not to preclude the possibility of rationally investigating the biological roots of subjectivity. Indeed, maintaining the integrity of rationality arguably obliges us to examine its material basis. Philosophers seeking to uphold the privileges of rationality cannot but acknowledge the cognitive authority of the empirical science that is perhaps its most impressive offspring. Among its most promising manifestations is cognitive neurobiology, which, as its name implies, investigates the neurobiological mechanisms responsible for generating subjective experience. Does this threaten the integrity of conceptual rationality? It does not, so long as we distinguish the phenomenon of selfhood from the function of the subject. We must learn to dissociate subjectivity from selfhood and realize that if, as Sellars put it, inferring is an act – the distillation of the subjectivity of reason – then reason itself enjoins the destitution of selfhood. (“The View From Nowhere,” 6)

The neuroscientific ‘destitution of selfhood’ is only a problem for rationality, in other words, if we make the mistake of putting consciousness before content. The way to rescue normative rationality, in other words, is to find some way to render it compatible with the subpersonal–the mechanistic. This is essentially Daniel Dennett’s perennial argument, dating all the way back to Content and Consciousness. And this, as followers of TPB know, is precisely what I’ve been arguing against for the past several months, not out of any animus to the general view–I literally have no idea how one might go about securing the epistemic necessity of the intentional otherwise–but because I cannot see how this attempt to secure meaning against neuroscientific discovery amounts to anything more than an ingenious form of wishful thinking, one that has the happy coincidence of sparing the discipline that devised it. If neuroscience has imperilled the ‘person,’ and the person is plainly required to make sense of normative rationality, then an obvious strategy is to divide the person: into an empirical self we can toss to the wolves of cognitive science and into a performative subject that can nevertheless guarantee the intentional.

Let’s call this the Soul-Soul strategy’ in contradistinction to the Soul-First strategies of Habermas and Zahavi (or the Separate-but-Equal strategy suggested by Pinker above). What makes this option so attractive, I think, anyway, is the problem that so cripples the Soul-First and the Separate-but-Equal options: the empirical fact that the brain comes first. Gunshots to the head put you to sleep. If you’ve ever wondered why ‘emergence’ is so often referenced in philosophy of mind debates, you have your answer here. If Zahavi’s ‘transcendental subject,’ for instance, is a mere product of brain function, then the Soul-First strategy becomes little more than a version of Creationism and the phenomenologist a kind of Young-Earther. But if it’s emergent, which is to say, a special product of brain function, then he can claim to occupy an entirely natural, but thoroughly irreducible ‘level of explanation’–the level of us.

This is far and away the majority position in philosophy, I think. But for the life of me, I can't see how to make it work. Cognitive science has illuminated numerous ways in which our metacognitive intuitions are deceptive, effectively relieving deliberative metacognition of any credibility, let alone its traditional, apodictic pretensions. The problem, in other words, is that even if we are somehow a special product of brain function, we have no reason to suppose that emergence will confirm our traditional, metacognitive sense of 'how it's gotta be.' 'Happy emergence' is a possibility, sure, but one that simply serves to underscore the improbability of the Soul-First view. There are far, far more ways for our conceits to be contradicted than confirmed, which is likely why science has proven to be such a party crasher over the centuries.

Splitting the soul, however, allows us to acknowledge the empirically obvious, that brain function comes first, without having to relinquish the practical necessity of the normative. Therein lies its chief theoretical attraction. For his part, Brassier relies on Sellars’ characterization of the relation between the manifest and the scientific images of man: how the two images possess conceptual parity despite the explanatory priority of the scientific image. Brain function comes first, but:

The manifest image remains indispensable because it provides us with the necessary conceptual resources we require in order to make sense of ourselves as persons, that is to say, concept-governed creatures continually engaged in giving and asking for reasons. It is not privileged because of what it describes and explains, but because it renders us susceptible to the force of reasons. It is the medium for the normative commitments that underwrite our ability to change our minds about things, to revise our beliefs in the face of new evidence and correct our understanding when confronted with a superior argument. In this regard, science itself grows out of the manifest image precisely insofar as it constitutes a self-correcting enterprise. (4)

Now this is all well and fine, but the obvious question from a relentlessly naturalistic perspective is simply, 'What is this 'force' that 'reasons' possess?' And here it is that we see the genius of the Soul-Soul strategy, because the answer is, in a strange sense, nothing:

Sellars is a resolutely modern philosopher in his insistence that normativity is not found but made. The rational compunction enshrined in the manifest image is the source of our ability to continually revise our beliefs, and this revisability has proven crucial in facilitating the ongoing expansion of the scientific image. Once this is acknowledged, it seems we are bound to conclude that science cannot lead us to abandon our manifest self-conception as rationally responsible agents, since to do so would be to abandon the source of the imperative to revise. It is our manifest self-understanding as persons that furnishes us, qua community of rational agents, with the ultimate horizon of rational purposiveness with regard to which we are motivated to try to understand the world. Shorn of this horizon, all cognitive activity, and with it science’s investigation of reality, would become pointless. (5)

Being a ‘subject’ simply means being something that can act in a certain way, namely, take other things as intentional. Now I know first hand how convincing and obvious this all sounds from the inside: it was once my own view. When the traditional intentional realist accuses you of reducing meaning to a game of make-believe, you can cheerfully agree, and then point out the way it nevertheless allows you to predict, explain, and manipulate your environment. It gives everyone what the they want: You can yield explanatory priority to the sciences and yet still insist that philosophy has a turf. Wither science takes us, we need not move, at least when it comes to those ‘indispensable, ultimate horizons’ that allow us to make sense of what we do. It allows the philosopher to continue speaking in transcendental terms without making transcendental commitments, rendering it (I think anyway) into a kind of ‘performative first philosophy,’ theoretically innoculating the philosopher against traditional forms of philosophical critique (which require ontological commitment to do any real damage).

The Soul-Soul strategy seems to promise a kind of materialism without intentional tears. The problem, however, is that cognitive science is every bit as invested in understanding what we do as in describing what we are. Consider Brassier’s comment from above: “It is our manifest self-understanding as persons that furnishes us, qua community of rational agents, with the ultimate horizon of rational purposiveness with regard to which we are motivated to try to understand the world.” From a cognitive science perspective one can easily ask: Is it? Is it our ‘manifest understanding of ourselves’ that ‘motivates us,’ and so makes the scientific enterprise possible?

Well, there’s a growing body of research that suggests we (whatever we may be) have no direct access to our motives, but rather guess with reference to ourselves using the same cognitive tools we use to guess at the motives of others. Now, the Soul-Soul theorist might reply, ‘Exactly! We only make sense to ourselves against a communal background of rational expectations…’ but they have actually missed the point. The point is, our motivations are occluded, which raises the possibility that our explanatory guesswork has more to do with social signalling than with ‘getting motivations right.’ This effectively blocks ‘motivational necessity’ as an argument securing the ineliminability of the intentional. It also raises the question of what kind of game are we actually playing when we play the so-called ‘game of giving and asking for reasons.’ All you need consider is the ‘spectre’ neuromarketing in the commercial or political arena, where one interlocutor secures the assent of the other by treating that other subpersonally (explicitly, as opposed to implicitly, which is arguably the way we treat one another all the time).

Any number of counterarguments can be adduced against these problems, but the crucial thing to appreciate is that these concerns need only be raised to expose the Soul-Soul strategy as mere make-believe. Sure, our brains are able to predict, explain, and manipulate certain systems, but the anthropological question requiring scientific resolution is one of where ‘we’ fit in this empirical picture, not just in the sense of ‘destitute selves,’ but in every sense. Nothing guarantees an autonomous ‘level of persons,’ not incompatibility with mechanistic explanation, and least of all speculative appraisals (of the kind, say, Dennett is so prone to make) of its ‘performative utility.’

To sharpen the point: If we can't even say for sure that we exist the way we think, how can we say that our brains nevertheless do the things we think they do, things like 'inferring' or 'taking as intentional'?

Brassier writes:

The concept of the subject, understood as a rational agent responsible for its utterances and actions, is a constraint acquired via enculturation. The moral to be drawn here is that subjectivity is not a natural phenomenon in the way in which selfhood is. (32)

But as a doing it remains a 'natural phenomenon' nonetheless (what else would it be?). As such, the question arises: Why should we expect that 'concepts' will suffer a more metacognitive-intuition-friendly fate than 'selves'? Why should we think the sciences of the brain will fail to revolutionize our traditional normative understanding of concepts, perhaps relegating it to a parochial but ineliminable shorthand forced upon us by any number of constraints or confounds, or so contradicting our presumed role in conceptual thinking as to make 'rationality' as experienced a kind of fiction? What we cognize as the 'game of giving and asking for reasons,' for all we know, could be little more than the skin of plotting beasts, an illusion foisted on metacognition for the mere want of information.

Brassier writes:

It forces us to revise our concept of what a self is. But this does not warrant the elimination of the category of agent, since an agent is not a self. An agent is a physical entity gripped by concepts: a bridge between two reasons, a function implemented by causal processes but distinct from them. (32)

Is it? How do we know? What ‘grips’ what how? Is the function we attribute to this ‘gripping’ a cognitive mirage? As we saw in the case of homicidal somnambulism above, it’s entirely unclear how subpersonal considerations bear on agency, whether understood legally or normatively more generally. But if agency is something we attribute, doesn’t this mean the sleepwalker is a murderer merely if we take him to be? Could we condemn personal Scott to death by lethal injection in good conscience knowing we need only think him guilty for him to be so? Or are our takings-as constrained by the actual function of his brain? But then how can we scientifically establish ‘degrees of agency’ when the subpersonal, the mechanistic, has the effect of chasing out agency altogether?

These are living issues. If it weren't for the continual accumulation of subpersonal knowledge, I would say we could rely on collective exhaustion to eventually settle the issue for us. Certainly philosophical fiat will never suffice to resolve the matter. Science has raised two spectres that only it can possibly exorcise (while philosophy remains shackled on the sidelines). The first is the spectre of Theoretical Incompetence, the growing catalogue of cognitive shortcomings that probably explains why only science can reliably resolve theoretical disputes. The second is the spectre of Metacognitive Incompetence, the growing body of evidence that overthrows our traditional and intuitive assumptions of self-transparency. Before the rise of cognitive science, philosophy could continue more or less numb to the pinch of the first and all but blind to the throttling possibility of the second. Now, however, we live in an age where massive, wholesale self-deception, no matter what logical absurdities it seems to generate, is a very real empirical possibility.

What we intuit regarding reason and agency is almost certainly the product of compound neglect and cognitive illusion to some degree. It could be the case that we are not intentional in such a way that we must (short of the posthuman, anyway) see ourselves and others as intentional. Or even worse, it could be the case that we are not intentional in such a way that we can only see ourselves and others as intentional whenever we deliberate on the scant information provided by metacognition–whenever we 'make ourselves explicit.' Whatever the case, whether intentionality is a first- or second-order confound (or both), this means that pursuing reason no matter where it leads could amount to pursuing reason to the point where reason becomes unrecognizable to us, to the point where everything we have assumed will have to be revised–corrected. And in a sense, this is the argument that does the most damage to Sellars's particular variant of the Soul-Soul strategy: the fact that science, having obviously run to the limits of the manifest image's intelligibility, nevertheless continues to run, continues to 'self-correct' (albeit only in a way that we can understand 'under erasure'), perhaps consigning its wannabe guarantor and faux-motivator to the very dustbin of error it once presumed to make possible.

[image: Battery Wrist]

In his recent After Nature interview, Brassier writes:

[Nihil Unbound] contends that nature is not the repository of purpose and that consciousness is not the fulcrum of thought. The cogency of these claims presupposes an account of thought and meaning that is neither Aristotelian—everything has meaning because everything exists for a reason—nor phenomenological—consciousness is the basis of thought and the ultimate source of meaning. The absence of any such account is the book’s principal weakness (it has many others, but this is perhaps the most serious). It wasn’t until after its completion that I realized Sellars’ account of thought and meaning offered precisely what I needed. To think is to connect and disconnect concepts according to proprieties of inference. Meanings are rule-governed functions supervening on the pattern-conforming behaviour of language-using animals. This distinction between semantic rules and physical regularities is dialectical, not metaphysical.

Having recently completed Rosenberg's The Atheist's Guide to Reality, I entirely concur with Brassier's diagnosis of Nihil Unbound's problem: any attempt to lay out a nihilistic alternative to the innumerable 'philosophies of meaning' that crowd every corner of intellectual life without providing a viable account of meaning is doomed to the fringes of humanistic discourse. Rosenberg, for his part, simply bites the bullet, relying on the explanatory marvels of science and its obvious incompatibilities with meaning to warrant dispensing with the latter. The problem, however, is that his readers can only encounter his case through the lens of meaning, placing Rosenberg in the absurd position of using argumentation to dispel what, for his interlocutors, lies in plain sight.

Brassier, to his credit, realizes that something must be said about meaning, that some kind of positive account must be given. But in the absence of any positive, nihilistic alternative–any means of explaining meaning away–he opts for something deflationary: he turns to Sellars (as did Dennett), and to the presumption that meaning pertains to a different, dialectical order of human community and interaction. This affords him the appearance of having it both ways (like Dennett): deference to the priority of mechanism, while insisting on the parity of meaning and reason, arguing, in effect, that we have two souls, one a neurobiological illusion, the other a 'merely functional' instrument of enormous purport and power…

Or so it seems.

What I’ve tried to show is that cognitive science cares not a whit whether we characterize our commitments as metaphysical or dialectical, that it is just as apt to give lie to metacognitively informed accounts of what we do as to metacognitively informed accounts of what we are. ‘Inferring’ is no more immune to radical scientific revision than is ‘willing’ or ‘believing’ or ‘taking as’ or what have you. So for example, if the structures underwriting consciousness in the brain were definitively identified, and the information isolated as ‘inferring’ could be shown to be, say, distorted low-dimensional projections, jury-rigged ‘fixes’ to far different evolutionary pressures, would we not begin, in serious discussions of cognition or what have you, to continually reference these limitations to the degree they distort our understanding of the actual activity involved? If it becomes a scientific fact that we are a far different creature in a far different environment than what we take ourselves to be, will that not radically transform any discourse that aspires to be cognitive?

Of course it will.

Perhaps the post-intentional philosophy of the future will see the 'game of giving and asking for reasons' as a fragmentary shadow, a comic strip version of our actual activity, more distortion than distillation, because neither the information nor the heuristics available to deliberative metacognition are adapted to its needs.

This is one reason why I think 'natural anosognosia' is such an apt way to describe our straits. We cannot get past the 'only game in town' sense of agency, primarily because there's nothing else to be got. This is the thing about positing 'functions': the assumption is that what we experience does what we think it does the way we think it should. There is no reason to assume this must be the case once we appreciate the ubiquity and the consequences of informatic neglect (and our resulting metacognitive incompetence). We have more than enough in the way of counterintuitive findings to worry that we are about to plunge over a cliff–that the soul, like the sky, might simply continue dropping into an ever deeper abyss. The more we learn about ourselves, the more post hoc and counterintuitive we become. Perhaps this is astronomically the case.

[image: Button Gut]

Here’s the funny thing: the naturalistic fundamentals are exceedingly clear. Humans are information systems that coordinate via communicated information. The engineering (reverse or forward) challenges posed by this basic picture are enormous, but conceptually, things are pretty clear–so long as you keep yourself off-screen.

We are the only ‘fundamental mystery’ in the room. The problem of meaning is the problem of us.

In addition to Rosenberg’s Atheist’s Guide to Reality I also recently completed reading Plato’s Camera by Churchland and The Cognitive Science of Science by Thagard and I found the contrast… bracing, I guess. Rosenberg made stark the pretence (or more charitably, promise) marbled throughout Churchland and Thagard, the way they ceaselessly swap between the mechanistic and the intentional as if their descriptions of the first, by the mere fact of loosely correlating to our assumptions regarding the latter, somehow explained the latter. Thagard, for instance, goes so far as to claim that the ‘semantic pointer’ model of concepts that he adapts from Eliasmith (of recent SPAUN fame) solves the symbol grounding problem without so much as mentioning how, when, or where semantic pointers (which are eminently amenable to BBT) gain their hitherto inexplicable normative/intentional properties. In other words, they simply pretend there’s no real problem of meaning–even Churchland! “Ach!” they seem to imply, “Details! Details!”

Rosenberg will have none of it. But since he has no way of explaining ‘us,’ he attempts the impossible: he tries to explain us away without explaining us at all, arguing that we are a problem for neuroscience, not for scientism (the philosophical hyper-naturalism that he sees following from the sciences). He claims ‘we’ are philosophically irrelevant because ‘we’ are inconsistent with the world as described by science, not realizing the ease with which this contention can be flipped into the claim that the sciences are philosophically irrelevant so long as they remain inconsistent with us…

Theoretical dodge-ball will not do. Brassier understands this more clearly than any other thinker I know. The problem of meaning has to be tackled. But unlike Jesus, we cannot cast the subpersonal out into two thousand suicidal swine. 'Going dialectical,' abandoning 'selves' for the perceived security of 'rational agency,' ultimately underestimates the wholesale nature of the revisionary/eliminative threat posed by the cognitive sciences, and the degree to which our intentional self-understanding relies on ignorance of our mechanistic nature. Any scientific account of physical regularities that explains semantic rules in terms that contradict our metacognitive assumptions will revolutionize our understanding of 'rational agency,' no matter what definitional/theoretical prophylactics we have in place.

Habermas’ analogy of “a consciousness that hangs like a marionette from an inscrutable criss-cross of strings” (“The Language Game or Responsible Agency and the Problem of Free Will,” 24) seems more and more likely to be the case, even at the cost of our ability to make metacognitive sense of our ‘selves’ or our ‘projects.’ (Evolution, to put the point delicately, doesn’t give a flying fuck about our ability to ‘accurately theorize’). This is the point I keep hammering via BBT. Once deliberative theoretical metacognition has been overthrown, it’s anybody’s guess how the functions we attribute to ourselves and others will map across the occluded, orthogonal functions of our brain. And this simply means that the human in its totality stands exposed to the implacable indifference of science…

I think we should be frightened–and exhilarated.

Our capacity to cognize ourselves is an evolutionary shot in the neural dark. Could anyone have predicted that 'we' have no direct access to our beliefs and motives, that 'we' have to interpret ourselves the way we interpret others? Could anyone have predicted the seemingly endless list of biases discovered by cognitive psychology? Or that the 'feeling of willing' might simply be the way 'we' take ownership of our behaviour post hoc? Or that 'moral reasoning' is primarily a PR device? Or that our brains regularly rewrite our memories? Think of Hume, the philosopher-prophet, and his observation that Adam could never deduce that water drowns or fire burns short of worldly experience. What we do, like what we are, is a genuine empirical mystery, simply because our experience of ourselves, like our experience of earth's motionless centrality, is the product of scant and misleading information.

The human in its totality stands exposed to the implacable indifference of science, and there are far, far more ways for our intuitive assumptions to be wrong than right. I sometimes imagine I'm sitting at this roulette wheel, with nearly everyone in the world 'going with their gut' and stacking all their chips on the zeros, so there's this great teetering tower swaying on intentional green, leaving the rest of the layout empty… save for solitary corner-betting contrarians like me and, I hope, Brassier.

Meathooks: Dennett and the Death of Meaning

by rsbakker

Aphorism of the Day: God is myopia, personality mapped across the illusion of the a priori.

.

In Darwin’s Dangerous Idea, Daniel Dennett attempts to show how Darwinism possesses the explanatory resources “to unite and explain everything in one magnificent vision.” To assist him, he introduces the metaphors of the ‘crane’ and the ‘skyhook’ as a general means of understanding the Darwinian cognitive mode and that belonging to its traditional antagonist:

Let us understand that a skyhook is a “mind-first” force or power or process, an exception to the principle that all design, and apparent design, is ultimately the result of mindless, motiveless mechanicity. A crane, in contrast, is a subprocess or a special feature of a design process that can be demonstrated to permit the local speeding up of the basic, slow process of natural selection, and can be demonstrated to be itself the predictable (or retrospectively explicable) product of the basic process. Darwin’s Dangerous Idea, 76

The important thing to note in this passage is that Dennett is actually trying to find some middle ground here between what might be called 'top-down' intuitions, which suggest some kind of essential break between meaning and nature, and 'bottom-up' intuitions, which seem to suggest there is no such thing as meaning at all. What Dennett attempts to argue is that the incommensurability of these families of intuitions is apparent only, that one only needs to see the boom, the gantry, the cab, and the tracks to understand how skyhooks are in reality cranes, the products of Darwinian evolution through and through.

The arch-skyhook in the evolutionary story, of course, is design. What Dennett wants to argue is that the problem has nothing to do with the concept of design per se, but rather with a certain way of understanding it. Design is what Dennett calls a 'Good Trick,' a way of cognizing the world without delving into its intricacies, a powerful heuristic selected precisely because it is so effective. On Dennett's account, then, design really looks like this:

[image: photograph of a crane]

And only apparently looks like this:

[image: the same photograph with the crane obscured, leaving only the hook]

Design, in other words, is not the problem–design is a crane, something easily explicable in natural terms. The problem, rather, lies in our skyhook conception of design. This is a common strategy of Dennett’s. Even though he’s commonly accused of eliminativism (primarily for his rejection of ‘original intentionality’), a fair amount of his output is devoted to apologizing for the intentional status quo, and Darwin’s Dangerous Idea is filled with some of his most compelling arguments to this effect.

Now I actually think the situation is nowhere near so straightforward as Dennett seems to think. I also believe Dennett’s ‘redefinitional strategy,’ where we hang onto our ‘folk’ terms and simply redefine them in light of incoming scientific knowledge, is more than a little tendentious. But to see this, we need to understand why it is these metaphors of crane and skyhook capture as much of the issue of meaning and nature as they do. We need to take a closer look.

Darwin’s great insight, you could say, was simply to see the crane, to grasp the great, hidden mechanism that explains us all. As Dennett points out, if you find a ticking watch while walking in the woods, the most natural thing in the world is to assume is that you’ve discovered an intentional artifact, a product of ‘intelligent design.’ Darwin’s world-historical insight was to see how natural processes lacking motive, intelligence, or foresight could accomplish the same thing.

But what made this insight so extraordinary? Why was the rest of the crane so difficult to see? Why, in other words, did it take a Darwin to show us something that, in hindsight at least, should have been so very obvious?

Perspective is the most obvious, most intuitive answer. We couldn’t see because we were in no position to see. We humans are ephemeral creatures, with imaginations that can be beggared by mere centuries, let alone the vast, epochal processes that created us. Given our frame of informatic reference, the universe is an engine that idles so low as to seem cold and dead–obviously so. In a sense, Darwin was asking his peers to believe, or at least consider, a rather preposterous thing: that their morphology only seemed fixed, that when viewed on the appropriate scale, it became wax, something that sloshed and spilled into environmental moulds.

A skyhook, on this interpretation, is simply what cranes look like in the fog of human ignorance, an artifact of myopia–blindness. Lacking information pertaining to our natural origins (and what is more, lacking information regarding that lack), we resorted to those intuitions that seemed most immediate, found ways, as we are prone to do, to spin virtue and flattery out of our ignorance. Waste not, want not.

All this should be clear enough, I think. As 'brights' we have an ingrained animus against the beliefs of our outgroup competitors. 'Intelligent design,' in our circles at least, is what psychologists call an 'identity claim,' a way to sort our fellows on perceived gradients of cognitive authority. As such, it's very easy to redefine, as far as intentional concepts go. Contamination is contamination, terminological or no. And so we have grown used to using the intuitive, which is to say, skyhook, concept of design 'under erasure,' as continental philosophers might say–as a mere façon de parler.

But I fear the situation is nowhere quite so easy, that when we take a close look at the ‘skyhook’ structure of ‘design,’ when we take care to elucidate its informatic structure as a kind of perspectival artifact, we have good reason to be uncomfortable–very uncomfortable. Trading in our intuitive concept of design for a scientifically informed one, as Dennett recommends, actually delivers us to a potentially catastrophic implicature, one that only seems innocuous for the very reason our ancestors thought ‘design’ so obvious and innocuous: ignorance and informatic neglect.

On Dennett’s account, design is a kind of ‘stance’–literally, a cognitive perspective–a computationally parsimonious way of making sense of things. He has no problem with relying on intentional concepts because, as we have seen, he thinks them reliable, at least enough for his purposes. For my part, I prefer to eschew ‘stances’ and the like and talk exclusively in terms of heuristics. Why? For one, heuristics are entirely compatible with the mechanistic approach of the life sciences–unlike stances. As such, they do not share the liabilities of intentional concepts, which are much more prone to be applied out of school, and so carry an increased risk of generating conceptual confusion. Moreover, by skirting intentionality, heuristic talk obviates the threat of circularity. The holy grail of cognitive science, after all, is to find some natural (which is to say, nonintentional) way to explain intentionality. But most importantly, heuristics, unlike stances, make explicit the profound role played by informatic neglect. Heuristics are heuristics (as opposed to optimization devices) by virtue of the way they systematically ignore various kinds of information. And this, as we shall see, makes all the difference in the world.

Recall the question of why we needed Darwin to show us the crane of evolution. The crane was so hard to see, I suggested, because of our limited informatic frame of reference–our myopic perspective. So then why did we assume design was the appropriate model? Why, in the absence of information pertaining to natural selection, should design become the default explanation of our biological origins as opposed to, say, 'spontaneity'? When On the Origin of Species was published in 1859, for instance, many naturalists actually accepted some notion of purposive evolution; it was natural selection they found offensive, the mindlessness of biological origins. One can cite many contributing factors in answering this question, of course, but looming large over all of them is the fact that design is a natural heuristic, one of many specialized cognitive tools developed by our oversexed, undernourished ancestors.

By rendering the role of informatic neglect implicit, Dennett’s approach equivocates between ‘circumstantial’ and ‘structural’ ignorance, or in other words, between the mere inability to see and blindness proper. Some skyhooks we can dissolve with the accumulation of information. Others we cannot. This is why merely seeing the crane of evolution is not enough, why we must also put the skyhook of intuitive design on notice, quarantine it: we may be born in ignorance of evolution, but we die with the informatic neglect constitutive of design.

Our ignorance of evolution was never a simple matter of ignorance; it was also a matter of human nature, an entrenched mode of understanding, one incompatible with the facts of Darwinian evolution. Design, it seemed, was obviously true, either outright or upon the merest reflection. We couldn't see the crane of evolution, not simply because we were in no position to see (given our ephemeral nature), but also because we were in position to see something else, namely, the skyhook of design. Think about the two photos I provided above, the way the latter, the skyhook, was obviously an obfuscation of the former, the crane, not merely because you had the original photo to reference, but because you could see that something had been covered over–because you had access, in other words, to information pertaining to the lack of information. The first photo of the crane strikes us as complete, as informatically sufficient. The second photo of the skyhook, however, strikes us as obviously incomplete.

We couldn’t see the crane of evolution, in other words, not just because we were in position to see something else, the skyhook of design, but because we were in position to see something else and nothing else. The second photo, in other words, should have looked more like this:

Enter the Blind Brain Theory. BBT analyzes problems pertaining to intentionality and consciousness in terms of informatic availability and cognitive applicability: in terms of what information we can reasonably expect conscious deliberation to access, and the kinds of heuristic limitations we can reasonably expect it to face. Two of the most important concepts arising from this analysis are apparent sufficiency and asymptotic limitation. Since differentiation is always a matter of more information, informatic sufficiency is always the default. We always need to know more, you could say, to know that we know less. This is why intentionality and consciousness, on the BBT account, confront philosophy and science with so many apparent conundrums: what we think we see when we pause to reflect is limned and fissured by numerous varieties of informatic neglect, deficits we cannot intuit. Thus asymptosis and the corresponding illusion of informatic sufficiency, the default sense that we have all the information we need simply because we lack information pertaining to the limits of that information.
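The structure of the sufficiency illusion is simple enough to mock up. Here is a toy sketch (mine, not any formal piece of BBT): a 'metacognitive channel' that compresses a rich state many-to-one, such that nothing in the compressed report so much as hints at what was discarded:

# Python sketch of sufficiency-by-neglect. The 'brain states' and the
# compression scheme are arbitrary stand-ins for illustration.

import random

def brain_state():
    # A 'rich' subpersonal state: eight independent binary features.
    return tuple(random.randint(0, 1) for _ in range(8))

def metacognitive_report(state):
    # Lossy, many-to-one compression: only the first two features survive.
    # Crucially, the report carries no marker of its own incompleteness.
    return state[:2]

sources = {}
for _ in range(1000):
    s = brain_state()
    sources.setdefault(metacognitive_report(s), set()).add(s)

for report, states in sources.items():
    print(report, "covers", len(states), "distinct underlying states")

A consumer fed only the reports has no way, from the reports alone, of telling that dozens of distinct states collapse onto each one. More information (the right-hand column above) is needed to flag the insufficiency of the information, which is the whole point.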

This is where I think all those years I spent reading continental philosophy have stood me in good stead. This is also where those without any background in continental thought generally begin squinting and rolling their eyes. But the phenomenon is literally one we encounter every day–every waking moment, in fact (although this would require a separate post to explain). In epistemological terms, it refers to 'unknown unknowns,' or unk-unks as they are called in engineering. In fact, we encountered its cognitive dynamics just above when puzzling through the question of why natural selection, which seems so obvious to us in hindsight, could count as such a revelation prior to 1859. Natural selection, quite simply, was an unknown unknown. Lacking the least information regarding the crane, in other words, meant that design seemed the only option, the great big 'it's-gotta-be' of early nineteenth-century biology.

In a sense, all BBT does is import this cognitive dynamic–call it the 'Only-game-in-town Effect'–into human cognition and consciousness proper. In continental philosophy you find this dynamic conceptualized in a variety of ways: as 'presence' or 'identity thinking,' for example, in its positive incarnation (sufficiency), or as 'différance' or 'alterity' in its negative (neglect). But as I say, we witness it everywhere in our collective cognitive endeavours. All you need do is think of the way the accumulation of alternatives has the effect of progressively weakening novel interpretations, such as Kant's, say, in philosophy. Kant, who was by no means a stupid man, could actually believe in the power of transcendental deduction to deliver synthetic a priori truths simply because he was the first. Its interpretative nature only became evident as the variants, such as Fichte's, began piling up. Or consider the way contextualizing claims, giving them speakers and histories and motives and so on, has the strange effect of relativizing them, somehow robbing them of veridical force. Back in my teaching days, I would illustrate the power of unk-unk via a series of recontextualizations. I would give the example of a young man stabbing an old man, and ask my students if it's a crime. "Yes," they would cry. "What could be more obvious!" Then I would start stacking contexts, such as a surrounding mob of other men stabbing one another, then a giant arena filled with screaming spectators watching it all, and so on.

The Only-game-in-town Effect (or the Invisibility of Ignorance), according to BBT, plays an even more profound role within us than it does between us. Conscious experience and cognition as we intuit them, it argues, are profoundly structured 'by' unk-unk–or informatic neglect.

This is all just to say that the skyhook of design always fills the screen, so to speak, that it always strikes us as sufficient, and can only be revealed as parochial through the accumulation of recalcitrant information. And this makes plain the astonishing nature of Darwin's achievement, just how far he had to step out of the traditional conceptual box to grasp the importance of natural selection. At the same time, it also explains why, at least for some, the crane was in the 'air,' so to speak, why Darwin ultimately found himself in a race with Wallace. The life sciences, by the middle of the 19th century, had accumulated enough 'recalcitrant information' to reveal something of the heuristic parochialism of intuitive design and its inapplicability to the life sciences as a matter of fact, as opposed to mere philosophical reflection à la, for instance, Hume.

Intuitive design is a native cognitive heuristic that generates ‘sufficient understanding’ via the systematic neglect of ‘bottom-up’ causal information. The apparent ‘sufficiency’ of this understanding, however, is simply an artifact of this self-same neglect: as is the case with other intentional concepts, it is notoriously difficult to ‘get behind’ this understanding, to explain why it should count as cognition at all. To take Dennett’s example of finding a watch in the forest: certainly understanding that a watch is an intentional artifact, the product of design, tells you something very important, something that allows you to distinguish watches from rocks, for instance. It also tells you to be wary, that other agents such as yourself are about, perhaps looking for that watch. Watch out!

But what, exactly, is it you are understanding? Design seems to possess a profound ‘resolution’ constraint: unlike mechanism, which allows explanations at varying levels of functional complexity, organelles to cells, cells to tissues, tissues to organs, organs to animal organisms, etc., design seems stuck at the level of the ‘personal,’ you might say. Thus the appropriateness of the metaphor: skyhooks leave us hanging in a way that cranes do not.

And thus the importance of cranes. Precisely because of its variable degrees of resolution, you might say, mechanistic understanding allows us to ‘get behind’ our environments, not only to understand them ‘deeper,’ but to hack and reprogram them as well. And this is the sense in which cranes trump skyhooks, why it pays to see the latter as perspectival distortions of the former. Design, as it is intuitively understood, is a skyhook, which is to say, a cognitive illusion.

And here we can clearly see how the threat of tendentiousness hangs over Dennett’s apologetic redefinitional project. The design heuristic is effective precisely because it systematically neglects causal information. It allows us to understand what systems are doing and will do without understanding how they actually work. In other words, what makes design so computationally effective across a narrow scope of applications, causal neglect, seems to be the very thing that fools us into thinking it’s a skyhook–causal neglect.

Looked at in this way, it suddenly becomes very difficult to parse what it is Dennett is recommending. Replacing the old, intuitive, skyhook design-concept with a new, counterintuitive, crane design-concept means using a heuristic whose efficiencies turn on causal neglect in a manner amenable to causal explanation. Now it seems easy, I suppose, to say he's simply drawing a distinction between informatic neglect as a virtue and informatic neglect as a vice, but can this be so? When an evolutionary psychologist says, 'We are designed for persistence hunting,' are we cognizing 'designed for' in a causal sense? If so, then what's the bloody point of hanging onto the concept at all? Or are we cognizing 'designed for' in an intentional sense? If so, then aren't we simply wrong? Or are we, as seems far more likely the case, cognizing 'designed for' in an intentional sense only 'as if' or 'under erasure,' which is to say, as a mere façon de parler?

Either way, the prospects for Dennett's apologetic project, at least in the case of design, seem to look exceedingly bleak. The fact that design cannot be the skyhook it seems to be, that it is actually a crane, does nothing to change the fact that it leverages computational efficiencies via causal neglect, which is to say, by looking at the world through skyhook glasses. The theory behind his cranes is impeccable. The very notion of crane-design as a deployable concept, however, is incoherent. And using concepts 'under erasure,' as one must do when using 'design' in evolutionary contexts, would seem to stand upon the very lip of an eliminativist abyss.

And this is simply an instance of what I’ve been ranting about all along here on Three Pound Brain, the calamitous disjunction of knowledge and experience, and the kinds of distortions it is even now imposing on culture and society. The Semantic Apocalypse.

.

But Dennett is interested in far more than simply providing a new Darwinian understanding of design, he wants to mint a new crane-coinage for all intentional concepts. So the question becomes: To what extent do the considerations above apply to intentionality as a whole? What if it were the case that all the peculiarities, the interminable debates, the inability to ‘get behind’ intentionality in any remotely convincing way–what if all this were more than simply coincidental? Insofar as all intentional concepts systematically neglect causal information, we have ample reason to worry. Like it or not, all intentional concepts are heuristic, not in any old manner, but in the very manner characteristic of design.

Brentano, not surprisingly, provides the classic account of the problem in Psychology From an Empirical Standpoint, some fifteen years after the publication of On the Origin of Species:

Every mental phenomenon includes something as object within itself, although they do not all do so in the same way. In presentation something is presented, in judgement something is affirmed or denied, in love loved, in hate hated, in desire desired and so on. This intentional in-existence is characteristic exclusively of mental phenomena. No physical phenomenon exhibits anything like it. We can, therefore, define mental phenomena by saying that they are those phenomena which contain an object intentionally within themselves. 68

No physical phenomenon exhibits intentionality, and likewise, no intentional phenomenon exhibits anything like causality, at least not obviously so. The reason for this, on the BBT account, is as clear as can be. Most are inclined to blame the computational intractability of cognizing and tracking the causal complexities of our relationships. The idea (and it is a beguiling one) is that aboutness is a kind of evolved discovery, that the exigencies of natural selection cobbled together a brain capable of exploiting a preexisting logical space–what we call the 'a priori.' Meaning, or intentionality more generally, on this account is literally 'found money.' The vexing question, as always, is one of divining how this logical level is related to the causal.

On the BBT account, the computational intractability of cognizing and tracking the causal complexities of our environmental relationships is also to blame, but aboutness, far from being found money, is rather a kind of 'frame heuristic,' a way for the brain to relate itself to its environments absent causal information pertaining to this relation. It presumes that consciousness is a distributed, dynamic artifact of some subsystem of the brain and that, as such, it faces severe constraints on its access to information generally, and has almost no access to information regarding its own neurofunctionality whatsoever:

[image: diagram of the neuro-environmental circuit]

It presumes, in other words, that the information available for deliberative or conscious cognition must be, for developmental as well as structural reasons, drastically attenuated. And it's easy to see how this has to be the case, simply given the dramatic granularity of consciousness compared to the boggling complexities of our peta-flopping brains.

The real question–the million dollar question, you might say–turns on the character of this informatic attenuation. At the subpersonal level, 'pondering the mental' consists (we like to suppose anyway) in the recursive uptake of 'information regarding the mental' by 'System 2,' or conscious, deliberative cognition. The question immediately becomes: 1) Is this information adequate for cognition? and 2) Are the heuristic systems employed even applicable to this kind of problem, namely, the 'problem of the mental'? Is the information (as Dennett seems to assume throughout his corpus) 'merely compressed,' which is to say, merely stripped to its essentials to maximize computational efficiencies? Or is it a far, far messier affair? Given that the human cognitive 'toolkit,' as they call it in ecological rationality circles, is heuristically adapted to troubleshoot external environments, can we assume that mental phenomena actually lie within its scope of application? Could the famed and hoary conundrums afflicting philosophy of mind and consciousness research be symptoms of heuristic overreach, the application of specialized cognitive tools to a problem set they are simply not adapted to solve?

Let’s call the issue expressed in this nest of questions the ‘Attenuation Problem.’

It’s worth noting at this juncture that although Dennett is entirely on board with the notion that ‘the information available for deliberative or conscious cognition must be drastically attenuated’ (see, for instance, “Real Patterns”), he inexplicably shies from any detailed consideration of the nature of this attenuation. Well, perhaps not so inexplicably. For Dennett, the Good Tricks are good because they are efficacious and because they are winners of the evolutionary sweepstakes. He assumes, in other words, that the Attenuation Problem is no problem at all, simply because it has been resolved in advance. Thus, his apologetic, redefinitional programme. Thus his endless attempts to disabuse his fellow travellers of the perceived need to make skyhooks real:

I know that others find this vision so shocking that they turn with renewed eagerness to the conviction that somewhere, somehow, there just has to be a blockade against Darwinism and AI. I have tried to show that Darwin’s dangerous idea carries the implication that there is no such blockade. It follows from the truth of Darwinism that you and I are Mother Nature’s artefacts, but our intentionality is none the less real for being an effect of millions of years of mindless, algorithmic R and D instead of a gift from on high. Darwin’s Dangerous Idea, 426-7

Cranes are all we have, he argues, and as it turns out, they are more than good enough.

But, as we’ve seen in the case of design, the crane version forces us to check our heuristic intuitions at the door. Given that the naturalization of design requires adducing the very causal information that intuitive design neglects to leverage heuristic efficiencies, there cannot be, in effect, any coherent, naturalized concept of design, as opposed to the employment of intuitive design ‘under erasure.’ Real or not, the skyhook comes first, leaving us to append the rest of the crane as an afterthought. Apologetic redefinition is simply not enough.

And this suggests that something might be wrong with Dennett’s arguments from efficacy and evolution for the ‘good enough’ status of derived intentionality. As it turns out, this is precisely the case. Despite their prima facie appeal, neither the apparent efficacy nor the evolutionary pedigree of our intentional concepts provide Dennett with what he needs.

To see how this could be the case, we need to reconsider the two conceptual dividends of BBT considered above, sufficiency and neglect. Since more information is required to flag the insufficiency of the information (be it ‘sensory’ or ‘cognitive’) broadcast through or integrated into consciousness, sufficiency is the perennial default. This is the experiential version of what I called the ‘Only-game-in-town Effect’ above. This means that insufficiency will generally have to be inferred against the grain of a prior sense of intuitive sufficiency. Thus, one might suppose, evolution’s continued difficulties with intuitive design, and science’s battle against anthropomorphic worldviews more generally: not only does science force us to reason around elements of our own cognitive apparatus, it forces us to overcome the intuition that these elements are good enough to tell us what’s what on their own.

Dennett, in this instance at least, is arguing with the intuitive grain!

Intentionality, once again, systematically neglects causal information. As Chalmers puts it, echoing Leibniz and his problem of the Mill:

The basic problem has already been mentioned. First: Physical descriptions of the world characterize the world in terms of structure and dynamics. Second: From truths about structure and dynamics, one can deduce only further truths about structure and dynamics. And third: truths about consciousness are not truths about structure and dynamics. “Consciousness and Its Place in Nature”

Informatic neglect simply means that conscious experience tells us nothing about the neurofunctional details of conscious experience. Rather, we seem to find ourselves stranded with an eerily empty version of what the life sciences tell us we in fact are, the asymptotic (finite but unbounded) clearing called 'consciousness' or 'mind' containing, as Brentano puts it, 'objects within itself.' What is a mere fractional slice of the neuro-environmental circuit sketched above literally fills the screen of conscious experience, as it were, appearing something like this:

[image: an arrow set in a blank white field]

Which is to say, something like a first-person perspective, where environmental relations appear within a ‘transparent frame’ of experience. Thus all the blank white space around the arrow: I wanted to convey the strange sense in which you are the ‘occluded frame,’ here, a background where the brain drops out, not just partially, not even entirely, but utterly. Floridi refers to this as the ‘one-dimensionality of experience,’ the way “experience is experience, only experience, and nothing but experience” (The Philosophy of Information, 296). Experience utterly fills the screen, relegating the mechanisms that make it possible to oblivion. As I’ve quipped many times: Consciousness is a fragment that constitutively confuses itself for a whole, a cog systematically deluded into thinking it’s the entire machine. Sufficiency and neglect, taken together, mean we really have no way short of a mature neuroscience of determining the character of the informatic attenuation (how compressed, depleted, fragmentary, distorted, etc.) of intentional phenomena.

So consider the evolutionary argument, the contention that evolution assures us that intentional attenuations are generally happy adaptations: Why else would they have been selected?

To this, we need only reply: Sure, but adapted for what? Say subreption was the best way for evolution to proceed: We have sex because we lust, not because we want to replicate our genetic information, generally speaking. We pair-bond because we love, not because we want to successfully raise offspring to the age of sexual maturation, generally speaking. When it comes to evolution, we find more than a few 'ulterior motives.' One need only consider the kinds of evolutionary debates you find in cognitive psychology, for instance, to realize that our intuitive sense of our myriad capacities need not line up with their adaptive functions in any way at all, let alone those we might consider 'happy.'

Or say evolution was only concerned with providing what might be called 'exigency information' for deliberative cognition, the barest details required for a limited subset of cognitive activities. One could cobble together a kind of neuro-Wittgensteinian argument, suggesting that we do what we do all well and fine, but that as soon as we pause to theorize what we do, we find ourselves limited to mere informatic rumour and innuendo that, thanks to sufficiency, we promptly confuse for apodictic knowledge. It literally could be the case that what we call philosophy amounts to little more than submitting the same 'mangled' information to various deliberative systems again and again and again, hoping against hope for a different result. In fact, you could argue that this is precisely what we should expect to be the case, given that we almost certainly didn't evolve to 'philosophize.'

In other words, how does Dennett know the ‘intentionality’ he and others are ‘making explicit’ accurately describes the mechanisms, the Good Tricks, that evolution actually selected? He doesn’t. He can’t.

But if the evolutionary argument holds no water, what about Dennett's argument regarding the out-and-out efficacy of intentional concepts? Unlike the evolutionary complaint, this argument is, I think, genuinely powerful. After all, we seem to use intentional concepts to understand, predict, and manipulate each other all the time. And perhaps even more impressively, we use them (albeit in stripped-down form) in formal semantics and all its astounding applications. Fodor, for instance, famously argues that the use of representations in computation provides an all-important 'compatibility proof.' Formalization links semantics to syntax, and computation links syntax to causation. It's hard to imagine a better demonstration of the way skyhooks could be linked to cranes.
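The picture is easy enough to illustrate. In the following toy (my own, not Fodor's), an 'inference' is defined purely over the shapes of strings, and executed by causal machinery that consults no meanings whatsoever; the semantics exists only for us, reading the printout:

# Python sketch: modus ponens as pure symbol-shuffling. The strings mean
# nothing to the machine; the derivation is syntax, causally implemented.

def derive(premises):
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for s in list(derived):
            if "->" in s:
                antecedent, consequent = (p.strip() for p in s.split("->", 1))
                if antecedent in derived and consequent not in derived:
                    derived.add(consequent)   # 'inference' as string insertion
                    changed = True
    return derived

print(sorted(derive({"rains", "rains -> wet", "wet -> slippery"})))
# ['rains', 'rains -> wet', 'slippery', 'wet', 'wet -> slippery']

Semantics rides on syntax, and syntax rides on causation: that, at least, is the intuition the compatibility proof trades on.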

Except that, like fitting the belly of Africa into the gut of the Caribbean, it never quite seems to work when you actually try. Thus Searle's famous Chinese Room Argument and Harnad's generalization of it into the Symbol Grounding Problem. But the intuition persists that it has to work somehow: After all, what else could account for all that efficacy?

Plenty, it turns out. Intentional concepts, no matter how attenuated, will be efficacious to the degree that the brain is efficacious, simply by virtue of being systematically related to the activity of the brain. The upshot of sufficiency and neglect, recall, is that we are prone to confuse what little information we have available for nearly all the information available. The greater neuro-environmental circuit revealed by third-person science simply does not exist for the first-person, not even as an absence. This generates the problem of metonymicry, or the tendency for consciousness to take credit for the whole cognitive show regardless of what actually happens neurocomputationally backstage. No matter how mangled our metacognitive understanding, how insufficient the information broadcast or integrated, in the absence of contradicting information, it will count as our intuitive baseline for what works. It will seem to be the very rule.
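
The structure of metonymicry is easy enough to mock up. A toy sketch (hypothetical names, nothing remotely neural about it): a reporting subsystem receives only the outputs of processing it cannot see, and since nothing in its 'introspection' marks what it's missing, it defaults to taking credit for the lot.

```python
# A toy model of metonymicry: the reporter sees results, never the pipeline,
# so its self-report credits itself with the whole computation.

def hidden_pipeline(x):
    """Stands in for the neurocomputational 'backstage', invisible to the reporter."""
    return (x * 37 + 11) % 100

class Reporter:
    """Receives only outputs; its log contains no token for what it neglects."""
    def __init__(self):
        self.log = []

    def receive(self, result):
        self.log.append(result)

    def introspect(self):
        # No contradicting information is available, so sufficiency-plus-neglect
        # yields the default self-model: "I did this."
        return f"I computed {self.log}"

reporter = Reporter()
for x in range(3):
    reporter.receive(hidden_pipeline(x))  # the real work happens out of view
print(reporter.introspect())              # I computed [11, 48, 85]
```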

And this, my view predicts, is what science will eventually make of the 'a priori.' It will show it to be of a piece with the soul, which is to say, more superstition, a cognitive illusion generated by sufficiency and informatic neglect. As a neural subsystem, the conscious brain has more than just the environment from which to learn; it also has the brain itself. Perhaps logic and mathematics as we intuitively conceive them are best thought of, from the life sciences perspective at least (that is, the perspective you hope will keep you alive every time you see your doctor), as kinds of depleted, truncated informatic shadows cast by brains performatively exploring the most basic natural permutations of information processing, the combinatorial ensemble of nature's most fundamental, hyper-applicable interaction patterns.

On this view, 'computer programming,' for instance, looks something like:

[diagram omitted: two conjoined machines, brain and computer, with the conscious subsystem positioned on the circuit between them]

where essentially, you have two machines conjoined, two 'implementations,' with semantics arising as an artifact of the varieties of informatic neglect characterizing the position of the conscious subsystem on this circuit. On this account, our brains 'program' the computer, and our conscious subsystems, though they do participate, do so under a number of onerous informatic constraints. As a result, we program blind to all aetiologies save the 'lateral,' which is to say, those functionally independent mechanisms belonging to the computer and to our immediate environment more generally. In place of any thoroughgoing access to these 'medial' (functionally dependent) causal relations, conscious cognition is forced to rely on what little information it can glean, which is to say, the cartoon skyhooks we call semantics. Since this information is systematically related to what the brain is actually doing, and since informatic neglect renders it apparently sufficient, conscious cognition decides it's the outboard engine driving the whole bloody boat. Neural interaction patterns author inference schemes that, thanks to sufficiency and neglect, conscious cognition deems the efficacious author of computer interaction patterns.
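
One crude way to dramatize this in code (a sketch only, assuming nothing beyond the account above): strip the 'meaning' from a program and the machine's causal business proceeds unchanged, because the semantics rides on the programmer's side of the circuit.

```python
# Two behaviourally identical functions: one dressed in intentional gloss
# (names, docstring), one bare. The interpreter executes the same arithmetic
# either way; the 'meaning' contributes nothing to the causation.

def monthly_interest(balance, annual_rate):
    """What the conscious subsystem 'sees': a purpose, a task, a meaning."""
    return balance * annual_rate / 12

def f(a, b):
    return a * b / 12  # same syntactic shape, same causal consequences

assert monthly_interest(1200.0, 0.05) == f(1200.0, 0.05)  # both yield 5.0
```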

Semantics, in other words, can be explained away.

The very real problem of metonymicry allows us to see how Dennett's famous 'two black boxes' thought-experiment (Darwin's Dangerous Idea, 412-27), far from dramatically demonstrating the efficacy of intentionality, is simply an extended exercise in question-begging. Dennett tells the story of a group of researchers stranded with two black boxes, each housing a supercomputer loaded with the same database of 'true facts' about the world, only written in different programming languages. One box has two buttons labelled alpha and beta, while the second box has three lights coloured yellow, red, and green. A single wire connects them. Unbeknownst to the researchers, the button box simply transmits a true statement when the alpha button is pushed, which the bulb box acknowledges by lighting the red bulb for agreement, and a false statement when the beta button is pushed, which the bulb box acknowledges by lighting the green bulb for disagreement. The yellow bulb illuminates only when the bulb box can make no sense of the transmission, which is always the case when the researchers disconnect the boxes and, being entirely ignorant of any of these details, substitute signals of their own.
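
The setup is simple enough to simulate. The following toy (my own gloss; Dennett gives no code, and the shared dictionary here elides his two 'different programming languages') makes the button-to-bulb regularity explicit:

```python
import random

# The 'database of true facts' both supercomputers share.
WORLD = {"snow is white": True, "grass is purple": False}

def button_box(button):
    """alpha transmits some true statement, beta some false one, as raw bytes."""
    wanted = (button == "alpha")
    claims = [s for s, v in WORLD.items() if v == wanted]
    return random.choice(claims).encode("utf-8")

def bulb_box(signal):
    """red = agreement, green = disagreement, yellow = can't parse it."""
    try:
        claim = signal.decode("utf-8")
    except UnicodeDecodeError:
        return "yellow"
    if claim not in WORLD:
        return "yellow"
    return "red" if WORLD[claim] else "green"

print(bulb_box(button_box("alpha")))    # 'red', every time
print(bulb_box(button_box("beta")))     # 'green', every time
print(bulb_box(b"\xff\xfe arbitrary"))  # 'yellow': the researchers' own signals
```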

What Dennett wants to show is how these box-to-box interactions would be impossible to decipher short of taking the intentional stance, in which case, as he points out, the communications become easy enough for a child to comprehend. But all he's really saying is that the coded transmissions between our brains only make sense from the standpoint of our environmentally informed brains–that the communications between them are adapted to their idiosyncrasies as environmentally embedded, supercomplicated systems. He thinks he's arguing the ineliminability of intentionality as we intuitively conceive it, as if it were the one wheel required to make the entire mechanism turn. But again, the spectre of metonymicry, the fact that, no matter where our intentional intuitions fit on the neurofunctional food chain, they will strike us as central and efficacious even when they are not, means that all this thought experiment shows–all that it can show, in fact–is that our brains communicate in idiosyncratic codes that conscious cognition seems to access via intentional intuitions. To assume that our assumptions regarding the 'intentional' capture that code without gross, even debilitating, distortions is simply to beg the question.

The question we want answered is how intentionality as we understand it is related to the efficacy of our brains. We want to know how conscious experience and cognition fit into this far more sophisticated mechanistic picture. Another way of putting this, since it amounts to the same thing, is that we want to know whether it makes any sense doing philosophy as we have traditionally conceived it. How far can we trust our native intuitions regarding intentionality? The irony, of course, is that Dennett himself argues no, at least to the extent that skyhooks are both intuitive and illusory. Efficacy, understood via design, is 'top-down,' the artifact of agency, which is to say, another skyhook. The whole point of introducing the metaphor of cranes was to find some way of capturing our 'skyhookish' intuitions in a manner amenable to Darwinian evolution. And, as we saw in the case of design, above, this inexorably means using the concept 'under erasure.'

The way cognitive science may force us to use all intentional concepts.

.

Consciousness, whatever it turns out to be, is informatically localized. We are just beginning the hard work of inventorying all the information, constitutive or otherwise, that slips through its meagre nets. Because it is localized, it lacks access to vast amounts of information regarding its locality. This means that it is locally conditioned in such a way that it assumes itself locally unconditioned–to be a skyhook as opposed to a crane.

A skyhook, of course, that happens to look something like this

[image omitted: the reader's first-person view, these very words on the page]

which is to say, what you are undergoing this very moment, reading these very words. On the BBT account, the shape of the first-person is cut from the third-person with the scissors of neglect. The best way to understand consciousness as we humans seem to generally conceive it, to unravel the knots of perplexity that seem to belong to it, is to conceive it in privative terms, as the result of numerous informatic subtractions.* Since those subtractions are a matter of neglect from the standpoint of conscious experience and cognition, they in no way exist for conscious experience and cognition, which means their character utterly escapes our ability to cognize, short of the information accumulating in the cognitive sciences. Experience provides us with innumerable assumptions regarding what we are and what we do, intuitions stamped in various cultural moulds, all conforming to the metaphorics of the skyhook. Dennett’s cranes are simply attempts to intellectually plug these skyhooks into the meat that makes them possible, allowing him to thus argue that intentionality is real enough.

Metonymicry shows that the crane metaphor not only fails to do the conceptual heavy lifting that Dennett's apologetic redefinitional strategy demands, it also fails to capture the 'position,' if you will, of our intentional skyhooks relative to the neglected causality that makes them possible. Cranes may be 'grounded,' but they still have hooks: this is why the metaphor is so suggestive. Mundane they may be, but cranes can still do the work that skyhooks accomplish via magic. The presumption that intentional concepts do the work we think they do is, you could say, built right into the metaphoric frame of Dennett's argument. But the problem is that skyhooks are not 'cranes'; rather, they are cogs, mechanistic moments in a larger mechanism, rising from neglected processes to discharge neglected functions. They hang in the meat, and the question of where they hang, and of the degree to which their functional position matches or approximates their intuitive one, remains profoundly open and entirely empirical.

Thus, the pessimistic, quasi-eliminativist thrust of BBT: once metonymicry decouples intentionality from neural efficacy, it seems clear there are far more ways for our metacognitive intuitions to be deceived than otherwise.

Either way, the upshot is that efficacy, like evolution, guarantees nothing when it comes to intentionality. It really could be the case that we are simply 'pre-Darwinian' with reference to intentionality, in a manner resembling the various commitments to design held back in Darwin's day. Representation could very well suffer the same fate vis-à-vis the life sciences–it literally could become a concept we can only use 'under erasure' when speaking of human cognition.

Science overcoming the black box of the brain could be likened to a gang of starving thieves breaking into a treasure room they had obsessively pondered for the entirety of their unsavoury careers. They range about, kicking over cases filled with paper, fretting over the fact that they can’t find any gold, anything possessing intrinsic value. Dennett is the one who examines the paper, and holds it up declaring that it’s cash and so capable of providing all the wealth anyone could ever need.

I’m the one-eyed syphilitic, the runt of the evil litter, who points out Jefferson Davis staring up from each and every $50 bill.

.

Notes

* It's worth pausing to consider the way BBT 'pictures' consciousness. First, BBT is agnostic on the issue of how the brain generates consciousness; it is concerned, rather, with the way consciousness appears. Taking a deflationary, 'working conception' of nonsemantic information, and assuming three things–that consciousness involves the integration of differentiated elements, that it has no way of cognizing information related to its own neurofunctionality, and that it is a subsystematic artifact of the brain–it sees the first-person and all its perplexities as expressions of informatic neglect. Consider the asymptotic margins of visual attention–the way the limits of what you are seeing this very moment cannot themselves be seen. BBT argues that similar asymptotic margins, or 'information horizons,' characterize all the modalities of conscious experience–as they must, insofar as the information available to each is finite. The radical step in the picture is to see how this trivial fact can explain the apparent structure of the first-person as an asymptotic partitioning of a larger informatic environment. So it suggests that a first-person structural feature as significant and as perplexing as the Now, for instance, can be viewed as a kind of temporal analogue of our visual margin, always apparently the 'same' because timing can no more time itself than seeing can see itself, and always different because other cognitive systems (as in the case of vision again) can frame it as another moment within a larger (in this case temporal) environment. Most of the problems pertaining to consciousness, the paradoxicality, the incommensurability, the inexplicability, can be explained if we simply adopt a subsystematic perspective, and start asking what information we could realistically expect to be available for uptake by what kinds of cognitive systems. Thus the radical empirical stakes of BBT: the 'consciousness' that remains seems far, far easier to explain than the conundrum-riddled one we think we see.
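
For those who like their analogies executable, a trivial sketch (a toy and nothing more) of an 'information horizon': a subsystem that integrates a finite window of a larger environment has no datum marking the window's edge, so the boundary partitions experience without ever appearing in it.

```python
# A subsystem confined to a slice of its environment. The slice's limits do
# the partitioning, but no element of the slice represents where it ends.

class Subsystem:
    def __init__(self, environment, lo, hi):
        self._env, self._lo, self._hi = environment, lo, hi

    def experience(self):
        # Everything available for uptake, and only that.
        return self._env[self._lo:self._hi]

environment = list(range(100))
subject = Subsystem(environment, 40, 60)
print(subject.experience())  # twenty items; nothing in them says 'twenty of a hundred'
```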