Three Pound Brain

No bells, just whistling in the dark…


Science, Nihilism, and the Artistry of Nature (by Ben Cain)

by rsbakker


Technologically advanced societies may well destroy themselves, but there are two other reasons to worry that science rather than God will usher in the apocalypse, directly destroying us by destroying our will to live. The threat in question is nihilism: the loss of faith in our values, and thus the wholesale humiliation of all of us, owing to science’s tendency to falsify every belief that has traditionally comforted the masses. The two reasons to suspect that science entails nihilism are, first, that scientists find the world to be natural (fundamentally material, mechanical, and impersonal), whereas traditional values tend to have supernatural implications, and, second, that scientific methods famously bypass intuitions and feelings to arrive at the objective truth.

These two features of science, the naturalistic content of its theories and its objective methods of inquiry, might seem redundant, since the point about methods is that science is methodologically naturalistic. Thus the point about theoretical content might seem to come as no surprise: by definition, a theory that posits something supernatural wouldn’t be scientific. While scientists may be open to learning that the world isn’t a natural place, making that discovery would amount to ending, or at least transforming, the scientific mode of inquiry. Nevertheless, naturalism, the worldview that explains everything in materialistic and mechanistic terms, isn’t just an artifact of scientific methods. What were once thought to be ghosts and gods and spirits really did turn out to be natural phenomena.

Moreover, scientific objectivity seems a separate cause of nihilism in that, by showing us how to be objective, paradigmatic scientists like Galileo, Newton, and Darwin also showed us how to give up, at least temporarily, on our commonsense values. After all, in the moment when we’re following scientific procedures, we’re ignoring our preferences and foiling our biases. Of course, scientists still have feelings and personal agendas while they’re doing science; for example, they may be highly motivated to prove their pet theory. But they also know that by participating in the scientific process they’re holding their feelings to the ultimate test. Scientific methods objectify not just the phenomenon but the scientist; as a functionary in the institution, she must follow strict procedures, recording the data accurately, thinking logically, and publishing the results, making her scientific work as impersonal as the rest of the natural world. In so far as nonscientists understand this source of science’s monumental success, we might come to question the worth of our subjectivity, of our private intuitions, wishes, and dreams, which scientific methods brush aside as so many distortions.

Despite the imperative to take scientists as our model thinkers in the Age of Reason, we might choose to ignore these two threats to our naïve self-image. Nevertheless, the fear is that distraction, repression, and delusion might work only for so long before the truth outs. You might think, on the contrary, that science doesn’t entail nihilism, since science is a social enterprise and thus it has a normative basis. Scientists are pragmatic and so they evaluate their explanations in terms of rational values of simplicity, fruitfulness, elegance, utility, and so on. Still, the science-centered nihilist can reply, those values might turn out to be mechanisms, as scientists themselves would discover, in which case science would humiliate not just the superstitious masses but the pragmatic theorists and experimenters as well. That is, science would refute not only the supernaturalist’s presumptions but the elite instrumentalist’s view of scientific methods. Science would become just another mechanism in nature and scientific theories would have no special relationship with the facts since from this ultra-mechanistic “perspective,” not even scientific statements would consist of symbols that bear meaning. The scientific process would be seen as consisting entirely of meaningless, pointless, and amoral causal relations—just like any other natural system.

I think, then, this sort of nihilist can resist that pragmatic objection to the suspicion that science entails nihilism and thus poses a grave, still largely unappreciated threat to society. There’s another objection, though, which is harder to discount. The very cognitive approach that is indispensable to scientific discovery, the objectification of phenomena, which is to say the analysis of any pattern in impersonal terms of causal relations, is itself a source of certain values. When we objectify something we’re thereby well-positioned to treat that thing as having a special value, namely an aesthetic one. Objectification overlaps with the aesthetic attitude, which is the attitude we take up when we decide to evaluate something as a work of art, and thus objects, as such, are implicitly artworks.

 

Scientific Objectification and the Aesthetic Attitude

 

There’s a lot to unpack there, so I’ll begin by explaining what I mean by the “aesthetic attitude.” This attitude is explicated differently by Kant, Schopenhauer, and others, but the main idea is that something becomes an artwork when we adopt a certain attitude towards it. The attitude is a paradoxical one, because it involves a withholding of personal interest in the object and yet also a desire to experience the object for its own sake, based on the assumption that such an experience would be rewarding. When an observer is disinterested in experiencing something, but chooses to experience it because she’s replaced her instrumental or self-interested perspective with an object-oriented one so that she wishes to be absorbed by what the object has to offer, as it were, she’s treating the object as a work of art. And arguably, that’s all it means for something to be art.

For example, if I see a painting on a wall and I study it up close with a view to stealing it, thinking all the while of how economically valuable the painting is, I’m personally interested in the painting and thus I’m not treating it as art; for me the painting is a commodity. Suppose instead that I have no ulterior motive as I look at the painting, but I’m bored by it, so I’m not passively letting the painting pour its content into me, as it were. I have no respect for such an experience in this case, and I’m not giving the painting a fair chance to captivate my attention; I’m giving it only a cursory glance, because I lack the selfless interest in letting the painting hold all of my attention, and so I don’t anticipate the peculiar pleasure from perceiving it that we associate with an aesthetic experience. In that case too, I’m not treating the painting as art. Whether it’s a painting, a song, a poem, a novel, or a film, the object becomes an artwork when it’s regarded as such, which requires that the observer adopt this special attitude towards it.

Now, scientific objectivity plainly isn’t identical to the aesthetic attitude. After all, regardless of whether scientists think of nature as beautiful when they’re studying the evidence or performing experiments or formulating mechanistic explanations, they do have at least one ulterior motive. Some scientists may have an economic motive, others may be after prestige, but all scientists are interested in understanding how systems work. Their motive, then, is a cognitive one—which is why they follow scientific procedures, because they believe that scientific objectification (mechanistic analysis, careful collection of the data, testing of hypotheses with repeatable experiments, and so on) is the best means of achieving that goal.

However, this cognitive interest posits a virtual aesthetic stance as the means to achieve knowledge. Again, scientists trust that their personal interests are irrelevant to scientific truth and that regardless of how they prefer the world to be, the facts will emerge as long as the scientific methods of inquiry are applied with sufficient rigor. To achieve their cognitive goal, scientists must downplay their biases and personal feelings, and indeed they expect that the phenomenon will reveal its objective, real properties when it’s scientifically scrutinized. The point of science is for us to get out of the way, as much as possible, to let the world speak with its own voice, as opposed to projecting our fantasies and delusions onto the world. Granted, as Kant explained, we never hear that voice exactly—what Pythagoras called the music of the spheres—because in the act of listening to it or of understanding it, we apply our species-specific cognitive faculties and programs. Still, the point is that the institution of science is structured in such a way that the facts emerge because the scientific form of explanation circumvents the scientists’ personalities. This is the essence of scientific objectivity: in so far as they think logically and apply the other scientific principles, scientists depersonalize themselves, meaning that they remove their character from their interaction with some phenomenon and make themselves functionaries in a larger system. This system is just the one in which the natural phenomenon reveals its causal interrelations thanks to the elimination of our subjectivity which would otherwise personalize the phenomenon, adding imaginary and typically supernatural interpretations which blind us to the truth.

And when scientists depersonalize themselves, they open themselves up to the phenomenon: they study it carefully, taking copious notes, using powerful technologies to peer deeply into it, and isolating the variables by designing sterile environments to keep out background noise. This is very like taking up the aesthetic attitude, since the art appreciator too becomes captivated by the work itself, getting lost in its objective details as she sets aside any personal priority she may have. Both the art appreciator and the scientist are personally disinterested when they inspect some object, although the scientist is often just functionally or institutionally so, and both are interested in experiencing the thing for its own sake, although the appreciator does so for the aesthetic reward whereas the scientist expects a cognitive one. Both objectify what they perceive in that they intend to discern the patterns, even the subtlest ones, in what’s actually there in front of them, whether on the stage, in the picture frame, or on the novel’s pages, in the case of fine art, or in the laboratory or the wild in the case of science. Thus, art appreciators speak of the patterns of balance and proportion, while scientists focus on causal relations. And the former are rewarded with the normative experience of beauty or are punished with a perception of ugliness, as the case may be, while the latter speak of cognitive progress, of science as the premier way of discovering the natural facts, and indeed of the universality of their successes.

Here, then, is an explanation of what David Hume called the curious generalization that occurs in inductive reasoning, when we infer that because some regularity holds in some cases, therefore it likely holds in all cases. We take our inductive findings to have universal scope because when we reason in that way, we’re objectifying rather than personalizing the phenomenon, and when we objectify something we’re virtually taking up the aesthetic attitude towards it. Finally, when we take up such an attitude, we anticipate a reward, which is to say that we assume that objectification is worthwhile—not just for petty instrumental reasons, but for normative ones, which is to say that objectification functions as a standard for everyone. When you encounter a wonderful work of art, you think everyone ought to have the same experience and that someone who isn’t as moved by that artwork is failing in some way. Likewise, when you discover an objective fact of how some natural system operates, you think the fact is real and not just apparent, that it’s there universally for anyone on the planet to confirm.

Of course, inductive generalization is based also on metaphysical materialism, on the assumptions that the world is made of atoms and that a chunk of matter is just the sort of thing to hold its form and to behave in regular ways regardless of who’s observing it, since material things are impersonal and thus they lack any freedom to surprise. But scientists persist in speaking of their cognitive enterprise as progressive, not just because they assume that science is socially useful, but because scientific findings transcend our instrumental motives since they allow a natural system to speak mainly for itself. Moreover, scientists persist in calling those generalizations laws, despite the unfortunate personal (theistic) connotations, given the comparison with social laws. These facts indicate that inductive reasoning isn’t wholly rational, after all, and that the generalizations are implicitly normative (which isn’t to say moral), because the process of scientific discovery is structurally similar to the experience of art.

 

Natural Art and Science’s True Horror

 

Some obvious questions remain. Are natural phenomena exactly the same as fine artworks? No, since the latter are produced by minds whereas the former are generated by natural forces and elements, and by the processes of evolution and complexification. Does this mean that calling natural systems works of art is merely analogical? No, because the similarity in question isn’t accidental; rather, it’s due to the above theory of art, which says that art is nothing more than what we find when we adopt the aesthetic attitude towards it. According to this account, art is potentially everywhere and how the art is produced is irrelevant.

Does this mean, though, that aesthetic values are entirely subjective, that whether something is art is all in our heads since it depends on that perspective? The answer to this question is more complicated. Yes, the values of beauty and ugliness, for example, are subjective in that minds are required to discover and appreciate them. But notice that scientific truth is likewise just as subjective: minds are required to discover and to understand such truth. What’s objective in the case of scientific discoveries is the reality that corresponds to the best scientific conclusions. That reality is what it is regardless of whether we explain it or even encounter it. Likewise, what’s objective in the case of aesthetics is something’s potential to make the aesthetic appreciation of it worthwhile. That potential isn’t added entirely by the art appreciator, since that person opens herself up to being pleased or disappointed by the artwork. She hopes to be pleased, but the art’s quality is what it is and the truth will surface as long as she adopts the aesthetic attitude towards it, ignoring her prejudices and giving the art a chance to speak for itself, to show what it has to offer. Even if she loathes the artist, she may grudgingly come to admit that he’s produced a fine work, as long as she’s virtually objective in her appreciation of his work, which is to say as long as she treats it aesthetically and impersonally for the sake of the experience itself. Again, scientific objectivity differs slightly from aesthetic appreciation, since scientists are interested in knowledge, not in pleasant experience. But as I’ve explained, that difference is irrelevant since the cognitive agenda compels the scientist to subdue or to work around her personality and to think objectively—just like the art beholder.

So do beauty and ugliness exist as objective parts of the world? As potentials to reward or to punish the person who takes up anything like the aesthetic attitude, including a stance of scientific objectification, given the extent of the harmony or disharmony in the observed patterns, for example, I believe the answer is that those aesthetic properties are indeed as real as atoms and planets. The objective scientist is rewarded ultimately with knowledge of how nature works, while someone in the grip of the aesthetic attitude is rewarded (or punished) with an experience of the aesthetic dimension of any natural or artificial product. That dimension is found in the mechanical aspect of natural systems, since aesthetic harmony requires that the parts be related in certain ways to each other so that the whole system can be perceived as sublime or otherwise transcendent (mind-blowing). Traditional artworks are self-contained and science likewise deals largely with parts of the universe that are analyzed or reduced to systems within systems, each studied independently in artificial environments that are designed to isolate certain components of the system.

Now, such reduction is futile in the case of chaotic systems, but the grandeur of such systems is hardly lessened when the scientist discovers how a system that is sensitive to initial conditions evolves unpredictably even though its dynamics are defined by a mathematical formula. Indeed, chaotic systems are comparable to modern and postmodern art as opposed to the more traditional kind. Recent, highly conceptual art, or the nonrepresentational kind that explores the limits of the medium, is about as unpredictable as a chaotic system. So the aesthetic dimension is found not just in part-whole relations, and thus in beauty in the sense of harmony, but in free creativity. Modern art and science are both institutions that idealize the freedom of thought. Freed from certain traditions, artists now create whatever they’re inspired to create; they’re free to experiment, not to learn the natural facts but to push the boundaries of human creativity. Likewise, modern scientists are free to study whatever they like (in theory). And just as such modernists renounce their personal autonomy for the sake of their work, giving themselves over to their muse, to their unconscious inclinations (somewhat like Zen Buddhists, who abhor the illusion of rational self-control), or instead to the rigors of institutional science, nature reveals its mindless creativity when chaotic systems emerge in its midst.
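To make the sensitivity in question concrete, here is a minimal sketch in Python (my illustration, not Cain’s) using the logistic map, a textbook chaotic system: two trajectories that begin a millionth apart soon bear no resemblance to one another, even though every step is fixed by the same simple formula.

```python
# A minimal illustration (my example, not Cain's): the logistic map,
# a one-line formula whose trajectories are deterministic yet diverge
# unpredictably from nearly identical starting points.

def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map, chaotic at r = 4."""
    return r * x * (1.0 - x)

a, b = 0.300000, 0.300001   # two initial conditions differing by one millionth
for step in range(1, 26):
    a, b = logistic(a), logistic(b)
    if step % 5 == 0:
        print(f"step {step:2d}: {a:.6f} vs {b:.6f}")

# Within a few dozen steps the trajectories bear no resemblance to one
# another: fully defined by a mathematical formula, yet unpredictable.
```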

But does the scientist actually posit aesthetic values while doing science, given that scientific objectification isn’t identical with the aesthetic attitude? Well, the scientist would generally be too busy doing science to attend to the aesthetic dimension. But it’s no accident that mathematicians are disproportionately Platonists, that early modern scientists saw the cosmic order as attesting to God’s greatness, or that postmodern scientists like Neil deGrasse Tyson, who hosts the rebooted television show Cosmos, labour to convince the average American that naturalism ought to be enough of a religion for them, because the natural facts are glorious if not technically miraculous. The question isn’t whether scientists supply the world with aesthetic properties, like beauty or ugliness, since those properties preexist science as objective probabilities of uplifting or depressing anyone who takes up the aesthetic attitude, which attitude is practically the same as objectivity. Instead, the question here might be whether scientific objectivity compels the scientist to behold a natural phenomenon as art. Assuming there are nihilistic scientists, the answer would have to be no. The reason for this would be the difference in social contexts, which accounts for the difference between the goals and rewards. Again, the art appreciator wants a certain refined pleasure whereas the scientist wants knowledge. But the point is that the scientist is poised to behold natural systems as artworks, just in so far as she’s especially objective.

Finally, we should return to the question of how this relates to nihilism. The fear, raised above, was that because science entails nihilism, the loss of faith in our values and traditions, scientists threaten to undermine the social order even as they lay bare the natural one. I’ve questioned the premise, since objectivity entails instead the aesthetic attitude which compels us to behold nature not as arid and barren but as rife with aesthetic values. Science presents us with a self-shaping universe, with the mindless, brute facts of how natural systems work that scientists come to know with exquisite attention to detail, thanks to their cognitive methods which effectively reveal the potential of even such systems to reward or to punish someone with an aesthetic eye. For every indifferent natural system uncovered by science, we’re well-disposed to appreciating that system’s aesthetic quality—as long as we emulate the scientist and objectify the system, ignoring our personal interests and modeling its patterns, such as by reducing the system to mechanical part-whole relations. The more objective knowledge we have, the more grist for the aesthetic mill. This isn’t to say that science supports all of our values and traditions. Obviously science threatens some of them and has already made many of them untenable. But science won’t leave us without any value at all. The more objective scientists are and the more of physical reality they disclose, the more we can perceive the aesthetic dimension that permeates all things, just by asking for pleasure rather than knowledge from nature.

There is, however, another great fear that should fill in for the nihilistic one. Instead of worrying that science will show us why we shouldn’t believe there’s any such thing as value, we might wonder whether, given the above, science will ultimately present us with a horrible rather than a beautiful universe. The question, then, is whether nature will indeed tend to punish or to reward those of us with aesthetic sensibilities. What is the aesthetic quality of natural phenomena in so far as they’re appreciated as artworks, as aesthetically interpretable products of undead processes? Is the final aesthetic judgment of nature an encouraging, life-affirming one that justifies all the scientific work that’s divorced the facts from our mental projections or will that judgment terrorize us worse than any grim vision of the world’s fundamental neutrality? Optimists like Richard Dawkins, Carl Sagan and Tyson think the wonders of nature are uplifting, but perhaps they’re spinning matters to protect science’s mystique and the secular humanistic myth of the progress of modern, science-centered societies. Perhaps the world’s objectification curses us not just with knowledge of many unpleasant facts of life, but with an experience of the monstrousness of all natural facts.

Neuroscience as Socio-Cognitive Pollution

by rsbakker

Want evidence of the Semantic Apocalypse? Look no further than your classroom.

As the etiology of more and more cognitive and behavioural ‘deficits’ is mapped, more and more of what once belonged to the realm of ‘character’ is being delivered to the domain of the ‘medical.’ This is why professors and educators more generally find themselves institutionally obliged to make more and more ‘accommodations,’ as well as why they find their once personal relations with students becoming ever more legalistic, ever more structured to maximally deflect institutional responsibility. Educators relate with students in an environment that openly declares their institutional incompetence regarding medicalized matters, thus providing students with a failsafe means to circumvent their institutional authority. This short-circuit is brought about by the way mechanical, or medical, explanations of behaviour impact intuitive/traditional notions regarding responsibility. Once cognitive or behavioural deficits are redefined as ‘conditions,’ it becomes easy to argue that treating those possessing the deficit the same as those who do not amounts to ‘punishing’ them for something they ‘cannot help.’ The professor is thus compelled to ‘accommodate’ to level the playing field, in order to be moral.

On Blind Brain Theory, this trend is part and parcel of the more general process of ‘social akrasis,’ the becoming incompatible of knowledge and experience. The adaptive functions of morality turn on certain kinds of ignorance, namely, ignorance of the very kind of information driving medicalization. Once the mechanisms underwriting some kind of ‘character flaw’ are isolated, that character flaw ceases to be a character flaw, and becomes a ‘condition.’ Given pre-existing imperatives to grant assistance to those suffering conditions, behaviour once deemed transgressive becomes symptomatic, and moral censure becomes immoral. Character flaws become disabilities. The problem, of course, is that all transgressive behaviour—all behaviour period, in fact—can be traced back to various mechanisms, raising the question, ‘Where does accommodation end?’ Any disparity in classroom performance can be attributed to disparities between neural mechanisms.

The problem, quite simply, is that the tools in our basic socio-cognitive toolbox are adapted to solve problems in the absence of mechanical cognition—they literally require our blindness to certain kinds of facts to function reliably. We are primed ‘to hold responsible’ those who ‘could have done otherwise’—those who have a ‘choice.’ Choice, quite famously, requires some kind of fictional discontinuity between us and our precursors, a discontinuity that only ignorance and neglect can maintain. ‘Holding responsible,’ therefore, can only retreat before the advance of medicalization, insofar as the latter involves the specification of various behavioural precursors.

The whole problem of this short circuit—and the neuro-ethical mire more generally, in fact—can be seen as a socio-cognitive version of a visual illusion, where the atypical triggering of different visual heuristics generates conflicting visual intuitions. Medicalization stumps socio-cognition in much the same way the Müller-Lyer illusion stumps the eye: it provides atypical (evolutionarily unprecedented, in fact) information, information that our socio-cognitive systems are adapted to solve without. Causal information regarding neurophysiological function triggers an intuition of moral exemption regarding behaviour that could never have been solved as such in our evolutionary history. Neuroscientific understanding of various behavioural deficits, however defined, cues the application of a basic, heuristic capacity within a historically unprecedented problem-ecology. If our moral capacities have evolved to solve problems neglecting the brains involved, to work around the lack of brain information, then it stands to reason that the provision of that information would play havoc with our intuitive problem-solving. Brain information, you could say, is ‘non-ecofriendly,’ a kind of ‘informatic pollutant’ in the problem-ecologies moral cognition is adapted to solve.

The idea that heuristic cognition generates illusions is now an old one. In naturalizing intentionality, Blind Brain Theory allows us to see how the heuristic nature of intentional problem-solving regimes means they actually require the absence of certain kinds of information to properly function. Adapted to solve social problems in the absence of any information regarding the actual functioning of the systems involved, our socio-cognitive toolbox literally requires that certain information not be available to function properly. The way this works can be plainly seen with the heuristics governing human threat detection, say. Since our threat detection systems are geared to small-scale, highly interdependent social contexts, the statistical significance of any threat information is automatically evaluated against a ‘default village.’ Our threat detection systems, in other words, are geared to problem-ecologies lacking any reliable information regarding much larger populations. To the extent that such information ‘jams’ reliable threat detection (incites irrational fears), one might liken such information to pollution, to something ecologically unprecedented that renders previously effective cognitive adaptations ineffective.
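To illustrate the ‘default village’ point, here is a toy model in Python (my own construction, with every number invented for illustration, not anything from Blind Brain Theory): a detector that evaluates threat reports as if they came from a small community works in its native ecology and alarms constantly when fed reports sampled, at the same per-person rate, from a media-scale population.

```python
import random

# A toy model (mine, not Bakker's): a threat detector calibrated to a
# small 'default village' reads every threat report as if it came from
# one's own ~150-person community. All numbers are illustrative.

random.seed(1)

VILLAGE_SIZE = 150        # rough ancestral community size (assumption)
THREAT_RATE = 0.005       # per-person chance of generating a threat report
ALARM_THRESHOLD = 5       # reports needed before the heuristic alarms

def alarmed(reports: int) -> bool:
    """The heuristic: evaluate report counts against the default village."""
    return reports >= ALARM_THRESHOLD

# Ancestral ecology: reports really do come from ~150 people.
village_reports = sum(random.random() < THREAT_RATE for _ in range(VILLAGE_SIZE))

# Modern media ecology: the same per-person rate, sampled from a million
# strangers, but the heuristic still reads the count against the village.
media_reports = sum(random.random() < THREAT_RATE for _ in range(1_000_000))

print(alarmed(village_reports))   # almost always False: the heuristic works
print(alarmed(media_reports))     # True: 'informatic pollution' jams detection
```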

I actually think ‘cognitive pollution’ is definitive of modernity, that all modern decision-making occurs in information environments, many of them engineered, that cut against our basic decision-making capacities. The ‘ecocog’ ramifications of neuroscientific information, however, promise to be particularly pernicious.

Our moral intuitions were always blunt instruments, the condensation of innumerable ancestral social interactions, selected for their consequences rather than their consistencies. Their resistance to any decisive theoretical regimentation—the mire that is ‘metaethics’—should come as no surprise. But throughout this evolutionary development, neurofunctional neglect remained a constant: at no point in our evolutionary history were our ancestors called on to solve moral problems possessing neurofunctional information. Now, however, that information has become an inescapable feature of our moral trouble-shooting, spawning ad hoc fixes that seem to locally serve our intuitions, while generating any number of more global problems.

A genuine social process is afoot here.

A neglect-based account suggests the following interpretation of what’s happening: as medicalization (biomechanization) continues apace, the social identity of the individual is progressively divided into the subject, the morally liable, and the abject, the morally exempt. Like a wipe in cinematic editing, the scene of the abject is slowly crawling across the scene of the subject, generating more and more breakdowns of moral cognition. Becoming abject doesn’t so much erase as displace liability: one individual’s exemption (such as you find in accommodation) from moral censure immediately becomes a moral liability for their compatriots. The paradoxical result is that even as we each become progressively more exempt from moral censure, we become progressively more liable to provide accommodation. Thus the slow accumulation of certain professional liabilities as the years wear on. Those charged with training and assessing their fellows will in particular face a slow erosion in their social capacity to censure—which is to say, evaluate—as accommodation and its administrative bureaucracies slowly continue to bloat, capitalizing on the findings of cognitive science.

The process, then, can be described as one where progressive individual exemption translates into progressive social liability: given our moral intuitions, exemptions for individuals mean liabilities for the crowd. Thus the paradoxical intensification of liability that exemption brings about: the process of diminishing performance liability is at once the process of increasing assessment liability. Censure becomes increasingly prone to trigger censure.

The erosion of censure’s public legitimacy is the most significant consequence of this socio-cognitive short-circuit I’m describing. Heuristic tool kits are typically whole package deals: we evolved our carrot problem-solving capacity as part of a larger problem-solving capacity involving sticks. As informatic pollutants destroy more and more of the stick’s problem-solving habitat, the carrots left behind will become less and less reliable. Thus, on a ‘zombie morality’ account, we should expect the gradual erosion of our social system’s ability to police public competence—a kind of ‘carrot drift.’

This is how social akrasis, the psychotic split between the nihilistic how and fantastic what of our society and culture, finds itself coded within the individual. Broken autonomy, subpersonally parsed. With medicalization, the order of the impersonal moves, not simply into the skull of the person, but into their performance as well. As the subject/abject hybrid continues to accumulate exemptions, it finds itself ever more liable to make exemptions. Since censure is communicative, the increasing liability of censure suggests a contribution, at least, to the increasing liability of moral communication, and thus, to the politicization of public interpersonal discourse.

How this clearly unsustainable trend ends depends on the contingencies of a socially volatile future. We should expect to witness the continual degradation of moral cognition’s capacity to solve problems in what amounts to an increasingly polluted information environment. Will we overcome these problems via some radical new understanding of social cognition? Or will this lead to some kind of atavistic backlash, the institution of some kind of informatic hygiene—an imposition of ignorance on the public? I sometimes think that the kind of ‘liberal atrocity tales’ I seem to endlessly encounter among my nonacademic peers point in this direction. For those ignorant of the polluting information, the old judgments obviously apply, and stories of students not needing to give speeches in public-speaking classes, or homeless individuals being allowed to dump garbage in the river, float like sparks from tongue to tongue, igniting the conviction that we need to return to the old ways, thus convincing who knows how many to vote directly against their economic interests. David Brooks, protégé of William F. Buckley and conservative columnist for The New York Times, often expresses amazement at the way the American public continues to drift to the political right, despite the way fiscally conservative reengineering of the market continues to erode its bargaining power. Perhaps the identification of liberalism with some murky sense of the process described above has served to increase the rhetorical appeal of conservatism…

The sense that someone, somewhere, needs to be censured.

The Metacritique of Reason

by rsbakker


 

Whether the treatment of such knowledge as lies within the province of reason does or does not follow the secure path of a science, is easily to be determined from the outcome. For if, after elaborate preparations, frequently renewed, it is brought to a stop immediately it nears its goal; if often it is compelled to retrace its steps and strike into some new line of approach; or again, if the various participants are unable to agree in any common plan of procedure, then we may rest assured that it is very far from having entered upon the secure path of a science, and is indeed a merely random groping.  Immanuel Kant, The Critique of Pure Reason, 17.

The moral of the story, of course, is that this description of Dogmatism’s failure very quickly became an apt description of Critical Philosophy as well. As soon as others saw all the material inferential wiggle room in the interpretation of condition and conditioned, it was game over. Everything that damned Dogmatism in Kant’s eyes now characterizes his own philosophical inheritance.

Here’s a question you don’t come across everyday: Why did we need Kant? Why did philosophy have to discover the transcendental? Why did the constitutive activity of cognition elude every philosopher before the 18th Century? The fact we had to discover it means that it was somehow ‘always there,’ implicit in our experience and behaviour, but we just couldn’t see it. Not only could we not see it, we didn’t even realize it was missing, we had no inkling we needed to understand it to understand ourselves and how we make sense of the world. Another way to ask the question of the inscrutability of the ‘transcendental,’ then, is to ask why the passivity of cognition is our default assumption. Why do we assume that ‘what we see is all there is’ when we reflect on experience?

Why are we all ‘naive Dogmatists’ by default?


It’s important to note that no one but no one disputes that it had to be discovered. This is important because it means that no one disputes that our philosophical forebears once uniformly neglected the transcendental, that it remained for them an unknown unknown. In other words, both the Intentionalist and the Eliminativist agree on the centrality of neglect in at least this one regard. The transcendental (whatever it amounts to) is not something that metacognition can readily intuit—so much so that humans engaged in thousands of years of ‘philosophical reflection’ without the least notion that it even existed. The primary difference is that the Intentionalist thinks they can overcome neglect via intuition and intellection, that theoretical metacognition (philosophical reflection), once alerted to the existence of the transcendental, suddenly somehow possesses the resources to accurately describe its structure and function. The Eliminativist, on the other hand, asks, ‘What resources?’ Lay them out! Convince me! And more corrosively still, ‘How do you know you’re not still blinkered by neglect?’ Show me the precautions!

The Eliminativist, in other words, pulls a Kant on Kant and demands what amounts to a metacritique of reason.

The fact is, short of this accounting of metacognitive resources and precautions, the Intentionalist has no way of knowing whether or not they’re simply a ‘Stage-Two Dogmatist,’ whether their ‘clarity,’ like the specious clarity of the Dogmatist, isn’t simply the product of neglect—a kind of metacognitive illusion in effect. For the Eliminativist, the transcendental (whatever its guise) is a metacognitive artifact. For them, the obvious problems the Intentionalist faces—the supernaturalism of their posits, the underdetermination of their theories, the lack of decisive practical applications—are all symptomatic of inquiry gone wrong. Moreover, they find it difficult to understand why the Intentionalist would persist in the face of such problems given only a misplaced faith in their metacognitive intuitions—especially when the sciences of the brain are in the process of discovering the actual constitutive activity responsible! You want to know what’s really going on ‘implicitly’? Ask a cognitive neuroscientist. We’re just toying with our heuristics out of school otherwise.

We know that conscious cognition involves selective information uptake for broadcasting throughout the brain. We also know that no information regarding the astronomically complex activities constitutive of conscious cognition as such can be so selected and broadcast. So it should come as no surprise whatsoever that the constitutive activity responsible for experience and cognition eludes experience and cognition—that the ‘transcendental,’ so-called, had to be discovered. More importantly, it should come as no surprise that this constitutive activity, once discovered, would be systematically misinterpreted. Why? The philosopher ‘reflects’ on experience and cognition, attempts to ‘recollect’ them in subsequent moments of experience and cognition, in effect, and realizes (as Hume did regarding causality, say) that the information available cannot account for the sum of experience and cognition: the philosopher comes to believe (beginning most famously with Kant) that experience does not entirely beget experience, that the constitutive constraints on experience somehow lie orthogonal to experience. Since no information regarding the actual neural activity responsible is available, and since, moreover, no information regarding this lack is available, the philosopher presumes these orthogonal constraints must conform to their metacognitive intuitions. Since the resulting constraints are incompatible with causal cognition, they seem supernatural: transcendental, virtual, quasi-transcendental, aspectual, what have you. The ‘implicit’ becomes the repository of otherworldly constraining or constitutive activities.
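The structural point can be caricatured in a few lines of Python (a deliberately crude sketch of my own, not part of BBT’s formal apparatus): whatever a selective broadcaster selects, no channel carries information about the selecting itself, so the constitutive activity never shows up in what gets broadcast.

```python
from typing import Dict, List

# A crude sketch (mine, not a formal model): a 'workspace' samples
# high-salience readings from its input channels and broadcasts them.
# No channel reports on the sampling machinery itself, so that
# machinery cannot figure in anything the workspace broadcasts.

def select_and_broadcast(channels: Dict[str, List[float]], capacity: int) -> Dict[str, float]:
    """Broadcast the peak reading from each of the `capacity` most salient channels."""
    ranked = sorted(channels, key=lambda name: max(channels[name]), reverse=True)
    return {name: max(channels[name]) for name in ranked[:capacity]}

channels = {
    "vision": [0.9, 0.4],
    "audition": [0.6, 0.1],
    "proprioception": [0.2],
    # Absent by construction: any channel carrying information about
    # select_and_broadcast itself. Whatever gets broadcast, the
    # selecting activity stays dark -- it has to be *discovered*.
}

print(select_and_broadcast(channels, capacity=2))
# {'vision': 0.9, 'audition': 0.6}
```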

Philosophy had to discover the transcendental because of metacognitive neglect—on this fact, both the Intentionalist and the Eliminativist agree. The Eliminativist simply takes the further step of holding neglect responsible for the ontologically problematic, theoretically underdetermined, and practically irrelevant character of Intentionalism. Far from what Kant supposed, Critical Philosophy, in all its incarnations, historical and contemporary, simply repeats, rather than solves, these sins of Dogmatism. The reason for this, the Eliminativist says, is that it overcomes one metacognitive illusion only to run afoul a cluster of others.

This is the sense in which Blind Brain Theory can be seen as completing as much as overthrowing the Kantian project. Though Kant took cognitive dogmatism, the assumption of cognitive simplicity and passivity, as his target, he nevertheless ran afoul metacognitive dogmatism, the assumption of metacognitive simplicity and passivity. He thought—as his intellectual heirs still think—that philosophical reflection possessed the capacity to apprehend the superordinate activity of cognition, that it could accurately theorize reason and understanding. We now possess ample empirical grounds to think this is simply not the case. There’s the mounting evidence comprising what Princeton psychologist Emily Pronin has termed the ‘Introspection Illusion,’ direct evidence of metacognitive incompetence, but the fact is, every nonconscious function experimentally isolated by cognitive science illuminates another constraining/constitutive cognitive activity utterly invisible to philosophical reflection, another ignorance that the Intentionalist believes has no bearing on their attempts to understand understanding.

One can visually schematize our metacognitive straits in the following way:

[Figure: Metacognitive Capacity]

This diagram simply presumes what natural science presumes: that you are a complex organism biomechanically synchronized with your environments. Light hits your retina, sound hits your eardrum, neural networks communicate, and behaviours are produced. Imagine your problem-solving power set on a swivel and swung 360 degrees across the field of all possible problems, which is to say problems involving lateral (nonfunctionally entangled) environmental systems as well as problems involving medial (functionally entangled) enabling systems, such as those comprising your brain. This diagram, then, visualizes the loss and gain in ‘cognitive dimensionality’—the quantity and modalities of information available for problem solving—as one swings from the third-person lateral to the first-person medial. Dimensionality peaks with external cognition because of the power and ancient evolutionary pedigree of the systems involved. The dimensionality plunges for metacognition, on the other hand, because of medial neglect, the way structural complicity, astronomical complexity, and evolutionary youth effectively render the brain unwittingly blind to itself.

This is why the blue line tracking our assumptive or ‘perceived’ medial capacity in the figure peaks where our actual medial capacity bottoms out: with the loss in dimensionality comes the loss in the ability to assess reliability. Crudely put, the greater the cognitive dimensionality, the greater the problem-solving capacity, and the greater the error-signalling capacity. And conversely, the less the cognitive dimensionality, the less the problem-solving capacity, and the less the error-signalling capacity. The absence of error-signalling means that cognitive consumption of ineffective information will be routine, impossible to distinguish from the consumption of effective information. This raises the spectre of ‘psychological anosognosia’ as distinct from the clinical kind, the notion that the very cognitive plasticity that allowed humans to develop ACH thinking has led to patterns of consumption (such as those underwriting ‘philosophical reflection’) that systematically run afoul medial neglect. Even though low dimensionality speaks to cognitive specialization, and thus to the likely ineffectiveness of cognitive repurposing, the lack of error-signalling means the information will be routinely consumed no matter what. Given this, one should expect ACH thinking (reason) to be plagued with the very kinds of problems that plague theoretical discourse outside the sciences now: the perpetual coming up short, the continual attempt to retrace steps taken, the interminable lack of any decisive consensus…

Or what Kant calls ‘random groping.’

The most immediate, radical consequence of this 360-degree view is that the opposition between the first-person and the third-person disappears. Since all the apparently supernatural characteristics rendering the first-person naturalistically inscrutable can now be understood as artifacts of neglect—illusions of problem-solving sufficiency—all the ‘hard problems’ posed by intentional phenomena simply evaporate. The metacritique of reason, far from pointing a way to any ‘science of the transcendental,’ shows how the transcendental is itself a dogmatic illusion, how cryptic things like the ‘a priori’ are obvious expressions of medial neglect, sources of constraint ‘from nowhere’ that baldly demonstrate our metacognitive incapacity to recognize our metacognitive incapacity. For all the prodigious problem-solving power of logic and mathematics, a quick glance at the philosophy of either is enough to assure you that no one knows what they are. Blind Brain Theory explains this remarkable contrast of insight and ignorance, how we could possess tools so powerful without any decisive understanding of the tools themselves.

The metacritique of reason, then, leads to what might be called ‘pronaturalism,’ a naturalism that can be called ‘progressive’ insofar as it continues to eschew the systematic misapplication of intentional cognition to domains that it cannot hope to solve—that continues the process of exorcising ghosts from the machinery of nature. The philosophical canon swallowed Kant so effortlessly that people often forget he was attempting to put an end to philosophy, to found a science worthy of the name, one which grounded both the mechanical and the ghostly. By rendering the ghostly the formal condition of any cognition of the mechanical, however, he situated his discourse squarely in the perpetually underdetermined domain of philosophy. His failure was inevitable.

The metacritique of reason makes the very same attempt, only this time anchored in the only real credible source of theoretical cognition we possess: the sciences. It allows us to peer through the edifying fog of our intentional traditions and to see ourselves, at long last, as wholly continuous with crazy shit like this…

[Image: Filamentary Map]

 

Zombie Interpretation: Eliminating Kriegel’s Asymmetry Argument

by rsbakker

Could zombie versions of philosophical problems, versions that eliminate all intentionality from the phenomena at issue, shed any light on those problems?

The only way to find out is to try.

Since I’ve been railing so much about the failure of normativism to account for its evidential basis, I thought it worthwhile to consider the work of a very interesting intentionalist philosopher, Uriah Kriegel, who sees the need quite clearly. The question could not be more simple: What justifies philosophical claims regarding the existence and nature of intentional phenomena? For Kriegel the most ‘natural’ and explanatorily powerful answer is observational contact with experiential intentional states. How else, he asks, can we come to know our intentional states short of experiencing them? In what follows I propose to consider two of Kriegel’s central arguments against the backdrop of ‘zombie interpretations’ of the very activities he considers, and in doing so, I hope to undermine not only his argument, but the general abductive strategy one finds intentionalists taking throughout philosophy more generally, the presumption that only theoretical accounts somehow involving intentionality can account for intentional phenomena.

In his 2011 book, The Sources of Intentionality, Kriegel attempts to remedy semantic externalism’s failure to naturalize intentionality via a carefully specified return to phenomenology, an account of how intentional concepts arise from our introspective ‘observational contact’ with mental states possessing intentional content. Experience, he claims, is intrinsically intentional. Introspective contact with this intrinsic intentionality is what grounds our understanding of intentionality, providing ‘anchoring instances’ for our various intentional concepts.

As Kriegel is quick to point out, such a thesis implies a crucial distinction between experiential intentionality, the kind of intentionality we experience, and nonexperiential intentionality, the kind of intentionality we ascribe without experiencing. This leads him to Davidson’s account of radical interpretation, and to what he calls the “remarkable asymmetry” between various ascriptions of intentionality. On radical interpretation as Davidson theorizes it, our attempts to interpret one another are so evidentially impoverished that interpretative success fundamentally requires assuming the rationality of our interlocutor—what he terms ‘charity.’ The ascription of some intentional state to another turns on the prior assumption that he or she believes, desires, fears and so on as they should, otherwise we would have no way of deciding among the myriad interpretations consistent with the meagre behavioural data available. Kriegel argues “that while the Davidsonian insight is cogent, it applies only to the ascription of non-experiential intentionality, as well as the ascription of experiential intentionality to others, but not to the ascription of experiential intentionality to oneself” (29). We require charity when it comes to ascribing varieties of intentionality to signs, others, and even our nonconscious selves, but not when it comes to ascribing intentionality to our own experiences. So why this basic asymmetry? Why do we have to attribute true beliefs and rational desires—take the ‘intentional stance’—with regards to others and our nonconscious selves, and not our consciously experienced selves? Why do we seem to be the one self-interpreting entity?

Kriegel thinks observational contact with our actual intentionality provides the most plausible answer, that “[i]nsofar as it is appropriate to speak of data for ascription here, the only relevant datum seems to be a certain deliverance of introspection” (33). He continues:

There is thus a contrast between the mechanics of first-person [experiential]-intentional ascription and third-person … intentional ascription. The former is based on endorsement of introspective seemings, the latter on causal inference from behavior. This is hardly deniable: as noted, when you ascribe to yourself a perceptual experience as of a table, you do not observe putative causal effects of your experience and infer on their basis the existence of a hidden experiential cause. Rather, you seem to make the ascription on the basis of observing, in some (not unproblematic) sense, the experience itself—observing, that is, the very state which you ascribe. The Sources of Intentionality, 33

The mechanics of first-person and third-person intentional cognition differ in that the latter requires explanatory posits like ‘hidden mental causes.’ Since self-ascription involves nothing hidden, no interpretation is required. And it is this elegant and intuitive explanation of first-person interpretative asymmetry that provides abductive warrant for the foundational argument of the text:

1. All the anchoring instances of intentionality are such that we have observational contact with them;

2. The only instances of intentionality with which we have observational contact are experiential-intentional states; therefore,

3. All anchoring instances of intentionality are experiential-intentional states. (38)

Given the abductive structure of Kriegel’s argument, those who dissent from either (1) or (2) need a better explanation of asymmetry. Those who deny the anchoring-instance model of concept acquisition will target (1), arguing, say, that concept acquisition is an empirical process requiring empirical research. Kriegel simply punts on this issue, claiming we have no reason to think that concept acquisition, no matter how empirically detailed the story turns out to be, is insoluble at this (armchair) level of generality. Either way, his position still enjoys the abductive warrant of explaining asymmetry.

For Kriegel, (2) is the most philosophically controversial premise, with critics either denying that we have any ‘observational contact’ with experiential-intentional states, or denying that we have observational contact with only such states. The problem faced by both angles, Kriegel points out, is that asymmetry still holds whether one denies (2) or not: we can ascribe intentional experiences to ourselves without requiring charity. If observational contact—the ‘natural explanation,’ Kriegel calls it—doesn’t lie at the root of this capacity, then what does?

For an eliminativist such as myself, however, the problem is more a matter of definition. I actually agree that suffering a certain kind of observational contact, namely one that systematically neglects tremendous amounts of information, can anchor our philosophical concept of intentionality. Kriegel is fairly dismissive of eliminativism in The Sources of Intentionality, and even then the eliminativism he dismisses acknowledges the existence of intentional experiences! As he writes, “if eliminativism cannot be acceptable unless a relatively radical interpretation of cognitive science is adopted, then eliminativism is not in good shape” (199). The problem is that this assumes cognitive science is itself in fine shape, when Kriegel himself emphatically asserts “that it is not doing fine” (A Hesitant Defence of Introspection, 3). Cognitive science is fraught with theoretical dispute, certainly more than enough (and for long enough!) to seriously entertain the possibility that something radical has been overlooked.

So the radicality of eliminativism is neither here nor there regarding its ‘shape.’ The real problem faced by eliminativism, which Kriegel glosses over, is abductive. Eliminativism simply cannot account for what seem to be obvious intentional phenomena.

Which brings me to zombies and what these kinds of issues might look like in their soulless, shuffling world…

In the zombie world I’m imagining, what Sellars called the ‘scientific image of man’ is the only true image. There quite simply is no experience or meaning or normativity as we intentionally characterize these things in our world. So zombies, in their world, possess only systematic causal relations to their environments. No transcendental rules or spooky functions haunt their brains. No virtual norms slumber in their community’s tacit gut. ‘Zombie knowledge’ is simply a matter of biomechanistic systematicity, having the right stochastic machinery to solve various problem ecologies. So although they use sounds to coordinate their behaviours, the efficacies involved are purely causal, a matter of brains conditioning brains. ‘Zombie language,’ then, can be understood as a means of resolving discrepancies via strings of mechanical code. Given only a narrow band of acoustic sensitivity, zombies constantly update their covariational schema relative to one another and their environments. They are ‘communicatively attuned.’

So imagine a version of radical zombie interpretation, where a zombie possessing one code—Blue—is confronted by another zombie possessing another code—Red. And now let’s ask the zombie version of Davidson’s question: What would it take for these zombies to become communicatively attuned?

Since the question is one of overcoming difference, it serves to recall what our zombies share: a common cognitive biology and environment. An enormous amount of evolutionary stage-setting underwrites the encounter. They come upon one another, in other words, differing only in code. And this is just to say that radical zombie interpretation occurs within a common attunement to each other and the world. They share both a natural environment and the sensorimotor systems required to exploit it. They also share powerful ‘brain-reading’ systems, a heuristic toolbox that allows them to systematically coordinate their behaviour with that of their zombie fellows without any common code. Even more, they share a common code apparatus, which is to say, the same system adapted to coordinate behaviours via acoustic utterances.

Given this ‘pre-established harmony’—common environment, common brain-reading and code-using biology—how might a code Blue zombie come to interpret (be systematically coordinated with) the utterances of a code Red zombie?

Since both zombies were once infant zombies, each has already undergone ‘code conditioning’; they have already tested innumerable utterances against innumerable environments, isolating and preserving robust covariances (and structural operators) on the way to acquiring their respective codes. At the same time, their brain-reading systems allow them to systematically coordinate their behaviours to some extent, to find a kind of basic attunement. All that remains is a matter of covariant sound substitution, of swapping the sounds belonging to code Blue for the sounds belonging to code Red, a process requiring little more than testing code-specific covariations against real-time environments. Perhaps radical zombie interpretation is not so radical after all!
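In outline, the process is mechanical enough to simulate. Here is a minimal sketch in Python, with everything in it (names, numbers, the co-occurrence rule) invented for illustration rather than drawn from the post: a code Blue zombie decodes Red utterances purely by tallying which features of jointly perceived scenes covary with which sounds.

```python
import random

# A minimal sketch (all names and numbers invented for illustration):
# 'radical zombie interpretation' as covariant sound substitution.
# Blue decodes Red's utterances by tallying which co-present things
# covary with which sounds across jointly perceived scenes. No
# 'charity', no 'ascription': just conditioning on covariance.

random.seed(0)

THINGS = ["rock", "tree", "water", "prey", "fire"]
RED = {thing: f"red-{i}" for i, thing in enumerate(THINGS)}  # Red's code

counts = {}  # Blue's tallies: (Red sound, co-present thing) -> count

for _ in range(200):
    scene = random.sample(THINGS, k=2)   # what both zombies currently perceive
    topic = random.choice(scene)         # what Red's utterance tracks
    sound = RED[topic]
    for thing in scene:                  # Blue credits every co-present thing
        counts[(sound, thing)] = counts.get((sound, thing), 0) + 1

# Blue 'interprets' each Red sound as the thing it most reliably covaries with.
decoded = {}
for (sound, thing), n in counts.items():
    if sound not in decoded or n > counts[(sound, decoded[sound])]:
        decoded[sound] = thing

print(all(decoded[RED[t]] == t for t in THINGS))  # expect True: attunement achieved
```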

The first thing to note is how the reliable coordination of behaviours is all that matters in this process: idiosyncrasies in their respective implementations of Red or Blue matter only insofar as they impact this coordination. The ‘synonymy’ involved is entirely coincident because it is entirely physical.

The second thing to note is how pre-established harmony is simply a structural feature of the encounter. These are just the problems that nature has already solved for our two intrepid zombies, what has to be the case for the problem of radical zombie interpretation to even arise. At no point do our zombies ‘attribute’ or ‘ascribe’ anything to their counterpart. Sensing another zombie simply triggers their zombie-brain-reading machinery, which modifies their behaviour and so on. There’s no ‘charity’ involved, no ‘attribution of rationality,’ just the environmental cuing of heuristic systems adapted to solve certain zombie-social environments.

Of course each zombie resorts to its brain-reading systems to behaviourally coordinate with its counterpart, but this is an automatic feature of the encounter, what happens whenever zombies detect zombies. Each engages in communicative troubleshooting behaviour in the course of executing some superordinate disposition to communicatively coordinate. Brains are astronomically complicated mechanisms—far too complicated for brains to intuit them as such. Thus the radically heuristic nature of zombie brain-reading. Thus the perpetual problem of covariational discrepancies. Thus the perpetual expenditure of zombie neural resources on the issue of other zombies.

Leading us to a third thing of note: how the point of radical zombie interpretation is to increase behavioural possibilities by rendering behavioural interactions more systematic. What makes this last point so interesting lies in the explanation it provides regarding why zombies need not first decode themselves to decode others. For a robust biomechanical system, ‘self-systematicity’ is simply a given. The whole problem of zombie interpretation resides in one zombie gaining some systematic purchase on other zombies in an effort to create some superordinate system—a zombie community. Asymmetry, in other words, is a structural given.

In radical zombie interpretation, then, not only do we have no need for ‘charity,’ we somehow manage to circumvent all the controversies pertaining to radical human interpretation.

Now of course the great zombie/human irony is that humans are everything that zombies are and more. So the question immediately becomes one of why radical human interpretation should prove so problematic when the zombie version of the same problem is not. While the zombie story certainly entails a vast number of technical details, it does not involve anything conceptually occult or naturalistically inexplicable. If mere zombies could avoid these problems using nothing more than zombie resources, why should humans find themselves perennially confounded?

This really is an extraordinary question. The intentionalist will cry foul, of course, reference all the obvious intentional phenomena pertaining to the communicative coordination of humans, things like rules and reasons and references and so on, and ask how this zombie fairy tale could possibly explain any of them. So even though this story of zombie interpretation provides, in outline at least, the very kind of explanation that Kriegel demands, it quite obviously throws out the baby with the bathwater in the course of doing so. Asymmetry becomes perspicuous, but now the whole of human intentional activity becomes impossible to explain (assuming that anything at this level has ever been genuinely explained). Zombie interpretation, in other words, wins the battle by losing the war.

It’s worth noting here the curious structure of the intentionalist’s abductive case. The idea is that we need a theoretical intentional account to explain human intentional activity. What warrants theoretical supernaturalism (or philosophy traditionally construed) is the matter-of-fact existence of everyday intentional phenomena (an existence that Kriegel thinks so obvious that on a couple of occasions he adduces arguments he claims he doesn’t need simply to bolster his case against skeptics such as myself). The curiosity, however, is that the ‘matter-of-fact existence of everyday intentional phenomena’ that at once “underscores the depth of eliminativism’s (quasi-) empirical inadequacy” (199) and motivates theoretical intentional accounts is itself a matter of theoretical controversy—just not for intentionalists! The problem with abductive appeals like Kriegel’s, in other words, is the way they rely on a prior theory of intentionality to anchor the need for theories of intentionality more generally.

This is what makes radical zombie interpretation out and out eerie. Because it does seem to be the case that zombies could achieve at least the same degree of communicative coordination absent any intentional phenomena at all. When you strip away the intentional glamour, when you simply look at the biology and the behaviour, it becomes hard to understand just what it is that humans do that requires anything over and above zombie biology and behaviour. Since some kind of gain in systematicity is the point of communicative coordination, it makes sense that zombies need not troubleshoot themselves in the course of troubleshooting other zombies. So it remains the case that radical zombie interpretation, analyzed at the same level of generality, seems to have a much easier time explaining the same degree of human communicative coordination, sans bébé, than does radical human interpretation, which, quite frankly, strands us with a host of further, intractable mysteries regarding things like ‘ascription’ and ‘emergence’ and ‘anomalous causation.’

What could be going on? When it comes to Kriegel’s ‘remarkable asymmetry,’ should we simply put our ‘zombie glasses’ on, or should we tough it out in the morass of intractable second-order accounts of intentionality on the basis of some ineliminable intentional remainder?

As Three Pound Brain regulars know, the eliminativism I’m espousing here is unique in that it arises, not out of concerns regarding the naturalistic inscrutability of intentional phenomena, but out of a prior, empirically grounded account of intentionality, what I’ve been calling Blind Brain Theory. On Blind Brain Theory, the impasse described above is precisely the kind of situation we should expect given the kind of metacognitive capacities we possess. By its lights, zombies just are humans, and so-called intentional phenomena are simply artifacts of metacognitive neglect, what high-dimensional zombie brain functions ‘look like’ when low-dimensionally sampled for deliberative metacognition. Brains are simply too complicated to be effectively solved by causal cognition, so we evolved specialized fixes, ways to manage our brain and others in the absence of causal cognition. Since the high-dimensional actuality of those specialized fixes outruns our metacognitive capacity, philosophical reflection confuses what little it can access with everything required, and so is duped into the entirely natural (but nonetheless extraordinary) belief that it possesses ‘observational contact’ with a special, irreducible order of reality. Given this, we should expect that attempts to theoretically solve radical interpretation via our ‘mind’ reading systems would generate more mystery than they would dispel.
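The ‘low-dimensional sampling’ claim can likewise be given a crude computational gloss. The numbers and the fixed-slice sampler below are purely my illustrative assumptions:

```python
# A toy gloss on metacognitive neglect: distinct high-dimensional brain
# states collapse into one and the same low-dimensional readout, and the
# readout carries no marker of everything it omits. Illustrative only.

import random

random.seed(2)

DIMS = 1000       # stand-in for a high-dimensional neural state
SAMPLED = 3       # the few dimensions metacognition can access

def metacognize(state):
    # Deliberative metacognition 'sees' only a tiny, fixed slice.
    return tuple(round(state[i], 1) for i in range(SAMPLED))

state_a = [random.random() for _ in range(DIMS)]
state_b = list(state_a)
for i in range(SAMPLED, DIMS):       # differ everywhere *except* the slice
    state_b[i] = random.random()

print(metacognize(state_a) == metacognize(state_b))   # True: identical for neglect
print(state_a == state_b)                             # False: physically distinct
```

Because the readout contains no marker of what it omits, the two states don’t merely look similar; each looks exhaustively described. That, and not mere error, is the signature of neglect.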

Blind Brain Theory, in other words, short circuits the abductive strategy of intentionalism. It doesn’t simply offer a parsimonious explanation of asymmetry; it proposes to explain all so-called intentional phenomena. It tells us what they are, why we’re prone to conceive them in the naturalistically incompatible ways we do, and why these conceptions generate the perplexities they do.

To understand how it does so, it’s worth considering what Kriegel himself thinks is the ‘weak link’ in his attempt to source intentionality: the problem of introspective access. In The Sources of Intentionality, Kriegel is at pains to point out that “one need not be indulging in any mystery-mongering about first-person access” to provide the kind of experiential observational contact that he needs. No version of introspective incorrigibility follows “from the assertion that we have introspective observational contact with our intentional experiences” (34). Even so, the question of just what kind of observational contact is required is one that he leaves hanging.

In his 2013 paper, ‘A Hesitant Defence of Introspection,’ Kriegel attempts to tie down this crucial loose thread by arguing for what he calls ‘introspective minimalism,’ an account of human introspective capacity that can weather what he terms ‘Schwitzgebel’s Challenge’: essentially, the question (arising out of Eric Schwitzgebel’s watershed Perplexities of Consciousness) of whether our introspective capacity, whatever it consists in, possesses any cognitive scientific value. He begins by arguing for the pervasive, informal role that introspection plays in the ‘context of discovery’ of the cognitive sciences. The question, however, is how introspection fits into the ‘context of justification’—the degree to which it counts as evidence as opposed to mere ‘inspiration.’ Given the obvious falsehood of what he terms ‘introspective maximalism,’ he sets out to save some minimalist version of introspection that can serve some kind of evidential role. He turns to olfaction to provide an analogy to the kind of minimal justification that introspection is capable of providing:

Suppose, for instance, that introspection turns out to be as trustworthy as our sense of smell, that is, as reliable and as potent as a normal adult human’s olfactory system. Then introspective minimalism would be vindicated. Normally, when we have an olfactory experience as of raspberries, it is more likely that there are raspberries in our immediate environment (than if we do not have such an experience). Conversely, when there are raspberries in our immediate environment, it is more likely that we would have an olfactory experience as of raspberries (than if there are none). So the ‘equireliability’ of olfaction and introspection would support introspective minimalism. Such equireliability is highly plausible. (8)

Kriegel’s argument is simply that introspecting some phenomenology reliably indicates the presence of that phenomenology the same way smelling raspberries reliably indicates the presence of raspberries. This is all that’s required, he thinks, to assert “that introspection affords us observational contact with our mental life” (13), and is thus “epistemically indispensable for any mature understanding of the mind” (13). It’s worth noting that Schwitzgebel is actually inclined to concede the analogy, suggesting that his own “dark pessimism about some of the absolutely most basic and pervasive features of consciousness, and about the future of any general theory of consciousness, seems to be entirely consistent with Uriah’s hesitant defense of introspection” (“Reply to Kriegel, Smithies, and Spener,” 4). He agrees, then, that introspection reliably tells us that we possess a phenomenology; he just doubts it reliably tells us what it consists in. Kriegel, on the other hand, thinks his introspective minimalism gives him the kind of ‘observational contact’ he needs to get his abductive asymmetry argument off the ground.
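To make the logical form of the claim explicit (the formalization is mine, not Kriegel’s), let $R$ stand for raspberries being present and $E$ for an olfactory experience as of raspberries. Introspective minimalism then asserts that introspection, with $\Phi$ for the presence of some phenomenology and $I$ for an introspective experience as of it, satisfies the same weak pair of inequalities:

$$P(R \mid E) > P(R \mid \neg E), \qquad P(E \mid R) > P(E \mid \neg R)$$

$$P(\Phi \mid I) > P(\Phi \mid \neg I), \qquad P(I \mid \Phi) > P(I \mid \neg \Phi)$$

Note how little the inequalities claim: introspection need only raise the probability that some phenomenology is present, which says nothing about how accurately it resolves what that phenomenology consists in. That is precisely the gap Schwitzgebel presses.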

But does it?

Once again, it pays to flip to the zombie perspective. The zombie olfactory system is a specialized system adapted to the detection of chemical residues in the immediate environment, so one might expect it to reliably detect the chemical residue left by raspberries. The zombie introspective system, likewise, is a specialized system adapted to the detection of brain events, so one might expect it to reliably detect those brain events. The first system reliably allows zombies to detect raspberries, and the second reliably allows zombies to detect activity in various parts of their zombie brains.

On this way of posing the problem, however, the disanalogy between the two systems all but leaps out at us. In fact, it’s hard to imagine two more disparate cognitive tasks than detecting something as simple as the chemical signature of raspberries versus something as complex as the machinations of the zombie brain. In point of fact, the brain is so astronomically complicated that it seems all but assured that zombie introspective capacity would be both fractionate and heuristic in the extreme, that it would consist of numerous fixes geared to a variety of problem-ecologies.

One way to repair the analogy would be to scale up the complexity of the problem faced by olfaction. It’s obvious, to give an example, that the information available for olfaction is far too low-dimensional, far too problem-specific, to anchor theoretical accounts of the biosphere. On this repaired analogy, then, we can say that just as zombie olfaction isn’t geared to the theoretical solution of the zombie biosphere, but rather to the detection of certain environmental obstacles and opportunities, it is almost certainly the case that zombie introspection isn’t geared to the theoretical solution of the zombie brain, but rather to more specific, environmentally germane tasks. Given this, we have no reason whatsoever to presume that what zombies metacognize and report possesses any ‘reliability and potency’ beyond very specific problem-ecologies—the same as with olfaction. On zombie introspection, then, we have no more reason to think that zombies could accurately metacognize the structure of their brain than they could accurately smell the structure of the world.

And this returns us to the whole question of Kriegel’s notion of ‘observational contact.’ Kriegel realizes that ‘introspection’ isn’t simply an all-or-nothing affair, that it isn’t magically ‘self-intimating’ and therefore admits of degrees of reliability—this is why he sets out to defend his minimalist brand. But he never pauses to seriously consider the empirical requirements of even such minimal introspective capacity.

In essence, what he’s claiming is that the kind of ‘observational contact’ available to philosophical introspection warrants complicating our ontology with a wide variety of (supernatural) intentional phenomena. Introspective minimalism, as he terms it, argues that we can metacognize some restricted set of intentional entities/relations with the same reliability that we cognize natural phenomena. We can sniff these things out, so it stands to reason that such things exist to be sniffed, that introspecting a phenomenology increases the chances that such phenomenology exists (as introspected). With zombie introspection, however, the analogy between olfaction and metacognition strained credulity given the vast disproportion in complexity between olfactory and metacognitive phenomena. It’s difficult to imagine how any natural system could possibly even begin to accurately metacognize the brain.

The difference Kriegel would likely press, however, is that we aren’t mindless zombies. Human metacognition, in other words, isn’t so much concerned with the empirical particulars of the brain as with the functional particulars of the conscious mind. Even though the notion of accurate zombie introspection is obviously preposterous, the notion of accurate human metacognition would seem to be a different question altogether: the question of what a human introspective capacity requires to accurately metacognize human ‘phenomenology’ or ‘mind.’

The difficulty here, famously, is that there seems to be no noncircular way to answer this question. Because we can’t find intentional phenomena anywhere in the natural world, theoretical metacognition monopolizes our every attempt to specify their nature. This effectively renders it impossible to assess the reliability of such metacognitive exercises apart from their ability to solve various kinds of problems. And the trouble is that the long history of introspectively motivated philosophical theorization (as opposed to other varieties of metacognition) regarding the nature of the intentional has only generated more problems. For some reason, the kind of metacognition involved in ‘philosophical reflection’ only seems to make matters worse when it comes to questions of intentional phenomena.

The zombie account of this second impasse is at once parsimonious and straightforward: phenomenology (or mind or what have you) is the smell, not the raspberry—that would be some systematic activity in the brain. It is absurd to think any evolved brain, zombie or human, could accurately cognize its own biomechanical operations the way it cognizes causal events in its environment. Kriegel himself agrees:

In fact cognitive science can partly illuminate why our introspective grasp of our inner world can be expected to be considerably weaker than our perceptual grasp of the external world. It is well-established that much of our perceptual grasp of the external world relies on calibration of information from different perceptual modalities. Our observation of our internal world, however, is restricted to a single source of information, and not the most powerful to begin with. (13)

And this is but one reason why the dimensionality of the mental is so low compared to the environmental. Given the evolutionary youth of human metacognition, the astronomical complexity of the human nervous system, to say nothing of the problems posed by structural complicity, we should suppose that our metacognitive capacity evolved opportunistically, that it amounts to a metacognitive version of what Todd and Gigerenzer (2012) would call a ‘heuristic toolbox,’ a collection of systems geared to solve specific problem-ecologies. Since we neglect this heuristic toolbox, we remain oblivious to the fact that we’re using a given cognitive tool at all, let alone to the limits of its effectiveness. Given that systematic theoretical reflection of the kind philosophers practice is an exaptation from cognitive capacities that predate recorded history, the adequacy of Kriegel’s ‘deliverances’ assumes that our evolved introspective capacity can solve unprecedented problems. This is a very real empirical question. For if it turns out that the problems posed by theoretical reflection are not the problems that intentional cognition can solve, neglect means we would have no way of knowing short of actual problem solving: the solution of problems that plainly can be solved. The inability to plainly solve a problem—like the mind-body problem, say—might then be used as a way to identify where we have been systematically misapplying certain tools, asking information adapted to the solution of some specific problem to contribute to the solution of a very different kind of problem.
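A crude simulation can show the shape of the worry (the example is entirely my own concoction, not Todd and Gigerenzer’s): a fast-and-frugal heuristic can be nearly infallible inside the problem-ecology it was tuned for and badly unreliable outside it, all without any internal signal that its ecology has changed.

```python
# A toy heuristic-toolbox illustration: 'louder is nearer' works so long
# as loudness covaries with proximity, and degrades silently when it
# doesn't. All parameters are illustrative assumptions.

import random

random.seed(1)

def louder_is_nearer(volume_a, volume_b):
    # Fast-and-frugal cue: treat the louder of two sound sources as nearer.
    return "A" if volume_a > volume_b else "B"

def accuracy(ecology, trials=10_000):
    hits = 0
    for _ in range(trials):
        dist_a = random.uniform(1, 10)
        dist_b = random.uniform(1, 10)
        if ecology == "home":
            # Home ecology: every source emits at the same power, so
            # loudness tracks distance perfectly.
            power_a = power_b = 1.0
        else:
            # Novel ecology: source power varies over four orders of
            # magnitude, swamping the distance signal.
            power_a = 10 ** random.uniform(-2, 2)
            power_b = 10 ** random.uniform(-2, 2)
        vol_a = power_a / dist_a ** 2    # inverse-square attenuation
        vol_b = power_b / dist_b ** 2
        guess = louder_is_nearer(vol_a, vol_b)
        actual = "A" if dist_a < dist_b else "B"
        hits += guess == actual
    return hits / trials

print(f"home ecology:  {accuracy('home'):.2f}")   # perfect, or nearly so
print(f"novel ecology: {accuracy('novel'):.2f}")  # markedly degraded

# Nothing inside the heuristic announces the degradation; only actual
# problem-solving success or failure reveals it.
```

Nothing in the heuristic itself signals that it has left its home ecology; only actual problem-solving success or failure does, which is the predicament the paragraph above ascribes to theoretical reflection.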

Kriegel agrees that self-ascriptions involve seemings, that we are blind to the causes of the mental, and that introspection is likely as low-dimensional as a smell, yet he nevertheless maintains on abductive grounds that observational contact with experiential intentionality sources our concepts of intentionality. But it is becoming difficult to understand what it is that’s being explained, or how simply adding inexplicable entities to explanations that bear all the hallmarks of heuristic misapplication is supposed to provide any real abductive warrant at all. Certainly it’s intuitive, powerfully so given that we neglect certain information, but then so is geocentrism. The naturalist project, after all, is to understand how we are our brain and environment, not how we are more than our brain and environment. That is a project belonging to a more blinkered age.

And as it turns out, certain zombies in the zombie world hold parallel positions. Because zombie metacognition has no access to the impoverished and circumstantially specialized nature of the information it accesses, many zombies process that information the way they would any other, and verbally report the existence of queerly structured entities somehow coinciding with the function of their brain. Since the solving systems involved possess no access to the high-dimensional, empirical structure of the neural systems they actually track, these entities are typically characterized by missing dimensions, be it causality, temporality, or materiality. The fact that these dimensions are neglected disposes these particular zombies to function as if nothing were missing at all—as if certain ghosts, at least, were real.

Yes. You guessed it. The zombies have philosophy too.