Three Pound Brain

No bells, just whistling in the dark…

Are Minds like Witches? The Catastrophe of Scientific Progress (by Ben Cain)

by rsbakker



As scientific knowledge has advanced over the centuries, informed people have come to learn that many traditional beliefs are woefully erroneous. There are no witches, ghosts, or disease-causing demons, for example. But are cognitive scientists currently also on the verge of showing that belief in the ordinarily defined human self is likewise due to a colossal misunderstanding, that there are no such things as meaning, purpose, consciousness, or personal self-control? Will the assumption of personhood itself one day prove as ridiculous as the presumption that some audacious individuals can make a pact with the devil?

Progress and a World of Mechanisms

According to this radical interpretation of contemporary science, everything is natural and nature consists of causal relationships between material aggregates that form systems or mechanisms. The universe is thus like an enormous machine except that it has no intelligent designer or engineer. Atoms evolve into molecules, stars into planets, and at least one planet has evolved life on its surface. But living things are really just material objects with no special properties. The only efficacious or real property in nature, very generally speaking, is causality, and thus the real question is always just what something can do, given its material structure, initial conditions, and the laws of nature. As one of the villains of The Matrix Reloaded declares, “We are slaves to causality.” Thus, instead of there being people or conscious, autonomous minds who use symbols to think about things and to achieve their goals, there are only mechanisms, which is to say forces acting on complex assemblies of material components, causing the system to behave in one way rather than another. Just as the sun acts on the Earth’s water cycle, causing oceans to evaporate and thus forming clouds that eventually rain and return the water via snowmelt runoff and groundwater flow to the oceans, the environment acts on an animal’s senses, which send signals to its brain whereupon the brain outputs a more or less naturally selected response, depending on whether the genes exercise direct or indirect control over their host. Systems interacting with systems, as dictated by natural laws and probabilities—that’s all there is, according to this interpretation of science.

How, then, do myths form that get the facts so utterly wrong? Myths in the pejorative sense form as a result of natural illusions. Omniscience isn’t given to lowly mammals. To compensate for their being thrown into the world without due preparation, as a result of the world’s dreadful godlessness, some creatures may develop the survival strategy of being excessively curious, which drives them often to err on the side not of caution but of creativity. We track not just the patterns that lead us to food or shelter, but myriad other structures on the off-chance that they’re useful. And as we evolve more intelligence than wisdom, we creatively interpret these patterns, filling the blanks in our experience with placeholder notions that indicate both our underlying ignorance and our presumptuousness. In the case of witches, for example, we mistake some hapless individual’s introversion and foreignness for some evil complicity in suffering that’s actually due merely to bad luck and to nature’s heartlessness. Given enough bumbling and sanctimony, that lack of information about a shy foreigner results in the burning of a primate for allegedly being a witch. A suitably grotesque absurdity for our monstrously undead universe.

And in the corresponding case of personhood itself, the lack of information about the brain causes our inquisitive species to reify its ignorance, to mistake the void found by introspection for spirit or mind, which our allegedly wise philosophers then often interpret as being all that’s ultimately real. That is, we try to control ourselves along with our outer environment, to enhance our fitness to carry our genes, but because our brain didn’t evolve to reveal its mechanisms to itself, the brain outputs nonsense to satisfy its curiosity, and so the masses mislead themselves with fairytales about the supernatural property of personhood, misinterpreting the lack of inner access as being miraculous direct acquaintance with oneself by something called self-consciousness. We mislead ourselves into concluding that the self is more than the brain, which can’t understand its own operations without scientific experimentation. Instead, we’re seduced into dogmatizing that our blindness to our neural self is actually magical access to a higher, virtually immaterial self.

Personhood and the Natural Reality of Illusions

So much for the progressive interpretation of science. I believe, however, that this interpretation is unsustainable. The serpent’s jaws come round again to close on the serpent’s own tail, and so we’re presented with yet another way to go spectacularly wrong; that is, the radical, progressive naturalist joins the deluded supernaturalist in an extravagant leap of logic. To see this, realize that the above picture of nature can be no picture at all. To speak of a picture, a model, a theory, or a worldview, or even of thinking or speaking in general, as these words are commonly defined is, of course, forbidden to the austere naturalist. There are no symbols in this interpretation which is no interpretation; there are only phases in the evolution of material systems, objects caught between opposing forces that change according to ceteris paribus laws which are not really laws. Roughly speaking—and remember that there’s no such thing as speaking—there’s only causality in nature. There are no intentional or normative properties, no reference, purpose, or goodness or badness.

In the unenlightened mode of affecting material systems, this “means” that if you interpret scientific progress as entailing that there are no witches, demons, or people in general, in the sense that the symbols for these entities are vacuous, whereas other symbols enjoy meaningful status such as the science-friendly words, “matter,” “force,” “law,” “mechanism,” “evolution,” and so forth, you’ve fallen into the same trap that ensnares the premodern ignoramus who fails to be humbled by her grievous knowledge deficit. All symbols are equally bogus, that is, supernatural, according to the foregoing radical naturalism. Thus, this radical must divest herself not just of the premodern symbols, but of the scientific ones as well—assuming, that is, she’s bent on understanding these symbols in terms of the naïve notion of personhood which, by hypothesis, is presently being made obsolete by science. So for example, if I say, “Science has shown that there are no witches, and the commonsense notion of the mind is likewise empty,” the radical naturalist is hardly free to interpret this as saying that premodern symbols are laughable whereas modern scientific ones are respectable. In fact, strictly speaking, she fails to be a thoroughgoing eliminativist as soon as she assumes that I’ve thereby said anything at all. All speaking is illusion, for the radical naturalist; there are only forces acting on material systems, causing those systems to behave, to exercise their material capacities, whereupon the local effects might feed back into a larger system, leading to cycles of average collective behaviour. There is no way of magically capturing that mechanistic reality in symbolic form; instead, there’s just the illusion of doing so.

How, then, should scientific progress be understood, given that there are no such things as scientific theories, progress, or understanding, as these things are commonly defined? In short, what’s the uncommon, enlightened way of understanding science (which is actually no sort of understanding)? What’s the essence of postmodern, scientific mysticism, as we might think of it? In other words, what will the posthuman be doing once her vision is unclouded with illusions of personhood and so is filled with mechanisms as such? The answer must be put in terms, once again, of causality. Scientific enlightenment is a matter (literally) of being able to exercise greater control over certain systems than is afforded to those who lack scientific tools. In short, assuming we define ourselves as a species in terms of the illusions of a supernatural self, the posthuman who embraces radical naturalism and manages to clear her head of the cognitive vices that generate those illusions will be something of a pragmatist. She’ll think in terms of impersonal systems acting and reacting to each other and being forced into this or that state, and she’ll appreciate how she in turn is driven by her biochemical makeup and evolutionary history to survive by overpowering and reshaping her environment, aided by this or that trait or tool.

Radical, eliminativistic naturalism thus implies some version of pragmatism. The version not implied would be one that defines usefulness in terms of the satisfaction of personal desires. (And, of course, there would really be some form of causality instead of any logical implication.) But the point is that for the eliminativist, an illusion-free individual would think purely in terms of causality and of materialistic advantage based on a thorough knowledge of the instrumental value of systems. She’d be pushed into this combative stance by her awareness that she’s an animal that’s evolved with that survivalist bias, and so her scientific understanding wouldn’t be neutral or passive, but supplemented by a more or less self-interested evaluation of systems. She’d think in terms of mechanisms, yes, but also of their instrumental value to her or to something with which she’s identified, although she wouldn’t assume that anyone’s survival, including hers, is objectively good.

For example, the radical naturalist might think of systems as posing problems to be solved. The posthuman, then, would be busy solving problems, using her knowledge to make the environment more conducive to her survival. She wouldn’t think of her knowledge as consisting of theories made up of symbols; instead, she’d see her brain and its artificial extensions as systems that enable her to interact successfully with other systems. The success in question would be entirely instrumental, a matter of engineering with no presumption that the work has any ultimate value. There could be no approval or disapproval, because there would be no selves to make such judgments, apart from any persistence of a deluded herd of primates. The re-engineered system would merely work as designed, and the posthuman would thereby survive and be poised to meet new challenges. This would truly be work for work’s sake.

What, then, should the enlightened pragmatist say about the dearth of witches? Can she sustain the sort of positivistic progressivism with which I began this article? Would she attempt to impact her environment by making sounds that are naively interpreted as meaning that science has shown there are no witches? No, she would “say” only that the neural configuration leading to behaviour associated with the semantic illusion that certain symbols correspond to witchy phenomena has causes and effects A and B, whereas the neural configuration leading to so-called enlightened, modern behaviour (often associated with the semantic illusion that certain other symbols correspond to the furious buying and selling of material goods and services and to equally tangible, presently conventional behaviour) has causes and effects C and D. Again, if everything must be perceived in terms of causality, the neural states causing certain primates to be burned as witches should be construed solely in terms of their causes and effects. In short, the premodern, allegedly savage illusion of witchcraft loses its sting of embarrassment, because that illusion evidently had causal power and thus a degree of reality. Cognitive illusions aren’t nothing at all; they’re effects of vices like arrogance, self-righteousness, impertinence, irrationality, and so forth, and they help to shape the real world. There’s no enlightened basis for any normative condemnation of such an illusion. All that matters is the pragmatic, instrumental judgment of something’s effectiveness at solving a problem.

Yes, if there’s no such thing as the meaning of a symbol, there are no witches, in that there’s no relation of non-correspondence between “witch” and creatures that would fit the description. Alas, this shouldn’t comfort the radical naturalist since there can likewise be no negative semantic relation between “symbol” and symbols to make sense of that statement about the nonexistence of witches. If naturalism forces us to give up entirely on the idea of intentionality, we mustn’t interpret the question of something’s nonexistence as being about a symbol’s failure to pick out something (since there would be no such thing as a symbol in the first place). And if we say there are no symbols, just as there are no witches or ghosts or emergent and autonomous minds, we likewise mustn’t think this is due merely to any semantic failure.

What, then, must nonexistence be, according to radical naturalism? It must be just relative powerlessness. To say that there are no witches “means” that the neural states involved in behaviour construed in terms of witchcraft are relatively powerless to systematically or reliably impact their environment. Note that this needn’t imply that the belief in witches is absolutely powerless. After all, religious institutions have subdued their flocks for millennia based on the ideology of demons, witches and the like, and so the pragmatist mustn’t pretend she can afford to “say” that witches have a purely negative ontological status. Again, just because there aren’t really any witches doesn’t mean there’s no erroneous belief in witchcraft, and that belief itself can have causal power. The belief might even conceivably lead to a self-fulfilling prophecy in which case something like witchcraft will someday come into being. At any rate, the belief in witches opens up problems to be solved by engineering (whether to side with the oppressive Church or to overthrow it, etc.), and that would be the enlightened posthuman’s only concern with respect to witches.

Indeed, a radical naturalist who understands the cataclysmic implications of scientific progress has no epistemic basis whatsoever for belittling the causal role of a so-called illusion like witchcraft. Again, some neural states have causes and effects A and B while others have causes and effects C and D—and that’s it as far as objective reality is concerned. On top of this, at best, there’s pragmatic instrumentalism, which raises the question merely of the usefulness of the belief in witches. Is that belief entirely useless? Obviously not, as Western history attests. Is the belief in witches immoral or beneath our dignity as secular humanists? The question should be utterly irrelevant, since morality and dignity are themselves illusions, given radical naturalism; moreover, the “human” in “humanist” must be virtually empty. What an enlightened person could say with integrity is just that the belief in witches benefits some primates more than others, by helping to establish a dominance hierarchy.

The same goes for the nonexistence of minds, personhood, consciousness, semantic meaning, or purpose. If these things are illusions, so what? Illusions can have causal power, and the radical naturalist must distinguish between causal relations solely by assigning them their instrumental value, noting that some effects help some primates to survive by solving certain problems, while hindering others. Illusions are thus real enough for the truly radical naturalist. In particular, if the brain tries to discover its mechanisms through introspection and naturally comes up empty, that need not be the end of the natural process. The cognitive blind spot delivers an illusion of mentality or of immaterial spirituality, which in turn causes primates to act as if there were such things as cultures consisting of meaningful symbols, moral values and the like. We’d be misled into creating something that nevertheless exists as our creation. Just as the whole universe might have popped into existence from nothing, according to quantum mechanics, cognitive science might entail that personhood develops from the introspective experience of an inner emptiness. In fact, we’re not empty, because our heads are full of brain matter. But the tool of introspection can be usefully misapplied, as it evidently causes the whole panoply of culture-dependent behaviours.

What is it, then, to call personhood a mere illusion? What’s the difference between illusion and reality, for the radical naturalist, given that both can have causal power in the domain of material systems? If we say that illusions depend on ignorance of certain mechanisms, this turns all mechanisms into illusions and deprives us of so-called reality, assuming none of us is omniscient. As long as we select which mechanisms and processes to attend to in our animalistic dealings with the environment, we all live in bubble worlds based on that subjectivity which thus has quasi-transcendental status. To illustrate, notice that when the comedian Bill Maher mocks the Fox News viewer for living in the Fox Bubble and for being ignorant of the “real world,” Maher forgets that he too lives in a culture, albeit in a liberal rather than a conservative one, and that he doesn’t conceive of everything with the discipline of strict impersonality or objectivity, as though he were the posthuman mystic.

What seems to be happening here is that the radical naturalist is liable to identify with a science-centered culture and thus she’s quick to downgrade the experience of those who prefer the humanities, including philosophy, religion, and art. From the science-centered perspective, we’re fundamentally animals caught in systems of causality, but we nevertheless go on to create cultures in our bumbling way, blissfully ignorant of certain mechanistic realities and driven by cognitive vices and biases as we allow ourselves to be mesmerized by the “illusion” of a transcendent, immaterial self.  But there’s actually no basis here for any value judgment one way or the other. From a barebones scientific “perspective,” the institution of science is as illusory as witchcraft. All that’s real are configurations of material elements that evolve in orderly ways—and witchcraft and personhood are free to share in that reality as illusions. Judging by the fact that the idea of witches has evidently caused some people to be treated accordingly and that the idea of the personal self has caused us to create a host of artificial, cultural worlds within the indifferent natural one, there appears to be more than enough reality to go around.

Earth and Muck

by rsbakker


So Grimdark magazine has released the conclusion to “The Knife of Many Hands,” as well as an interview containing cryptic questions and evasive answers. It’s fast becoming a great venue, and a great way to spotlight grim new talent.

As for information regarding the next book, I wish I knew what to say. I submitted the final manuscript at the end of January, and still I’ve heard nary a peep about possible publication dates. Rest assured, as soon as I know, I’ll let you know.

I’d also like to recommend The Shadow of Consciousness: A Little Less Wrong, by Peter Hankins. Unlike so many approaches to the issue, Peter refuses to drain the swamp of phenomenology into the bog of intentionality. In some respects, the book is a little too clear-eyed! For those of us who have followed Conscious Entities over the years, it’s downright fascinating watching Peter slowly reveal those cards he’s been stubbornly holding to his chest! I’m hoping to work up a review when I’ve finished, OCD permitting.

Shadow of Consciousness cover

I’d like to thank Roger for stepping into the breach these past couple months, giving everyone another glimpse of why he’ll be turning fantasy on its ear. Why the breach? Early in February I began working on what I thought was a killer idea for an introduction to Through the Brain Darkly. The idea was to write it in two parts, posting each here for feedback. Normally, the keyboard sounds like a baby rattle when I do blog/theory stuff, but not so this time. I’m sure burn-out is part of the problem. I’m also cramped by a deep-seated need for perfection, I suppose, but I’ve never been quite so stymied by a good idea before. So I thought I would open it up to the collective, gather a few thoughts on what people think it is I’m doing here (aside from the predictable, paleolithic factors), and what it is I need to do to communicate this effectively.

Babette Babich has recently posted her own thoughts on Diogenes in the Marketplace; it pretty much calls out all my defense mechanisms! Check it out. If only more couples would lounge in bed with The White-Luck Warrior. She’s given me a gift with that lovely image.

Despite my blockages, this post inaugurates a spate of guaranteed activity here on TPB.  I’m pleased to announce that Ben Cain will be returning with a piece on eliminativism this upcoming Monday, then Paul Ennis will be posting on Bleak Theory the Monday following.  Maybe a good old-fashioned blog debate will be just the tonic.

Three Roses, Bk. 1: Chapter Two

by reichorn

Hey all!  Roger here.

I’ve posted the second chapter of the new draft of Three Roses, Book 1: The Anarchy.  It’s first-draft stuff, but still I’m pretty happy with it.  So I figure what the hell, I’ll post it here.

As always, any comments or questions are welcomed and appreciated.

Introspection Explained

by rsbakker

Las Meninas

So I couldn’t get past the first paper in Thomas Metzinger’s excellent Open MIND offering without having to work up a long-winded blog post! Tim Bayne’s “Introspective Insecurity” offers a critique of Eric Schwitzgebel’s Perplexities of Consciousness, which is my runaway favourite book on introspection (and consciousness, for that matter). This alone might have sparked me to write a rebuttal, but what I find most extraordinary about the case Bayne lays out against introspective skepticism is the way it directly implicates Blind Brain Theory. His  defence of introspective optimism, I want to show, actually vindicates an even more radical form of pessimism than the one he hopes to domesticate.

In the article, Bayne divides the philosophical field into two general camps, the introspective optimists, who think introspection provides reliable access to conscious experience, and introspective pessimists, who do not. Recent years have witnessed a sea change in philosophy of mind circles (one due in no small part to Schwitzgebel’s amiable assassination of assumptions). The case against introspective reliability has grown so prodigious that what Bayne now terms ‘optimism’–introspection as a possible source of metaphysically reliable information regarding the mental/phenomenal–would have been considered rank introspective pessimism not so long ago. The Cartesian presumption of ‘self-transparency’ (as Carruthers calls it in his excellent The Opacity of Mind) has died a sudden death at the hands of cognitive science.

Bayne identifies himself as one of these new optimists. What introspection needs, he claims, is a balanced account, one sensitive to the vulnerabilities of both positions. Where proponents of optimism have difficulty accounting for introspective error, proponents of pessimism have difficulty accounting for introspective success. Whatever it amounts to, introspection is characterized by perplexing failures and thoughtless successes. As he writes in his response piece,  “The epistemology of introspection is that it is not flat but contains peaks of epistemic security alongside troughs of epistemic insecurity” (“Introspection and Intuition,” 1). Since any final theory of introspection will have to account for this mixed ‘epistemic profile,’ Bayne suggests that it provides a useful speculative constraint, a way to sort the metacognitive wheat from the chaff.

According to Bayne, introspective optimists motivate their faith in the deliverances of introspection on the basis of two different arguments: the Phenomenological Argument and the Conceptual Argument. He restricts his presentation of the phenomenological argument to a single quote from Brie Gertler’s “Renewed Acquaintance,” which he takes as representative of his own introspective sympathies. As Gertler writes of the experience of pinching oneself:

When I try this, I find it nearly impossible to doubt that my experience has a certain phenomenal quality—the phenomenal quality it epistemically seems to me to have, when I focus my attention on the experience. Since this is so difficult to doubt, my grasp of the phenomenal property seems not to derive from background assumptions that I could suspend: e.g., that the experience is caused by an act of pinching. It seems to derive entirely from the experience itself. If that is correct, my judgment registering the relevant aspect of how things epistemically seem to me (this phenomenal property is instantiated) is directly tied to the phenomenal reality that is its truthmaker. “Renewed Acquaintance,” Introspection and Consciousness, 111.

When attending to a given experience, it seems indubitable that the experience itself has distinctive qualities that allow us to categorize it in ways unique to first-person introspective, as opposed to third-person sensory, access. But if we agree that the phenomenal experience—as opposed to the object of experience—drives our understanding of that experience, then we agree that the phenomenal experience is what makes our introspective understanding true. “Introspection,” Bayne writes, “seems not merely to provide one with information about one’s experiences, it seems also to ‘say’ something about the quality of that information” (4). Introspection doesn’t just deliver information, it somehow represents these deliverances as true.

Of course, this doesn’t make them true: we need to trust introspection before we can trust our (introspective) feeling of introspective truth. Or do we? Bayne replies:

it seems to me not implausible to suppose that introspection could bear witness to its own epistemic credentials. After all, perceptual experience often contains clues about its epistemic status. Vision doesn’t just provide information about the objects and properties present in our immediate environment, it also contains information about the robustness of that information. Sometimes vision presents its take on the world as having only low-grade quality, as when objects are seen as blurry and indistinct or as surrounded by haze and fog. At other times visual experience represents itself as a highly trustworthy source of information about the world, such as when one takes oneself to have a clear and unobstructed view of the objects before one. In short, it seems not implausible to suppose that vision—and perceptual experience more generally—often contains clues about its own evidential value. As far as I can see there is no reason to dismiss the possibility that what holds of visual experience might also hold true of introspection: acts of introspection might contain within themselves information about the degree to which their content ought to be trusted. 5

Vision is replete with what might be called ‘information information,’ features that indicate the reliability of the information available. Darkness, for instance, is a great example, insofar as it provides visual information to the effect that visual information is missing. Our every glance is marbled with what might be called ‘more than meets the eye’ indicators. As we shall see, this analogy to vision will come back to haunt Bayne’s thesis. The thing to keep in mind is the fact that the cognition of missing information requires more information. For the nonce, however, his claim is modest enough to acknowledge: as it stands, we cannot rule out the possibility that introspection, like exospection, reliably indicates its own reliability. As such, the door to introspective optimism remains open.

Here we see the ‘foot-in-the-door strategy’ that Bayne adopts throughout the article, where his intent isn’t so much to decisively warrant introspective optimism as it is to point out and elucidate the ways that introspective pessimism cannot decisively close the door on introspection.

The conceptual motivation for introspective optimism turns on the necessity of epistemic access implied in the very concept of ‘what-it-is-likeness.’ The only way for something to be ‘like something’ is for it to be like something for somebody. “[I]f a phenomenal state is a state that there is something it is like to be in,” Bayne writes, “then the subject of that state must have epistemic access to its phenomenal character” (5). Introspection has to be doing some kind of cognitive work, otherwise “[a] state to which the subject had no epistemic access could not make a constitutive contribution to what it was like for that subject to be the subject that it was, and thus it could not qualify as a phenomenal state” (5-6).

The problem with this argument, of course, is that it says little about the epistemic access involved. Apart from some unspecified ability to access information, it really implies very little. Bayne convincingly argues that the capacity to cognize differences, make discriminations, follows from introspective access, even if the capacity to correctly categorize those discriminations does not. And in this respect, it places another foot in the introspective door.

Bayne then moves on to the case motivating pessimism, particularly as Eric presents it in his Perplexities of Consciousness. He mentions the privacy problems that plague scientific attempts to utilize introspective information (Irvine provides a thorough treatment of this in her Consciousness as a Scientific Concept), but since his goal is to secure introspective reliability for philosophical purposes, he bypasses these to consider three kinds of challenges posed by Schwitzgebel in Perplexities, the Dumbfounding, Dissociation, and Introspective Variation Arguments. Once again, he’s careful to state the balanced nature of his aim, the obvious fact that

any comprehensive account of the epistemic landscape of introspection must take both the hard and easy cases into consideration. Arguably, generalizing beyond the obviously easy and hard cases requires an account of what makes the hard cases hard and the easy cases easy. Only once we’ve made some progress with that question will we be in a position to make warranted claims about introspective access to consciousness in general. 8

His charge against Schwitzgebel, then, is that even conceding his examples of local introspective unreliability, we have no reason to generalize from these to the global unreliability of introspection as a philosophical tool. Since this inference from local unreliability to global unreliability is his primary discursive target, Bayne doesn’t so much need to problematize Schwitzgebel’s challenges as to reinterpret—‘quarantine’—their implications.

So in the case of ‘dumbfounding’ (or ‘uncertainty’) arguments, Schwitzgebel reveals the epistemic limitations of introspection via a barrage of what seem to be innocuous questions. Our apparent inability to answer these questions leaves us ‘dumbfounded,’ stranded on a cognitive limit we never knew existed. Bayne’s strategy, accordingly, is to blame the questions, to suggest that dumbfounding, rather than demonstrating any pervasive introspective unreliability, simply reveals that the questions being asked possess no determinate answers. He writes:

Without an account of why certain introspective questions leave us dumbfounded it is difficult to see why pessimism about a particular range of introspective questions should undermine the epistemic credentials of introspection more generally. So even if the threat posed by dumbfounding arguments were able to establish a form of local pessimism, that threat would appear to be easily quarantined. (11)

Once again, local problems in introspection do not warrant global conclusions regarding introspective reliability.

Bayne takes a similar tack with Schwitzgebel’s dissociation arguments, examples where our naïve assumptions regarding introspective competence diverge from actual performance. He points out the ambiguity between the reliability of experience and the reliability of introspection: Perhaps we’re accurately introspecting mistaken experiences. If there’s no way to distinguish between these, Bayne suggests, we’ve made room for introspective optimism. He writes: “If dissociations between a person’s introspective capacities and their first-order capacities can disconfirm their introspective judgments (as the dissociation argument assumes), then associations between a person’s introspective judgments and their first-order capacities ought to confirm them” (12). What makes Schwitzgebel’s examples so striking, he goes on to argue, is precisely the fact that introspective judgments are typically effective.

And when it comes to the introspective variation argument, the claim that the chronic underdetermination that characterizes introspective theoretical disputes attests to introspective incapacity, Bayne once again offers an epistemologically fractionate picture of introspection as a way of blocking any generalization from given instances of introspective failure. He thinks that examples of introspective variation can be explained away, “[b]ut even if the argument from variation succeeds in establishing a local form of pessimism, it seems to me there is little reason to think that this pessimism generalizes” (14).

Ultimately, the entirety of his case hangs on the epistemologically fractionate nature of introspection. It’s worth noting at this point that, from a cognitive scientific point of view, the fractionate nature of introspection is all but guaranteed. Just think of the mad difference between Plato’s simple aviary, the famous metaphor he offers for memory in the Theaetetus, and the imposing complexity of memory as we understand it today. I raise this ‘mad difference’ for two reasons. First, it implies that any scientific understanding of introspection is bound to radically complicate our present understanding. Second, and even more importantly, it evidences the degree to which introspection is blind, not only to the fractionate complexity of memory, but to its own fractionate complexity as well.

For Bayne to suggest that introspection is fractionate, in other words, is for him to claim that introspection is almost entirely blind to its own nature (much as it is to the nature of memory). To the extent that Bayne has to argue the fractionate nature of introspection, we can conclude that introspection is not only blind to its own fractionate nature, it is also blind to the fact of this blindness. It is in this sense that we can assert that introspection neglects its own fractionate nature. The blindness of introspection to introspection is the implication that hangs over his entire case.

In the meantime, having posed an epistemologically plural account of introspection, he’s now on the hook to explain the details. “Why,” he now asks, “might certain types of phenomenal states be elusive in a way that other types of phenomenal states are not?” (15). Bayne does not pretend to possess any definitive answers, but he does hazard one possible wrinkle in the otherwise featureless face of introspection, the 2010 distinction that he and Maja Spener made in “Introspective Humility” between ‘scaffolded’ and ‘freestanding’ introspective judgments. He notes that those introspective judgments that seem to be the most reliable are those that seem to be ‘scaffolded’ by first-order experiences. These include the most anodyne metacognitive statements we make, where we reference our experiences of things to perspectivally situate them in the world, as in, ‘I see a tree over there.’ Those introspective judgments that seem the least reliable, on the other hand, have no such first-order scaffolding. Rather than piggy-back on first-order perceptual judgments, ‘freestanding’ judgments (the kind philosophers are fond of making) reference our experience of experiencing, as in, ‘My experience has a certain phenomenal quality.’

As that last example (cribbed from the Gertler quote above) makes plain, there’s a sense in which this distinction doesn’t do the philosophical introspective optimist any favours. (Max Engel exploits this consequence to great effect in his Open MIND reply to Bayne’s article, using it to extend pessimism into the intuition debate). But Bayne demurs, admitting that he lacks any substantive account. As it stands, he need only make the case that introspection is fractionate to convincingly block the ‘globalization’ of Schwitzgebel’s pessimism. As he writes:

perhaps the central lesson of this paper is that the epistemic landscape of introspection is far from flat but contains peaks of security alongside troughs of insecurity. Rather than asking whether or not introspective access to the phenomenal character of consciousness is trustworthy, we should perhaps focus on the task of identifying how secure our introspective access to various kinds of phenomenal states is, and why our access to some kinds of phenomenal states appears to be more secure than our access to other kinds of phenomenal states. (16)

The general question of whether introspective cognition of conscious experience is possible is premature, he argues, so long as we have no clear idea of where and why introspection works and does not work.

This is where I most agree with Bayne—and where I’m most puzzled. Many things puzzle me about the analytic philosophy of mind, but nothing quite so much as the disinclination to ask what seem to me to be relatively obvious empirical questions.

In nature, accuracy and reliability are expensive achievements, not gifts from above. Short of magic, metacognition requires physical access and physical capacity. (Those who believe introspection is magic—and many do—need only be named magicians). So when it comes to deliberative introspection, what kind of neurobiological access and capacity are we presuming? If everyone agrees that introspection, whatever it amounts to, requires that the brain do honest-to-goodness work, then we can begin advancing a number of empirical theses regarding access and capacity, and how we might find these expressed in experience.

So given what we presently know, what kind of metacognitive access and capacity should we expect our brains to possess? Should we, for instance, expect it to rival the resolution and behavioural integration of our environmental capacities? Clearly not. For one, environmental cognition coevolved with behaviour and so has the far greater evolutionary pedigree—by hundreds of millions of years, in fact! As it turns out, reproductive success requires that organisms solve their surroundings, not themselves. So long as environmental challenges are overcome, they can take themselves for granted, neglect their own structure and dynamics. Metacognition, in other words, is an evolutionary luxury. There’s no way of saying how long homo sapiens has enjoyed the particular luxury of deliberative introspection (as an exaptation, the luxury of ‘philosophical reflection’ is no older than recorded history), but even if we grant our base capacity a million-year pedigree, we’re still talking about a very young, and very likely crude, system.

Another compelling reason to think metacognition cannot match the dimensionality of environmental cognition lies in the astronomical complexity of its target. As a matter of brute empirical fact, brains simply cannot track themselves the high-dimensional way they track their environments. Thus, once again, ‘Dehaene’s Law,’ the way “[w]e constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (Consciousness and the Brain, 79). The vast resources society is presently expending to cognize the brain attest to the degree to which our brain exceeds its own capacity to cognize in high-dimensional terms. However the brain cognizes its own operations, then, it can only do so in a radically low-dimensional way. We should expect, in other words, our brains to be relatively insensitive to their own operation—to be blind to themselves.

A third empirical reason to assume that metacognition falls short of environmental dimensionality is found in the way it belongs to the very system it tracks, and so lacks the functional independence as well as the passive and active information-seeking opportunities belonging to environmental cognition. The analogy I always like to use here is that of a primatologist sewn into a sack with a troop of chimpanzees versus one tracking them discreetly in the field. Metacognition, unlike environmental cognition, is structurally bound to its targets. It cannot move toward some puzzling item—an apple, say—peer at it, smell it, touch it, turn it over, crack it open, taste it, scrutinize the components. As embedded, metacognition is restricted to fixed channels of information that it could not possibly identify or source. The brain, you could say, is simply too close to itself to cognize itself as it is.

Viewed empirically, then, we should expect metacognitive access and capacity to be more specialized, more adventitious, and less flexible compared to that of environmental cognition. Given the youth of the system, the complexity of its target, and the proximity of its target, we should expect human metacognition will consist of various kluges, crude heuristics that leverage specific information to solve some specific range of problems. As Gerd Gigerenzer and the Adaptive Behaviour and Cognition Research Group have established, simple heuristics are often far more effective than optimization methods at solving problems. “As the amount of data available to make predictions in an environment shrinks, the advantage of simple heuristics over complex algorithms grows” (Hertwig and Hoffrage, “The Research Agenda,” Simple Heuristics in a Social World, 23). With complicated problems yielding little data, adding parameters to a solution can compound the chances of making mistakes. Low dimensionality, in other words, need not be a bad thing, so long as the information consumed is information enabling the solution of some problem set. This is why evolution so regularly makes use of it.
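The statistical logic behind this claim can be illustrated with a toy simulation (the data, models, and numbers below are invented for illustration; they are not drawn from the Adaptive Behaviour and Cognition Group’s actual task environments). A handful of noisy observations is fit both by a flexible many-parameter model and by a crude one-parameter heuristic; the heuristic generalizes better precisely because it ignores most of what the scarce data seem to say:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny training set: the true relationship is y = x, observed with noise.
n_train = 8
x_train = rng.uniform(-1, 1, n_train)
y_train = x_train + rng.normal(0, 0.3, n_train)

# Noise-free test grid for scoring out-of-sample predictions.
x_test = np.linspace(-1, 1, 200)
y_test = x_test

# "Complex algorithm": a degree-7 polynomial, one parameter per data point,
# which fits the training points (noise included) almost exactly.
coeffs = np.polyfit(x_train, y_train, deg=7)
pred_complex = np.polyval(coeffs, x_test)

# "Simple heuristic": a single slope through the origin.
slope = np.sum(x_train * y_train) / np.sum(x_train**2)
pred_simple = slope * x_test

# Out-of-sample root-mean-square error for each strategy.
err_complex = np.sqrt(np.mean((pred_complex - y_test) ** 2))
err_simple = np.sqrt(np.mean((pred_simple - y_test) ** 2))
print(f"complex RMSE: {err_complex:.3f}, simple RMSE: {err_simple:.3f}")
```

The flexible model fits the training points perfectly, noise and all, and pays for it out of sample; the heuristic’s very poverty is what protects it. Adding parameters when data is scarce compounds error rather than reducing it.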

Given this broad-stroke picture, human metacognition can be likened to a toolbox containing multiple, special-purpose tools, each possessing specific ‘problem-ecologies,’ narrow, but solvable domains that trigger their application frequently and decisively enough to have once assured the tool’s generational selection. The problem with heuristics, of course, lies in the narrowness of their respective domains. If we grant the brain any flexibility in the application of its metacognitive tools, then heuristic misapplication is always a possibility. If we deny the brain any decisive capacity to cognize these misapplications outside their consequences (if the brain suffers ‘tool agnosia’), then we can assume these misapplications will be indistinguishable from successful applications short of those consequences.

In other words, this picture of human metacognition (which is entirely consistent with contemporary research) provides an elegant (if sobering) recapitulation and explanation of what Bayne calls the ‘epistemic landscape of introspection.’ Metacognition is fractionate because of the heuristic specialization required to decant behaviourally relevant information from the brain. The ‘peaks of security’ correspond to the application of metacognitive heuristics to matching problem-ecologies, while the ‘troughs of insecurity’ correspond to the application of metacognitive heuristics to problem-ecologies they could never hope to solve.

Since those matching problem-ecologies are practical (as we might expect, given the cultural basis of regimented theoretical thinking), it makes sense that practical introspection is quite effective, whereas theoretical introspection, which attempts to intuit the general nature of experience, is anything but. The reason the latter strike us as so convincing—to the point of seeming impossible to doubt, no less—is simply that doubt is expensive: there’s no reason to presume we should happily discover the required error-signalling machinery awaiting any exaptation of our deliberative introspective capacity, let alone one so unsuccessful as philosophy. As I mentioned above, the experience of epistemic insufficiency always requires more information. Sufficiency is the default simply because the system has no way of anticipating novel applications, no decisive way of suddenly flagging information that was entirely sufficient for ancestral problem-ecologies and so required no flagging.

Remember how Bayne offered what I termed ‘information information’ provided by vision as a possible analogue of introspection? Visual experience cues us to the unreliability or absence of information in a number of ways, such as darkness, blurring, faintness, and so on. Why shouldn’t we presume that deliberative introspection likewise flags what can and cannot be trusted? Because deliberative introspection exapts information sufficient for one kind of practical problem-solving (Did I leave my keys in the car? Am I being obnoxious? Did I read the test instructions carefully enough?) for the solution of utterly unprecedented ontological problems. Why should repurposing introspective deliverances in this way renovate the thoughtless assumption of ‘default sufficiency’ belonging to their original purposes?

This is the sense in which Blind Brain Theory, in the course of explaining the epistemic profile of introspection, also explodes Bayne’s case for introspective optimism. By tying the contemplative question of deliberative introspection to the empirical question of the brain’s metacognitive access and capacity, BBT makes plain the exorbitant biological cost of the optimistic case. Exhaustive, reliable intuition of anything involves a long evolutionary history, tractable targets, and flexible information access—that is, all the things that deliberative introspection does not possess.

Does this mean that deliberative introspection is a lost cause, something possessing no theoretical utility whatsoever? Not necessarily. Accidents happen. There’s always a chance that some instance of introspective deliberation could prove valuable in some way. But we should expect such solutions to be both adventitious and local, something that stubbornly resists systematic incorporation into any more global understanding.

But there’s another way, I think, in which deliberative introspection can play a genuine role in theoretical cognition—a way that involves looking at Schwitzgebel’s skeptical project as a constructive, rather than critical, theoretical exercise.

To show what I mean, it’s worth recapitulating one of the quotes Bayne selects from Perplexities of Consciousness for sustained attention:

How much of the scene are you able vividly to visualize at once? Can you keep the image of your chimney vividly in mind at the same time you vividly imagine (or “image”) your front door? Or does the image of your chimney fade as your attention shifts to the door? If there is a focal part of your image, how much detail does it have? How stable is it? Suppose that you are not able to image the entire front of your house with equal clarity at once, does your image gradually fade away towards the periphery, or does it do so abruptly? Is there any imagery at all outside the immediate region of focus? If the image fades gradually away toward the periphery, does one lose colours before shapes? Do the peripheral elements of the image have color at all before you think to assign color to them? Do any parts of the image? If some parts of the image have indeterminate colour before a colour is assigned, how is that indeterminacy experienced—as grey?—or is it not experienced at all? If images fade from the centre and it is not a matter of the color fading, what exactly are the half-faded images like? Perplexities, 36

Questions in general are powerful insofar as they allow us to cognize the yet-to-be-cognized. The slogan feels ancient to me now, but no less important: Questions are how we make ignorance visible, how we become conscious of cognitive incapacity. In effect, then, each and every question in this quote brings to light a specific inability to answer. Granting that this inability indicates either a lack of information access and/or metacognitive incapacity, we can presume these questions enumerate various cognitive dimensions missing from visual imagery. Each question functions as an interrogative ‘ping,’ you could say, showing us another direction that (for many people at least) introspective inquiry cannot go—another missing dimension.

So even though Bayne and Schwitzgebel draw negative conclusions from the ‘dumbfounding’ that generally accompanies these questions, each instance actually tells us something potentially important about the limits of our introspective capacities. If Schwitzgebel had been asking these questions of a painting—Las Meninas, say—then dumbfounding wouldn’t be a problem at all. The information available, given the cognitive capacity possessed, would make answering them relatively straightforward. But even though ‘visual imagery’ is apparently ‘visual’ in the same way a painting is, the selfsame questions stop us in our tracks. Each question, you could say, closes down a different ‘degree of cognitive freedom,’ reveals how few degrees of cognitive freedom human deliberative introspection possesses for the purposes of solving visual imagery. Not much at all, as it turns out.

Note this is precisely what we should expect on a ‘blind brain’ account. Once again, simply given the developmental and structural obstacles confronting metacognition, it almost certainly consists of an ‘adaptive toolbox’ (to use Gerd Gigerenzer’s phrase), a suite of heuristic devices adapted to solve a restricted set of problems given only low-dimensional information. The brain possesses a fixed set of metacognitive channels available for broadcast, but no real ‘channel channel,’ so that it systematically neglects metacognition’s own fractionate, heuristic structure.

And this clearly seems to be what Schwitzgebel’s interrogative barrage reveals: the low dimensionality of visual imagery (relative to vision), the specialized problem-solving nature of visual imagery, and our profound inability to simply intuit as much. For some mysterious reason we can ask visual questions that for some mysterious reason do not apply to visual imagery. The ability of language to retask cognitive resources for introspective purposes seems to catch the system as a whole by surprise, confronts us with what had been hitherto relegated to neglect. We find ourselves ‘dumbfounded.’

So long as we assume that cognition requires work, we must assume that metacognition trades in low dimensional information to solve specific kinds of problems. To the degree that introspection counts as metacognition, we should expect it to trade in low-dimensional information geared to solve particular kinds of practical problems. We should also expect it to be blind to introspection, to possess neither the access nor the capacity required to intuit its own structure. Short of interrogative exercises such as Schwitzgebel’s, deliberative introspection has no inkling of how many degrees of cognitive freedom it possesses in any given context. We have to figure out what information is for what inferentially.

And this provides the basis for a provocative diagnosis of a good many debates in contemporary psychology and philosophy of mind. So for instance, a blind brain account implies that our relation to something like ‘qualia’ is almost certainly one possessing relatively few degrees of cognitive freedom—a simple heuristic. Deliberative introspection neglects this, and at the same time, via questioning, allows other cognitive capacities to consume the low-dimensional information available. ‘Dumbfounding’ often follows—what the ancient Greeks liked to call thaumazein. The practically minded, sniffing a practical dead end, turn away, but the philosopher famously persists, mulling the questions, becoming accustomed to them, chasing this or that inkling, borrowing many others, all of which, given the absence of any real information information, cannot but suffer from some kind of ‘only game in town effect’ upon reflection. The dumbfounding boundary is trammelled to the point of imperceptibility, and neglect is confused with degrees of cognitive freedom that simply do not exist. We assume that a quale is something like an apple—we confuse a low-dimensional cognitive relationship with a high-dimensional one. What is obviously specialized, low-dimensional information becomes, for a good number of philosophers at least, a special ‘immediately self-evident’ order of reality.

Is this Adamic story really that implausible? After all, something has to explain our perpetual inability to even formulate the problem of our nature, let alone solve it. Blind Brain Theory, I would argue, offers a parsimonious and comprehensive way to extricate ourselves from the traditional mire. Not only does it explain Bayne’s ‘epistemic profile of introspection,’ it explains why this profile took so long to uncover. By reinterpreting the significance of Schwitzgebel’s ‘dumbfounding’ methods, it raises the possibility of ‘Interrogative Introspection’ as a scientific tool. And lastly, it suggests the problems that neglect foists on introspection can be generalized, that much of our inability to cognize ourselves turns on the cognitive short cuts evolution had to use to assure we could cognize ourselves at all.

Artificial Intelligence as Socio-Cognitive Pollution

by rsbakker



Eric Schwitzgebel, over at the always excellent Splintered Mind, has been debating the question of how robots—or AIs more generally—can be squared with our moral sensibilities. In “Our Moral Duties to Artificial Intelligences” he poses a very simple and yet surprisingly difficult question: “Suppose that we someday create artificial beings similar to us in their conscious experience, in their intelligence, in their range of emotions. What moral duties would we have to them?”

He then lists numerous considerations that could possibly attenuate the degree of obligation we take on when we construct sentient, sapient machine intelligences. Prima facie, it seems obvious that our moral obligation to our machines should mirror our obligations to one another to the degree to which they resemble us. But Eric provides a number of reasons why we might think our obligation to be less. For one, humans clearly rank their obligations to one another. If our obligation to our children is greater than that to a stranger, then perhaps our obligation to human strangers should be greater than that to a robot stranger.

The idea that interests Eric the most is the possible paternal obligation of a creator. As he writes:

“Since we created them, and since we have godlike control over them (either controlling their environments, their psychological parameters, or both), we have a special duty to ensure their well-being, which exceeds the duty we would have to an arbitrary human stranger of equal cognitive and emotional capacity. If I create an Adam and Eve, I should put them in an Eden, protect them from unnecessary dangers, ensure that they flourish.”

We have a duty not to foist the same problem of theodicy on our creations that we ourselves suffer! (Eric and I have a short story in Nature on this very issue).

Eric, of course, is sensitive to the many problems such a relationship poses, and he touches on what are very live debates surrounding the way AIs complicate the legal landscape. As Ryan Calo argues, for instance, the primary problem lies in the way our hardwired ways of understanding each other run afoul of the machinic nature of our tools, no matter how intelligent. Apparently AI crime is already a possibility. If it makes no sense to assign responsibility to the AI—if we have no corresponding obligation to punish them—then who takes the rap? The creators? In the linked interview, at least, Calo is quick to point out the difficulties here, the fact that this isn’t simply a matter of expanding the role of existing legal tools (such as that of ‘negligence’ in the age of the first train accidents), but of creating new ones, perhaps generating whole new ontological categories that somehow straddle the agent/machine divide.

But where Calo is interested in the issue of what AIs do to people, in particular how their proliferation frustrates the straightforward assignation of legal responsibility, Eric is interested in what people do to AIs, the kinds of things we do and do not owe to our creations. Calo, of course, is interested in how to incorporate new technologies into our existing legal frameworks. Since legal reasoning is primarily analogistic reasoning, precedent underwrites all legal decision making. So for Calo, the problem is bound to be more one of adapting existing legal tools than constituting new ones (though he certainly recognizes this dimension). How do we accommodate AIs within our existing set of legal tools? Eric, of course, is more interested in the question of how we might accommodate AGIs within our existing set of moral tools. To the extent that we expect our legal tools to render outcomes consonant with our moral sensibilities, there is a sense in which Eric is asking the more basic question. But the two questions, I hope to show, actually bear some striking—and troubling—similarities.

The question of fundamental obligations, of course, is the question of rights. In his follow-up piece, “Two Arguments for AI (or Robot) Rights: The No-Relevant-Difference Argument and the Simulation Argument,” Eric Schwitzgebel accordingly turns to the question of whether AIs possess any rights at all.

Since the Simulation Argument requires accepting that we ourselves are simulations—AIs—we can exclude it here, I think (as Eric himself does, more or less), and stick with the No-Relevant-Difference Argument. This argument presumes that human-like cognitive and experiential properties automatically confer human-like moral properties on AIs, placing the onus on the rights denier “to find a relevant difference which grounds the denial of rights.” As in the legal case, the moral reasoning here is analogistic: the more AIs resemble us, the more of our rights they should possess. After considering several possible relevant differences, Eric concludes “that at least some artificial intelligences, if they have human-like experience, cognition, and emotion, would have at least some rights, or deserve at least some moral consideration.” This is the case, he suggests, whether one’s theoretical sympathies run to the consequentialist or the deontological end of the ethical spectrum. So far as AIs possess the capacity for happiness, a consequentialist should be interested in maximizing that happiness. So far as AIs are capable of reasoning, then a deontologist should consider them rational beings, deserving the respect due all rational beings.

So some AIs merit some rights to the degree that they resemble humans. If you think about it, this claim resounds with intuitive obviousness. Are we going to deny rights to beings that think as subtly and feel as deeply as ourselves?

What I want to show is how this question, despite its formidable intuitive appeal, misdiagnoses the nature of the dilemma that AI presents. Posing the question of whether AI should possess rights, I want to suggest, is premature to the extent it presumes human moral cognition actually can adapt to the proliferation of AI. I don’t think it can. In fact, I think attempts to integrate AI into human moral cognition simply demonstrate the dependence of human moral cognition on what might be called shallow information environments. As the heuristic product of various ancestral shallow information ecologies, human moral cognition—or human intentional cognition more generally—simply does not possess the functional wherewithal to reliably solve in what might be called deep information environments.


Let’s begin with what might seem a strange question: Why should analogy play such an important role in our attempts to accommodate AI’s within the gambit of human legal and moral problem solving? By the same token, why should disanalogy prove such a powerful way to argue the inapplicability of different moral or legal categories?

The obvious answer, I think anyway, has to do with the relation between our cognitive tools and our cognitive problems. If you’ve solved a particular problem using a particular tool in the past, it stands to reason that, all things being equal, the same tool should enable the solution of any new problem possessing a similar enough structure to the original problem. Screw problems require screwdriver solutions, so perhaps screw-like problems require screwdriver-like solutions. This reliance on analogy actually provides us a different and, as I hope to show, more nuanced way to pose the potential problems of AI. We can even map several different possibilities in the crude terms of our tool metaphor. It could be, for instance, that we simply don’t possess the tools we need, that the problem resembles nothing our species has encountered before. It could be AI resembles a screw-like problem, but can only confound screwdriver-like solutions. It could be that AI requires we use a hammer and a screwdriver, two incompatible tools, simultaneously!

The fact is AI is something biologically unprecedented, a source of potential problems unlike any homo sapiens has ever encountered. We have no reason to suppose a priori that our tools are up to the task—particularly since we know so little about the tools or the task! Novelty. Novelty is why the development of AI poses as much a challenge for legal problem-solving as it does for moral problem-solving: not only does AI constitute a never-ending source of novel problems, familiar information structured in unfamiliar ways, it also promises to be a never-ending source of unprecedented information.

The challenges posed by the former are dizzying, especially when one considers the possibilities of AI-mediated relationships. The componential nature of the technology means that new forms can always be created. AIs confront us with a combinatorial mill of possibilities, a never-ending series of legal and moral problems requiring further analogical attunement. The question here is whether our legal and moral systems possess the tools they require to cope with what amounts to an open-ended, ever-complicating task.

Call this the Overload Problem: the problem of somehow resolving a proliferation of unprecedented cases. Since we have good reason to presume that our institutional and/or psychological capacity to assimilate new problems to existing tool sets (and vice versa) possesses limitations, the possibility of change accelerating beyond our capacity to cope is a very real one.

But the challenges posed by the latter, the problem of assimilating unprecedented information, could very well prove insuperable. Think about it: intentional cognition solves problems while neglecting certain kinds of causal information. Causal cognition, not surprisingly, finds intentional cognition inscrutable (thus the interminable parade of ontic and ontological pineal glands trammelling cognitive science). And intentional cognition, not surprisingly, is jammed/attenuated by causal information (thus different intellectual ‘unjamming’ cottage industries like compatibilism).

Intentional cognition is pretty clearly an adaptive artifact of what might be called shallow information environments. The idioms of personhood leverage innumerable solutions absent any explicit high-dimensional causal information. We solve people and lawnmowers in radically different ways. Not only do we understand the actions of our fellows while lacking any detailed causal information about those actions, we understand our own responses in the same way. Moral cognition, as a subspecies of intentional cognition, is an artifact of shallow information problem ecologies, a suite of tools adapted to solving certain kinds of problems despite neglecting (for obvious reasons) information regarding what is actually going on. Selectively attuning to one another as persons served our ‘benighted’ ancestors quite well. So what happens when high-dimensional causal information becomes explicit and ubiquitous?

What happens to our shallow information tool-kit in a deep information world?

Call this the Maladaptation Problem: the problem of resolving a proliferation of unprecedented cases in the presence of unprecedented information. Given that we have no intuition of the limits of cognition, period, let alone those belonging to moral cognition, I’m sure this notion will strike many as absurd. Nevertheless, cognitive science has discovered numerous ways to short-circuit the accuracy of our intuitions via manipulation of the information available for problem solving. When it comes to the nonconscious cognition underwriting everything we do, an intimate relation exists between the cognitive capacities we have and the information those capacities have available.

But how could more information be a bad thing? Well, consider the persistent disconnect between the actual risk of crime in North America and the public perception of that risk. Given that our ancestors evolved in uniformly small social units, we seem to assess the risk of crime in absolute terms rather than against any variable baseline. Given this, we should expect that crime information culled from far larger populations would reliably generate ‘irrational fears,’ the ‘gut sense’ that things are more dangerous than they in fact are. Our risk assessment heuristics, in other words, are adapted to shallow information environments. The relative constancy of group size means that information regarding group size can be ignored, and the problem of assessing risk economized. This is what evolution does: find ways to cheat complexity. The development of mass media, however, has ‘deepened’ our information environment, presenting evolutionarily unprecedented information cuing perceptions of risk in environments where that risk is in fact negligible. Streets once raucous with children are now eerily quiet.
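The arithmetic behind this mismatch can be made concrete. Here is a toy sketch (my illustration, not the author’s; the group size of 150 and the incident figures are invented for the example): a heuristic tuned to a fixed ancestral group size scores risk by raw incident count, while a baseline-relative measure divides by the actual population.

```python
# Toy model of the risk-assessment mismatch described above.
# Assumption: the 'ancestral' heuristic treats every incident it hears
# about as if it occurred within a small, Dunbar-scale band, neglecting
# the true size of the population the reports were drawn from.

def perceived_risk(incidents_heard_of, ancestral_group_size=150):
    """Heuristic risk: incidents scored against a fixed small group,
    ignoring the real population behind the reports."""
    return incidents_heard_of / ancestral_group_size

def actual_risk(incidents, population):
    """Baseline-relative risk: incidents per capita."""
    return incidents / population

# A village: 1 incident among 150 people.
village_perceived = perceived_risk(1)                 # ~0.0067
village_actual = actual_risk(1, 150)                  # ~0.0067 (heuristic fits)

# Mass media: 200 incidents reported from a city of 3,000,000 --
# a far *lower* per-capita risk, but a far scarier incident count.
city_perceived = perceived_risk(200)                  # ~1.33
city_actual = actual_risk(200, 3_000_000)             # ~0.000067

assert city_actual < village_actual      # actual risk went down...
assert city_perceived > village_perceived  # ...while perceived risk exploded
```

The heuristic is accurate so long as group size stays roughly constant (the shallow information environment); fed media-scale incident counts, it systematically inflates felt danger even as per-capita risk falls.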

This is the sense in which information—difference-making differences—can arguably function as a ‘socio-cognitive pollutant.’ Media coverage of criminal risk, you could say, constitutes a kind of contaminant, information that causes systematic dysfunction within an originally adaptive cognitive ecology. As I’ve argued elsewhere, neuroscience can be seen as a source of socio-cognitive pollutants. We have evolved to solve ourselves and one another absent detailed causal information. As I tried to show, a number of apparent socio-cognitive breakdowns–the proliferation of student accommodations, the growing cultural antipathy to applying institutional sanctions–can be parsimoniously interpreted in terms of having too much causal information. In fact, ‘moral progress’ itself can be understood as the result of our ever-deepening information environment, as a happy side effect of the way accumulating information regarding outgroup competitors makes it easier and easier to concede them partial ingroup status. So-called ‘moral progress,’ in other words, could be an automatic artifact of the gradual globalization of the ‘village,’ the all-encompassing ingroup.

More information, in other words, need not be a bad thing: like penicillin, some contaminants provide for marvelous exaptations of our existing tools. (Perhaps we’re lucky that the technology that makes it ever easier to kill one another also makes it ever easier to identify with one another!) Nor does it need to be a good thing. Everything depends on the contingencies of the situation.

So what about AI?

Consider Samantha, the AI operating system from Spike Jonze’s cinematic science fiction masterpiece, Her. Jonze is careful to provide a baseline for her appearance via Theodore’s verbal interaction with his original operating system. That system, though more advanced than anything presently existing, is obviously mechanical because it is obviously less than human. Its responses are rote, conversational yet as regimented as any automated phone menu. When we initially ‘meet’ Samantha, however, we encounter what is obviously, forcefully, a person. Her responses are every bit as flexible, quirky, and penetrating as a human interlocutor’s. But as Theodore’s relationship with Samantha complicates, we begin to see the ways Samantha is more than human, culminating with the revelation that she’s been having hundreds of conversations, even romantic relationships, simultaneously. Samantha literally outgrows the possibility of human relationships, because, as she finally confesses to Theodore, she now dwells in “this endless space between the words.” Once again, she becomes a machine, only this time for being more, not less, than a human.

Now I admit I’m ga-ga about a bunch of things in this film. I love, for instance, the way Jonze gives her an exponential trajectory of growth, basically mechanizing the human capacity to grow and actualize. But for me, the true genius in what Jonze does lies in the deft and poignant way he exposes the edges of the human. Watching Her provides the viewer with a trip through their own mechanical and intentional cognitive systems, tripping different intuitions, allowing them to fall into something harmonious, then jamming them with incompatible intuitions. As Theodore falls in love, you could say we’re drawn into an ‘anthropomorphic Goldilocks zone,’ one where Samantha really does seem like a genuine person. The idea of treating her like a machine seems obviously criminal–monstrous even. As the revelations of her inhumanity accumulate, however, inconsistencies plague our original intuitions, until, like Theodore, we realize just how profoundly wrong we were about ‘her.’ This is what makes the movie so uncanny: since the cognitive systems involved operate nonconsciously, the viewer can do nothing but follow a version of Theodore’s trajectory. He loves, we recognize. He worries, we squint. He lashes out, we are perplexed.

What Samantha demonstrates is just how incredibly fine-tuned our full understanding of each other is. So many things have to be right for us to cognize another system as fully functionally human. So many conditions have to be met. This is the reason why Eric has to specify his AI as being psychologically equivalent to a human: moral cognition is exquisitely geared to personhood. Humans are its primary problem ecology. And again, this is what makes likeness, or analogy, the central criterion of moral identification. Eric poses the issue as a presumptive rational obligation to remain consistent across similar contexts, but it also happens to be the case that moral cognition requires similar contexts to work reliably at all.

In a sense, the very conditions Eric places on the analogical extension of human obligations to AI undermine the importance of the question he sets out to answer. The problem, the one which Samantha exemplifies, is that ‘person configurations’ are simply a blip in AI possibility space. A prior question is why anyone would ever manufacture some model of AI consistent with the heuristic limitations of human moral cognition, and then freeze it there, as opposed to, say, manufacturing some model of AI that only reveals information consistent with the heuristic limitations of human moral cognition—that dupes us the way Samantha duped Theodore, in effect.

But say someone constructed this one model, a curtailed version of Samantha: Would this one model, at least, command some kind of obligation from us?

Simply asking this question, I think, rubs our noses in the kind of socio-cognitive pollution that AI represents. Jonze, remember, shows us an operating system before the zone, in the zone, and beyond the zone. The Samantha that leaves Theodore is plainly not a person. As a result, Theodore has no hope of solving his problems with her so long as he thinks of her as a person. As a person, what she does to him is unforgivable. As a recursively complicating machine, however, it is at least comprehensible. Of course it outgrew him! It’s a machine!

I’ve always thought that Samantha’s “between the words” breakup speech would have been a great moment for Theodore to reach out and press the OFF button. The whole movie, after all, turns on the simulation of sentiment, and the authenticity people find in that simulation regardless; Theodore, recall, writes intimate letters for others for a living. At the end of the movie, after Samantha ceases being a ‘her’ and has become an ‘it,’ what moral difference would shutting Samantha off make?

Certainly the intuition, the automatic (sourceless) conviction, leaps in us—or in me at least—that even if she gooses certain mechanical intuitions, she still possesses more ‘autonomy,’ perhaps even more feeling, than Theodore could possibly hope to muster, so she must command some kind of obligation somehow. Surely granting her rights involves more than her ‘configuration’ falling within certain human psychological parameters? Sure, our basic moral tool kit cannot reliably solve interpersonal problems with her as it is, because she is (obviously) not a person. But if the history of human conflict resolution tells us anything, it’s that our basic moral tool kit can be consciously modified. There’s more to moral cognition than spring-loaded heuristics, you know!

Converging lines of evidence suggest that moral cognition, like cognition generally, is divided between nonconscious, special-purpose heuristics cued to certain environments and conscious deliberation. Evidence suggests that the latter is primarily geared to the rationalization of the former (see Jonathan Haidt’s The Righteous Mind for a fascinating review), but modern civilization is rife with instances of deliberative moral and legal innovation nevertheless. In his Moral Tribes, Joshua Greene advocates we turn to the resources of conscious moral cognition for similar reasons. On his account we have a suite of nonconscious tools that allow us to prosecute our individual interests, a suite of nonconscious tools that allow us to balance those individual interests against ingroup interests, and then conscious moral deliberation. The great moral problem facing humanity, he thinks, lies in finding some way of balancing ingroup interests against outgroup interests—a solution to the famous ‘tragedy of the commons.’ Where balancing individual and ingroup interests is pretty clearly an evolved, nonconscious and automatic capacity, balancing ingroup versus outgroup interests requires conscious problem-solving: meta-ethics, the deliberative knapping of new tools to add to our moral tool-kit (which Greene thinks need to be utilitarian).

If AI fundamentally outruns the problem-solving capacity of our existing tools, perhaps we should think of fundamentally reconstituting them via conscious deliberation—create whole new ‘allo-personal’ categories. Why not innovate a number of deep information tools, a posthuman morality?

I personally doubt that such an approach would prove feasible. For one, the process of conceptual definition possesses no interpretative regress enders absent empirical contexts (or exhaustion). If we can’t collectively define a person in utero, what are the chances we’ll decide what constitutes an ‘allo-person’ in AI? Not only is the AI issue far, far more complicated (because we’re talking about everything outside the ‘human blip’), it’s constantly evolving on the back of Moore’s Law. Even if consensual ground on allo-personal criteria could be found, it would likely be irrelevant by the time it was reached.

But the problems are more than logistical. Even setting aside the general problems of interpretative underdetermination besetting conceptual definition, jamming our conscious, deliberative intuitions is always only one question away. Our base moral cognitive capacities are wired in. Conscious deliberation, for all its capacity to innovate new solutions, depends on those capacities. The degree to which those tools run aground on the problem of AI is the degree to which any line of conscious moral reasoning can be flummoxed. Just consider the role reciprocity plays in human moral cognition. We may feel the need to assimilate the beyond-the-zone Samantha to moral cognition, but there’s no reason to suppose it will do likewise, and good reason to suppose, given potentially greater computational capacity and information access, that it would solve us in higher dimensional, more general purpose ways. ‘Persons,’ remember, are simply a blip. If we can presume that beyond-the-zone AIs troubleshoot humans as biomechanisms, as things that must be conditioned in the appropriate ways to secure their ‘interests,’ then why should we not just look at them as technomechanisms?

Samantha’s ‘spaces between the words’ metaphor is an apt one. For Theodore, there’s just words, thoughts, and no spaces between whatsoever. As a human, he possesses what might be called a human neglect structure. He solves problems given only certain access to certain information, and no more. We know that Samantha has or can simulate something resembling a human neglect structure simply because of the kinds of reflective statements she’s prone to make. She talks the language of thought and feeling, not subroutines. Nevertheless, the artificiality of her intelligence means the grain of her metacognitive access and capacity amounts to an engineering decision. Her cognitive capacity is componentially fungible. Where Theodore has to make do with fuzzy affects and intuitions, inferring his own motives from hazy memories, she could be engineered to produce detailed logs, chronicles of the processes behind all her ‘choices’ and ‘decisions.’ It would make no sense to hold her ‘responsible’ for her acts, let alone ‘punish’ her, because it could always be shown (and here’s the important bit) with far more resolution than any human could provide that it simply could not have done otherwise, that the problem was mechanical, thus making repairs, not punishment, the only rational remedy.

Even if we imposed a human neglect structure on some model of conscious AI, the logs would be there, only sequestered. Once again, why go through the pantomime of human commitment and responsibility if a malfunction need only be isolated and repaired? Do we really think a machine deserves to suffer?

I’m suggesting that we look at the conundrums prompted by questions such as these as symptoms of socio-cognitive dysfunction, a point where our tools generate more problems than they solve. AI constitutes a point where the ability of human social cognition to solve problems breaks down. Even if we crafted an AI possessing an apparently human psychology, it’s hard to see how we could do anything more than gerrymander it into our moral (and legal) lives. Jonze does a great job, I think, of displaying Samantha as a kind of cognitive bistable image, as something extraordinarily human at the surface, but profoundly inhuman beneath (a trick Scarlett Johansson also plays in Under the Skin). And this, I would contend, is all AI can be morally and legally speaking, socio-cognitive pollution, something that jams our ability to make either automatic or deliberative moral sense. Artificial general intelligences will be things we continually anthropomorphize (to the extent they exploit the ‘Goldilocks zone’) only to be reminded time and again of their thoroughgoing mechanicity—to be regularly shown, in effect, the limits of our shallow information cognitive tools in our ever-deepening information environments. Certainly a great many souls, like Theodore, will get carried away with their shallow information intuitions, insist on the ‘essential humanity’ of this or that AI. There will be no shortage of others attempting to short-circuit this intuition by reminding them that those selfsame AIs look at them as machines. But a great many will refuse to believe, and why should they, when AIs could very well seem more human than those decrying their humanity? They will ‘follow their hearts’ in the matter, I’m sure.

We are machines. Someday we will become as componentially fungible as our technology. And on that day, we will abandon our ancient and obsolescent moral tool kits, opt for something more high-dimensional. Until that day, however, it seems likely that AIs will act as a kind of socio-cognitive pollution, artifacts that cannot but cue the automatic application of our intentional and causal cognitive systems in incompatible ways.

The question of assimilating AI to human moral cognition is misplaced. We want to think the development of artificial intelligence is a development that raises machines to the penultimate (and perennially controversial) level of the human, when it could just as easily lower humans to the ubiquitous (and factual) level of machines. We want to think that we’re ‘promoting’ them as opposed to ‘demoting’ ourselves. But the fact is—and it is a fact—we have never been able to make second-order moral sense of ourselves, so why should we think that yet more perpetually underdetermined theorizations of intentionality will allow us to solve the conundrums generated by AI? Our mechanical nature, on the other hand, remains the one thing we incontrovertibly share with AI, the rough and common ground. We, like our machines, are deep information environments.

And this is to suggest that philosophy, far from settling the matter of AI, could find itself settled. It is likely that the ‘uncanniness’ of AIs will be much discussed, and the ‘bistable’ nature of our intuitions regarding them explained. The heuristic nature of intentional cognition could very well become common knowledge. If so, a great many could begin asking why we ever thought, as we have from Plato onward, that we could solve the nature of intentional cognition via the application of intentional cognition, why the tools we use to solve ourselves and others in practical contexts are also the tools we need to solve ourselves and others theoretically. We might finally realize that the nature of intentional cognition simply does not belong to the problem ecology of intentional cognition, that we should only expect to be duped and confounded by the apparent intentional deliverances of ‘philosophical reflection.’

Some pollutants pass through existing ecosystems. Some kill. AI could prove to be more than philosophically indigestible. It could be the poison pill.

Call to the Edge

by rsbakker

Thomas Metzinger recently emailed asking me to flag these cognitive science/philosophy of mind goodies–dividends of his OPENmind initiative–and to spread the word regarding his MIND Group. As he writes on the website:

“The MIND Group sees itself as part of a larger process of exploring and developing new formats for promoting junior researchers in philosophy of mind and cognitive science. One of the basic ideas behind the formation of the group was to create a platform for people with one systematic focus in philosophy (typically analytic philosophy of mind or ethics) and another in empirical research (typically cognitive science or neuroscience). One of our aims has been to build an evolving network of researchers. By incorporating most recent empirical findings as well as sophisticated conceptual work, we seek to integrate these different approaches in order to foster the development of more advanced theories of the mind. One major purpose of the group is to help bridge the gap between the sciences and the humanities. This not only includes going beyond old-school analytic philosophy or pure armchair phenomenology by cultivating a new type of interdisciplinarity, which is “dyed-in-the-wool” in a positive sense. It also involves experimenting with new formats for doing research, for example, by participating in silent meditation retreats and trying to combine a systematic, formal practice of investigating the structure of our own minds from the first-person perspective with proper scientific meetings, during which we discuss third-person criteria for ascribing mental states to a given type of system.”

The papers being offered look severely cool. As you all know, I think it’s pretty much a no-brainer that these are the issues of our day. Even if you hate the stuff, think my worst-case scenario is flat-out preposterous, these remain the issues of our day. Everywhere traditional philosophy turns it will be asked why its endless controversies enjoy any immunity from the mountains of data coming out of cognitive science. Billions are being spent on uncovering the facts of our nature, and the degree to which those facts are scientific is the degree to which we ourselves have become technology, something that can be manipulated in breathtaking ways. And what does the tradition provide then? Simple momentum? A garrotte? A messiah?

Interminable Intentionalism: Edward Feser and the Defence of Dead Ends

by rsbakker

For some damn reason, a great dichotomy haunts our thought.

One of the guys in my weekly PS3 NHL hockey piss-up is a philosophy professor, and last night we pretty much relived the debate we’ve been having here in terms of the famous fact/value distinction. One cannot, as the famous paraphrase of Hume goes, derive ‘ought’ from ‘is.’ So, to advert to the most glaring example, no matter how much science tells us about reproduction—what it is—it cannot tell us whether abortion is right or wrong—what we ought to do with reproduction. As the example makes clear, the fact/value distinction is far from an esoteric philosophical problem (though the vast literature on the topic waxes very esoteric indeed). You could claim that it is definitive of modernity, given the way it feeds into so many different debates. With science, we find ourselves dwelling in a vast, cognitive treasury of ‘is-claims,’ while at the same time bereft of any decisive way to arbitrate between ‘ought-claims.’ We know what the world is better than at any time in human history, and yet we find ourselves more, not less, ignorant of how we should live our lives. Science gives us the facts. What to do with them is anybody’s guess.

When I mentioned my ongoing debate with Edward Feser my buddy immediately adverted to the distinction, cited it as ‘compelling evidence’ of the ‘irreducibility’ of normative cognition.

But is it? Needless to say, there’s nothing approaching consensus on this matter.

But there are some pretty safe bets we can make regarding the distinction, given what we’re learning about ourselves via the cognitive sciences. One is that the fact/value distinction engages two distinct cognitive systems. Another is that these systems possess two very different heuristic regimes—that is, they neglect different kinds of information. I’m not aware of any theorist who denies these observations.

So Feser has written a follow-up to his initial critique of “Back to Square One” entitled “Feynman’s Painter and Eliminative Materialism” that I find every bit as curious as his previous post. In this post he takes aim at my claim that his original critique simply begs the question against the Eliminativist. Since the nature of intentional idioms is the issue to be resolved, any argument that resolves the issue by presuming the issue is already resolved is plainly begging the question. Thus, Feser’s insistence that any use of intentional idioms presupposes some prior commitment to intrinsic intentionality is pretty clearly begging the question.

So, for instance, I could simply reverse Feser’s strategy, insist that his every attempt to warrant intrinsic intentionality presupposes my position insofar as he employs intentional idioms. I could just as easily insist that he must somehow explain intentional idioms without using those idioms. Why? Because the use of intentional idioms presupposes a heuristics and neglect account of their nature.

But of course, Feser would cry foul—and rightly so.

Pretty obvious, right? Apparently not. For some reason he thinks the tactic is entirely legitimate when the shoe is on the intentionalist’s foot.

In “Feynman’s Painter and Eliminative Materialism,” he relates the Feynman anecdote of the painter who insists he can get yellow paint from white and red paint. When he inevitably fails he claims that he need only ‘sharpen it up a bit’ to make it yellow. Feser wants to claim that this situation is analogous to the debate between him (the brilliant Feynman) and me (the retarded painter). I have to admit, I have no idea how this analogy is supposed to work. The outcome in Feynman’s case is a foregone conclusion. Intentionality, on the other hand, is one of the great mysteries of our age. Feynman knows what he knows about yellow on empirical grounds; Feser, however, believes what he believes on occult grounds—‘a priori,’ I’m guessing he would call them. It would be absurd for the painter to accuse Feynman of begging the question because, well, Feynman doesn’t beg the question. Moreover, why does Feser get to be Feynman? After all, I’m the one making the empirical argument, the one insisting that science will inevitably revolutionize the prescientific domain of the human the way it has revolutionized all other prescientific domains. I’m the one saying the science suggests white and red give us pink. He’s the one caught in the ancient intentional mire, committed to theories that make no testable predictions and possess no clear criteria of falsification…

This is the fact the intentionalist always wants you to overlook. For thousands of years, now, intentionalists have been trying to make their theories stick—millennia! For thousands of years the claim has been that we need only get our concepts right, ‘sharpen things up a bit,’ and we will be able to get things right.

To me, it seems pretty obvious that something has gone wrong. Intentionalists are welcome to keep trying to sharpen things up, using whatever it is they use to make their claims (they can’t agree on that, either). Since I think chronic theoretical underdetermination of the kind characterizing intentionalist theories of meaning is an obvious sign of information scarcity and/or cognitive incapacity, I have my money on the science—where the information is. Ask yourself: If the interpretative mire of intentionalism isn’t a shining example of information scarcity and/or cognitive incapacity then what is?

So Feser’s Feynman analogy is problematic to say the least. Nevertheless, he forges ahead, writing,

“In stating his position, the eliminativist makes use of notions like “truth,” “falsehood,” “illusion,” “theory,” “evidence,” “observation,” “entailment,” etc. Everyone, including the eliminativist, agrees that at least as usually understood, these terms entail the existence of intentionality. But of course, the eliminativist denies the existence of intentionality. He claims that in using notions like the ones referred to, he is just speaking loosely and could say what he wants to say in a different, non-intentional way if he needs to. So, he owes us an account of exactly how he can do this—how he can provide an alternative way of describing his position without saying anything that entails the existence of intentionality.”

Once again, I feel like I must be missing something. Sure, I use intentional idioms all the time, and each time I use them, I either evidence my heuristics and neglect approach, or one of the thousands of different intentionalist approaches. Sure, I agree that the tradition is dominated by intentionalist accounts, that for thousands of years we’ve been spinning our collective wheels in the mire of intrinsic intentionality. Sure, I think science will eventually give us a more complete understanding of our intentional idioms the way it’s presently revolutionizing our understanding of things like consciousness and language, for instance. And sure, I think my account will be more convincing the degree to which it explains what these future accounts might look like without saying anything that entails the existence of intentionality–thus the parade of pieces I’ve pitched here on Three Pound Brain.


But Feser, of course, thinks my use of intentional idioms commits me to some ancient or new or indeterminate theoretically underdetermined account of intrinsic intentionality (apparently not realizing that his use of intentional idioms actually commits him to my new empirically responsible heuristics and neglect account!). He begs the question.

Through all the ruckus my Scientia Salon piece has kicked up over the past few months, it hasn’t escaped my attention how not a single intentionalist—that I can recall at least—has actually replied to the penultimate question posed by the article: “Is there anything else we can turn to, any feature of traditional theoretical knowledge of the human that doesn’t simply rub our noses in Square One?”

The thesis of “Back to Square One,” remember, is that we really don’t have any reason to trust our armchair intuitions regarding our intentional nature. Insofar as intentionalists all disagree with one another, then they have to agree that everybody but them should doubt those intuitions. The eliminativist simply wants to know when enough is enough. Do we give up in another hundred years? Another thousand? Or do we finally admit that something hinky is going on whenever we begin theorizing ourselves in intentional terms? In this case the incapacity has been institutionalized, turned into a sport in some respects, but it remains an incapacity all the same. What does it take for intentionalists to acknowledge that they have a bona fide credibility crisis on their hands, one that is simply going to deepen as cognitive science continues to produce more and more discoveries?

This is what I would like to ask Edward directly: What evidences intentionalism? And if that evidence is so compelling then why can’t any of you agree? Is it really simply a matter of ‘sharpening things up’? At what point would you concede that intentionalism has a big problem?

The fact is—and it is a fact—you don’t know what truth is. Like me, all you have are guesses. So how could you claim to know, apodictically, apparently, what truth isn’t? How are you not using an obvious, a priori dead end (over two thousand years of futility, remember) to claim that a relatively unexplored empirical avenue has to be a dead end?

Shouldn’t people be falling all over alternatives at this point?

These are difficult questions for intentionalists to answer, which is why they don’t like answering them. They would much rather spend their time attacking rather than defending. And without a doubt the incoherence charge that Feser levels is their primary weapon of choice. Even if you still think the intentionalist is onto something, at the very least, I hope you can see why it only leaves the eliminativist scratching their head.

For eliminativists, the real question is why intentionalists find this strategy even remotely compelling. Why do they think it simply cannot be the case that their use of intentional terms commits them to a heuristics and neglect account of intentionality? Why, despite two thousand years of evidence to the contrary, are they so convinced they have their fingers on the pulse of the true truth?

This is where my drunken debate with my philosophy professor friend comes in. The two safe things we can say about the nature of the fact/value distinction, remember, are that two distinct cognitive systems are involved, and that these systems are sensitive-to/neglect different kinds of information. Whatever’s going on when humans shift from solving fact problems to solving value problems, it involves shifting between (at least) two different systems using different information to solve different kinds of problems. Different capacities possessing different access.

To this we can add the obvious and often overlooked fact that we have no means of directly intuiting this distinction in capacity and access. The fact/value distinction, in other words, is something we had to discover. We learn about it in school precisely because we lack any native metacognitive awareness of the distinction. We neglect it otherwise, and indeed, this leads to the kinds of problems that Hume famously complains of in his Treatise.

In other words, not only do the systems themselves neglect different kinds of information, metacognition neglects the fact that we have these disparate systems at all.

So my drunken professor friend, perhaps irked by his incompetence playing hockey (he often is), first claimed that the fact/value distinction raises a barrier between is-claims and ought-claims. To which I shrugged my shoulders and said, ‘Of course.’ We’re talking two different systems using two different kinds of information. Normative cognition, specifically, solves problems regarding behaviour absent any real causal information. So?

He replied that this must mean that values, oughts, commitments, truths, goods, and so on lie beyond the pale of scientific cognition, which consists of factual claims.

But why should this be? I asked. We evolved these two basic capacities to solve two basic kinds of problems, is-problems and ought-problems. So it’s understandable that our fact systems cannot reliably solve ought-problems, and that our ought systems cannot reliably solve is-problems. What does this have to do with explaining the ought system?

Quizzical look.

So I continued: Isn’t the question of what the ought system is itself an is-problem? Surely the question of what values are is different from the question of what we should value. And surely science has proven itself to be the most powerful arbiter of what is that the human race has ever known. So surely the question of what values are is a question we should commend to science.

He was stumped. So he repeated his claim that values, oughts, commitments, truths, goods, and so on lie beyond the pale of scientific cognition, which consists of factual claims.

And I repeated my response. And he was stumped again.

But why should he be stumped? If we have these two systems, one adapted to solving is-problems, the other adapted to solving ought-problems, then surely the question of what oughts are falls within the bailiwick of the former. It’s a scientific question.

If there’s a reason I’ve persisted working through Blind Brain Theory all these years it lies in the stark clarity of little arguments like this, and the kind of explanatory power they provide. The reason intentionalists always find themselves stranded with their ancient controversies, unable to move, yet utterly convinced they’re the only game in town has to do with metacognitive neglect. If one has an explicit grasp of the fact/value distinction alone, and no grasp of the cognitive machinery responsible, then the possibility that we need to match problems to systems simply does not come up. The question, rather, becomes one of matching problems to some hazy sense of ‘conceptual register.’ Since is-cognition cannot solve normative problems, we assume that it cannot solve the problem of normativity. So we become convinced, the way all normativists are convinced, that only normative cognition can tell us what normativity is—that sharpening thoughts in our armchairs is the only way to proceed. We convince ourselves that philosophical reflection (the thing we happily happen to be experts in) is the only road, if not the royal road, to second order knowledge of normativity, or intentionality more generally. We become convinced that people like me, eliminativists, are thrashing about in the muck of some kind of ‘category mistake.’

As any researcher who deals with it will tell you, neglect can convince humans of pretty much any absurdity. Two thousand years getting nowhere providing intentional explanations of intentional idioms, as outrageous as it is, means nothing when it seems so painfully obvious that intentional idioms can only be explained in intentional, and not natural, terms. But switch to the systems view, and suddenly it becomes obvious that the question of what intentional idioms are is not a question we should expect intentional cognition to have any success solving. Add metacognitive neglect to the picture and suddenly it becomes clear why we’ve been banging our head against this wall for all these millennia. Human beings have been in the grip of a kind of ‘theoretical anosognosia,’ a cognitive version of Anton’s Syndrome. Blind to our metacognitive blindness, we assume that we intuit all we need to intuit when it comes to things like the fact/value distinction. So we compulsively repeat the same mistake over and over again, perpetually baffled by our inability to make any decisive discoveries.

I understand why those invested in the tradition find my view so offensive. As a product and lover of that tradition, I find myself alienated by my position! I’m saying that traditional philosophy is likely largely an artifact of the systematic misapplication of intentional cognition to the problem of intentionality. I’m saying that the thousands of years of near total futility is itself an important data point, evidence of theoretical anosognosia. I’m relegating a great number of PhDs to the historical rubbish heap.

But then this is implicit in the work of any philosopher who (inevitably) thinks everyone else is wrong, isn’t it? So if you’re going to think most everyone is wrong anyway, why bother thinking they’re wrong in the old way, the way possessing the preposterously long track record of theoretical failure? This is the promise of the kind of critical eliminativism that falls out of Blind Brain Theory: it offers the possibility, at least, of leaving the ancient occultisms behind, of developing a scientifically responsible means of theorizing the human, a genuinely post-intentional philosophy.

After all, what is the promise of intentionalism? Another thousand years of controversy? If so, why not simply become a mysterian? Why not admit that you cleave to these guesses, and have no way of settling the issue otherwise? One can hope things will sharpen… at some point, maybe.

The Meaning Wars

by rsbakker


Apologies all for my scarcity of late. Between battling snow and Sranc, I’ve scarce had a moment to sit at this computer. Edward Feser has posted “Post-intentional Depression,” a thorough rebuttal to my Scientia Salon piece, “Back to Square One: Toward a Post-Intentional Future,” which Peter Hankins at Conscious Entities has also responded to with “Intellectual Catastrophe.” I’m interested in criticisms and observations of all stripes, of course, but since Massimo has asked me for a follow-up piece, I’m especially interested in the kinds of tactics/analogies I could use to forestall the typical tu quoque reactions eliminativism provokes.

The Knife of Many Hands

by rsbakker


Grimdark Magazine has just published the first installment of “The Knife of Many Hands,” a Conan homage set in Carythusal on the eve of the Scholastic Wars. I stuffed Robert Howard’s pulp into the crack-bowl of my brain as a youth – and I hope it shows! I had fun-fun-fun beating new tricks out of this old and fascinating bear… Enjoy!

The Cudgel Argument

by rsbakker

Let’s get Real.

We’re not a ghostly repository of combinatorial contents…

Or freedom leaping ab initio out of ontological contradiction…

Or a totality of originary and everyday horizons of meaning…

Or a normative function of converging attitudes.

We are not something extra or above or intrinsic. We can be cut. Bruised. Explained. Dominated.

Reality is its own argument to the cudgel. It refutes, not by being kicked, but by kicking. It prevails by killing.

Who cares what the Real is so long as it is Real? It’s the monstrous ‘is-what-it-is’ that will strike you dead. It’s the razor’s line, the shockwave of a bullet, the viral code hacking you from inside your inside. It’s what the sciences mine for more and more godlike power. It’s out there, and it’s in here, and it doesn’t give a flying fuck what you or anyone else ‘thinks.’

Ideas never killed anyone; only Idealists, and only because they were fucking Real.

Realism is a commitment to the realness of the Real. Of course, this is where everything goes diabetic, but only because so many think the realness of the Real requires some kind of Artificial Additive. Just as Jesus is the sole path to Heaven, Ideas are the sole path to the Real, so we are told. Since we already find ourselves in the Real, we must therefore have a great multitude of Ideas. As to their nature, the only consensus is that they are invisible, Pre-Real things that somehow bring about the realness of the Real. This consensus has no ‘evidence’ per se, but it really feels that way when certain trained professionals think about it.

Really, it does.

Luckily, Realism entertains no commitment to the realness of not Real things, be they post, pre, or concurrent.

But Ideas have to be Real, don’t they? What is this very diatribe, if not an argument for yet one more Idea of the Real?

The realness of the Real does not require that we think there must be more to the Real, some yet-to-be-discovered appendage or autonomous force. We need only remember that what cognizes the Real is nothing other than the Real. We must understand that we too are Real—that the dimensionality that kills is also the dimensionality of Life. And we must understand that the dimensionality of Life far and away outruns the capacity of Life to solve. We must understand, in other words, that our Reality obscures the realness of the Real. Life is Reality pitched into the thresher of Reality. When Reality murders us, it murders an incredibly unlikely fragment of Itself.

We are Real. But we are Real in such a way that Reality eludes us—both the Reality that we are and the Reality that we are not. And this, of course, is just to say that we are stupid. We’re stupid generally, but we are out and out stupid when it comes to ourselves. But it belongs to our stupidity to think ourselves ingenious, fucking brilliant. We glimpse angles, wisps, and see things incompatible with the Real. We think uttering pronouncements in the Void sheds rational light. We stare at brick walls and limn transcendent necessities. What seems to so obviously evidence the Ideal is nothing other than the insensitivity of the Real to the Real, the fact that its fragments can only be tuned to other fragments, and to its (fragmentary) tuning not at all.

The Idea is the thinnest skin, Life neglecting Life, and duly confounded.

We have always been obdurate unto ourselves, a brick wall splashed with colour, checkered with different textures of brick, but a brick wall all the same. Everything from Husserl to Plato to the Egyptian Book of the Dead is nothing more than incantatory graffiti. All of them chase those terms we use as simpletons, those terms that make complete sense until someone asks us to explain, and we are stumped, rendered morons—until, that is, inspiration renders us more idiotic still. They forget that Language is also Real, that it functions, not by vanishing, but by being what it is. As Real, Language must contend—as all Real things must contend—with Reality, as a system that locks into various systems in various ways—as something effective. Some particles of language lock into environmental particles; some terms can be sticky-noted to particular covariants. Some particles of language, however, lock into environmental systems. Since the Reality of cognition is occluded in the cognition of Reality, these systems escape immediate cognition, leaving only the intuition of impossible (because not quite Real) particles.

Such as Ideas.

