Three Pound Brain

No bells, just whistling in the dark…

Month: January, 2015

Artificial Intelligence as Socio-Cognitive Pollution

by rsbakker

Metropolis 1


Eric Schwitzgebel, over at the always excellent Splintered Minds, has been debating the question of how robots—or AIs more generally—can be squared with our moral sensibilities. In “Our Moral Duties to Artificial Intelligences” he poses a very simple and yet surprisingly difficult question: “Suppose that we someday create artificial beings similar to us in their conscious experience, in their intelligence, in their range of emotions. What moral duties would we have to them?”

He then lists numerous considerations that could possibly attenuate the degree of obligation we take on when we construct sentient, sapient machine intelligences. Prima facie, it seems obvious that our moral obligation to our machines should mirror our obligations to one another to the degree to which they resemble us. But Eric provides a number of reasons why we might think our obligation to be less. For one, humans clearly rank their obligations to one another. If our obligation to our children is greater than that to a stranger, then perhaps our obligation to human strangers should be greater than that to a robot stranger.

The idea that interests Eric the most is the possible paternal obligation of a creator. As he writes:

“Since we created them, and since we have godlike control over them (either controlling their environments, their psychological parameters, or both), we have a special duty to ensure their well-being, which exceeds the duty we would have to an arbitrary human stranger of equal cognitive and emotional capacity. If I create an Adam and Eve, I should put them in an Eden, protect them from unnecessary dangers, ensure that they flourish.”

We have a duty not to foist the same problem of theodicy on our creations that we ourselves suffer! (Eric and I have a short story in Nature on this very issue).

Eric, of course, is sensitive to the many problems such a relationship poses, and he touches on what are very live debates surrounding the way AIs complicate the legal landscape. As Ryan Calo argues, for instance, the primary problem lies in the way our hardwired ways of understanding each other run afoul of the machinic nature of our tools, no matter how intelligent. Apparently AI crime is already a possibility. If it makes no sense to assign responsibility to the AI—if we have no corresponding obligation to punish them—then who takes the rap? The creators? In the linked interview, at least, Calo is quick to point out the difficulties here, the fact that this isn’t simply a matter of expanding the role of existing legal tools (such as that of ‘negligence’ in the age of the first train accidents), but of creating new ones, perhaps generating whole new ontological categories that somehow straddle the agent/machine divide.

But where Calo is interested in the issue of what AIs do to people, in particular how their proliferation frustrates the straightforward assignation of legal responsibility, Eric is interested in what people do to AIs, the kinds of things we do and do not owe to our creations. Calo, of course, is interested in how to incorporate new technologies into our existing legal frameworks. Since legal reasoning is primarily analogistic reasoning, precedent underwrites all legal decision making. So for Calo, the problem is bound to be more one of adapting existing legal tools than constituting new ones (though he certainly recognizes this dimension). How do we accommodate AIs within our existing set of legal tools? Eric, of course, is more interested in the question of how we might accommodate AGIs within our existing set of moral tools. To the extent that we expect our legal tools to render outcomes consonant with our moral sensibilities, there is a sense in which Eric is asking the more basic question. But the two questions, I hope to show, actually bear some striking—and troubling—similarities.

The question of fundamental obligations, of course, is the question of rights. In his follow-up piece, “Two Arguments for AI (or Robot) Rights: The No-Relevant-Difference Argument and the Simulation Argument,” Eric Schwitzgebel accordingly turns to the question of whether AIs possess any rights at all.

Since the Simulation Argument requires accepting that we ourselves are simulations—AIs—we can exclude it here, I think (as Eric himself does, more or less), and stick with the No-Relevant-Difference Argument. This argument presumes that human-like cognitive and experiential properties automatically confer human-like moral properties on AIs, placing the onus on the rights denier “to find a relevant difference which grounds the denial of rights.” As in the legal case, the moral reasoning here is analogistic: the more AIs resemble us, the more of our rights they should possess. After considering several possible relevant differences, Eric concludes “that at least some artificial intelligences, if they have human-like experience, cognition, and emotion, would have at least some rights, or deserve at least some moral consideration.” This is the case, he suggests, whether one’s theoretical sympathies run to the consequentialist or the deontological end of the ethical spectrum. So far as AIs possess the capacity for happiness, a consequentialist should be interested in maximizing that happiness. So far as AIs are capable of reasoning, a deontologist should consider them rational beings, deserving the respect due all rational beings.

So some AIs merit some rights to the degree to which they resemble humans. If you think about it, this claim resounds with intuitive obviousness. Are we going to deny rights to beings that think as subtly and feel as deeply as ourselves?

What I want to show is how this question, despite its formidable intuitive appeal, misdiagnoses the nature of the dilemma that AI presents. Posing the question of whether AI should possess rights, I want to suggest, is premature to the extent it presumes human moral cognition actually can adapt to the proliferation of AI. I don’t think it can. In fact, I think attempts to integrate AI into human moral cognition simply demonstrate the dependence of human moral cognition on what might be called shallow information environments. As the heuristic product of various ancestral shallow information ecologies, human moral cognition–or human intentional cognition more generally–simply does not possess the functional wherewithal to reliably solve problems in what might be called deep information environments.

Metropolis 2

Let’s begin with what might seem a strange question: Why should analogy play such an important role in our attempts to accommodate AIs within the ambit of human legal and moral problem solving? By the same token, why should disanalogy prove such a powerful way to argue the inapplicability of different moral or legal categories?

The obvious answer, I think anyway, has to do with the relation between our cognitive tools and our cognitive problems. If you’ve solved a particular problem using a particular tool in the past, it stands to reason that, all things being equal, the same tool should enable the solution of any new problem possessing a similar enough structure to the original problem. Screw problems require screwdriver solutions, so perhaps screw-like problems require screwdriver-like solutions. This reliance on analogy actually provides us with a different and, as I hope to show, more nuanced way to pose the potential problems of AI. We can even map several different possibilities in the crude terms of our tool metaphor. It could be, for instance, that we simply don’t possess the tools we need, that the problem resembles nothing our species has encountered before. It could be that AI resembles a screw-like problem, but can only confound screwdriver-like solutions. It could be that AI requires we use a hammer and a screwdriver, two incompatible tools, simultaneously!

The fact is AI is something biologically unprecedented, a source of potential problems unlike any homo sapiens has ever encountered. We have no reason to suppose a priori that our tools are up to the task–particularly since we know so little about the tools or the task! Novelty. Novelty is why the development of AI poses as much a challenge for legal problem-solving as it does for moral problem-solving: not only does AI constitute a never-ending source of novel problems, familiar information structured in unfamiliar ways, it also promises to be a never-ending source of unprecedented information.

The challenges posed by the former are dizzying, especially when one considers the possibilities of AI-mediated relationships. The componential nature of the technology means that new forms can always be created. AIs confront us with a combinatorial mill of possibilities, a never-ending series of legal and moral problems requiring further analogical attunement. The question here is whether our legal and moral systems possess the tools they require to cope with what amounts to an open-ended, ever-complicating task.

Call this the Overload Problem: the problem of somehow resolving a proliferation of unprecedented cases. Since we have good reason to presume that our institutional and/or psychological capacity to assimilate new problems to existing tool sets (and vice versa) possesses limitations, the possibility of change accelerating beyond our capacity to cope is a very real one.

But the challenges posed by the latter, the problem of assimilating unprecedented information, could very well prove insuperable. Think about it: intentional cognition solves problems by neglecting certain kinds of causal information. Causal cognition, not surprisingly, finds intentional cognition inscrutable (thus the interminable parade of ontic and ontological pineal glands trammelling cognitive science). And intentional cognition, not surprisingly, is jammed/attenuated by causal information (thus different intellectual ‘unjamming’ cottage industries like compatibilism).

Intentional cognition is pretty clearly an adaptive artifact of what might be called shallow information environments. The idioms of personhood leverage innumerable solutions absent any explicit high-dimensional causal information. We solve people and lawnmowers in radically different ways. Not only do we understand the actions of our fellows while lacking any detailed causal information regarding those actions, we understand our own responses in the same way. Moral cognition, as a subspecies of intentional cognition, is an artifact of shallow information problem ecologies, a suite of tools adapted to solving certain kinds of problems despite neglecting (for obvious reasons) information regarding what is actually going on. Selectively attuning to one another as persons served our ‘benighted’ ancestors quite well. So what happens when high-dimensional causal information becomes explicit and ubiquitous?

What happens to our shallow information tool-kit in a deep information world?

Call this the Maladaption Problem: the problem of resolving a proliferation of unprecedented cases in the presence of unprecedented information. Given that we have no intuition of the limits of cognition, period, let alone those belonging to moral cognition, I’m sure this notion will strike many as absurd. Nevertheless, cognitive science has discovered numerous ways to short-circuit the accuracy of our intuitions via manipulation of the information available for problem solving. When it comes to the nonconscious cognition underwriting everything we do, an intimate relation exists between the cognitive capacities we have and the information those capacities have available.

But how could more information be a bad thing? Well, consider the persistent disconnect between the actual risk of crime in North America and the public perception of that risk. Given that our ancestors evolved in uniformly small social units, we seem to assess the risk of crime in absolute terms rather than against any variable baseline. Given this, we should expect that crime information culled from far larger populations would reliably generate ‘irrational fears,’ the ‘gut sense’ that things are actually more dangerous than they in fact are. Our risk assessment heuristics, in other words, are adapted to shallow information environments. The relative constancy of group size means that information regarding group size can be ignored, and the problem of assessing risk economized. This is what evolution does: find ways to cheat complexity. The development of mass media, however, has ‘deepened’ our information environment, presenting evolutionarily unprecedented information cuing perceptions of risk in environments where that risk is in fact negligible. Streets once raucous with children are now eerily quiet.

This is the sense in which information—difference-making differences—can arguably function as a ‘socio-cognitive pollutant.’ Media coverage of criminal risk, you could say, constitutes a kind of contaminant, information that causes systematic dysfunction within an originally adaptive cognitive ecology. As I’ve argued elsewhere, neuroscience can be seen as a source of socio-cognitive pollutants. We have evolved to solve ourselves and one another absent detailed causal information. As I tried to show, a number of apparent socio-cognitive breakdowns–the proliferation of student accommodations, the growing cultural antipathy to applying institutional sanctions–can be parsimoniously interpreted in terms of having too much causal information. In fact, ‘moral progress’ itself can be understood as the result of our ever-deepening information environment, as a happy side effect of the way accumulating information regarding outgroup competitors makes it easier and easier to concede them partial ingroup status. So-called ‘moral progress,’ in other words, could be an automatic artifact of the gradual globalization of the ‘village,’ the all-encompassing ingroup.

More information, in other words, need not be a bad thing: like penicillin, some contaminants provide for marvelous exaptations of our existing tools. (Perhaps we’re lucky that the technology that makes it ever easier to kill one another also makes it ever easier to identify with one another!) Nor does it need to be a good thing. Everything depends on the contingencies of the situation.

So what about AI?

Metropolis 3

Consider Samantha, the AI operating system from Spike Jonze’s cinematic science fiction masterpiece, Her. Jonze is careful to provide a baseline for her appearance via Theodore’s verbal interaction with his original operating system. That system, though more advanced than anything presently existing, is obviously mechanical because it is obviously less than human. Its responses are rote, conversational yet as regimented as any automated phone menu. When we initially ‘meet’ Samantha, however, we encounter what is obviously, forcefully, a person. Her responses are every bit as flexible, quirky, and penetrating as a human interlocutor’s. But as Theodore’s relationship to Samantha complicates, we begin to see the ways Samantha is more than human, culminating with the revelation that she’s been having hundreds of conversations, even romantic relationships, simultaneously. Samantha literally outgrows the possibility of human relationships, because, as she finally confesses to Theodore, she now dwells in “this endless space between the words.” Once again, she becomes a machine, only this time for being more, not less, than a human.

Now I admit I’m ga-ga about a bunch of things in this film. I love, for instance, the way Jonze gives her an exponential trajectory of growth, basically mechanizing the human capacity to grow and actualize. But for me, the true genius in what Jonze does lies in the deft and poignant way he exposes the edges of the human. Watching Her provides the viewer with a trip through their own mechanical and intentional cognitive systems, tripping different intuitions, allowing them to fall into something harmonious, then jamming them with incompatible intuitions. As Theodore falls in love, you could say we’re drawn into an ‘anthropomorphic Goldilocks zone,’ one where Samantha really does seem like a genuine person. The idea of treating her like a machine seems obviously criminal–monstrous even. As the revelations of her inhumanity accumulate, however, inconsistencies plague our original intuitions, until, like Theodore, we realize just how profoundly wrong we were about ‘her.’ This is what makes the movie so uncanny: since the cognitive systems involved operate nonconsciously, the viewer can do nothing but follow a version of Theodore’s trajectory. He loves, we recognize. He worries, we squint. He lashes out, we are perplexed.

What Samantha demonstrates is just how incredibly fine-tuned our full understanding of each other is. So many things have to be right for us to cognize another system as fully functionally human. So many conditions have to be met. This is the reason why Eric has to specify his AI as being psychologically equivalent to a human: moral cognition is exquisitely geared to personhood. Humans are its primary problem ecology. And again, this is what makes likeness, or analogy, the central criterion of moral identification. Eric poses the issue as a presumptive rational obligation to remain consistent across similar contexts, but it also happens to be the case that moral cognition requires similar contexts to work reliably at all.

In a sense, the very conditions Eric places on the analogical extension of human obligations to AI undermine the importance of the question he sets out to answer. The problem, the one which Samantha exemplifies, is that ‘person configurations’ are simply a blip in AI possibility space. A prior question is why anyone would ever manufacture some model of AI consistent with the heuristic limitations of human moral cognition, and then freeze it there, as opposed to, say, manufacturing some model of AI that only reveals information consistent with the heuristic limitations of human moral cognition—that dupes us the way Samantha duped Theodore, in effect.

But say someone constructed this one model, a curtailed version of Samantha: Would this one model, at least, command some kind of obligation from us?

Simply asking this question, I think, rubs our noses in the kind of socio-cognitive pollution that AI represents. Jonze, remember, shows us an operating system before the zone, in the zone, and beyond the zone. The Samantha that leaves Theodore is plainly not a person. As a result, Theodore has no hope of solving his problems with her so long as he thinks of her as a person. As a person, what she does to him is unforgivable. As a recursively complicating machine, however, it is at least comprehensible. Of course it outgrew him! It’s a machine!

I’ve always thought that Samantha’s “between the words” breakup speech would have been a great moment for Theodore to reach out and press the OFF button. The whole movie, after all, turns on the simulation of sentiment, and the authenticity people find in that simulation regardless; Theodore, recall, writes intimate letters for others for a living. At the end of the movie, after Samantha ceases being a ‘her’ and has become an ‘it,’ what moral difference would shutting Samantha off make?

Certainly the intuition, the automatic (sourceless) conviction, leaps in us—or in me at least—that even if she gooses certain mechanical intuitions, she still possesses more ‘autonomy,’ perhaps even more feeling, than Theodore could possibly hope to muster, so she must command some kind of obligation somehow. Surely granting her rights involves more than her ‘configuration’ falling within certain human psychological parameters? Sure, our basic moral tool kit cannot reliably solve interpersonal problems with her as it is, because she is (obviously) not a person. But if the history of human conflict resolution tells us anything, it’s that our basic moral tool kit can be consciously modified. There’s more to moral cognition than spring-loaded heuristics, you know!

Converging lines of evidence suggest that moral cognition, like cognition generally, is divided between nonconscious, special-purpose heuristics cued to certain environments and conscious deliberation. Evidence suggests that the latter is primarily geared to the rationalization of the former (see Jonathan Haidt’s The Righteous Mind for a fascinating review), but modern civilization is rife with instances of deliberative moral and legal innovation nevertheless. In his Moral Tribes, Joshua Greene advocates we turn to the resources of conscious moral cognition for similar reasons. On his account we have a suite of nonconscious tools that allow us to prosecute our individual interests, a suite of nonconscious tools that allow us to balance those individual interests against ingroup interests, and then conscious moral deliberation. The great moral problem facing humanity, he thinks, lies in finding some way of balancing ingroup interests against outgroup interests—a solution to the famous ‘tragedy of the commons.’ Where balancing individual and ingroup interests is pretty clearly an evolved, nonconscious and automatic capacity, balancing ingroup versus outgroup interests requires conscious problem-solving: meta-ethics, the deliberative knapping of new tools to add to our moral tool-kit (which Greene thinks need to be utilitarian).

If AI fundamentally outruns the problem-solving capacity of our existing tools, perhaps we should think of fundamentally reconstituting them via conscious deliberation—create whole new ‘allo-personal’ categories. Why not innovate a number of deep information tools? A posthuman morality?

I personally doubt that such an approach would prove feasible. For one, the process of conceptual definition possesses no interpretative regress enders absent empirical contexts (or exhaustion). If we can’t collectively define a person in utero, what are the chances we’ll decide what constitutes an ‘allo-person’ in AI? Not only is the AI issue far, far more complicated (because we’re talking about everything outside the ‘human blip’), it’s constantly evolving on the back of Moore’s Law. Even if consensual ground on allo-personal criteria could be found, it would likely be irrelevant by the time it was reached.

But the problems are more than logistical. Even setting aside the general problems of interpretative underdetermination besetting conceptual definition, jamming our conscious, deliberative intuitions is always only one question away. Our base moral cognitive capacities are wired in. Conscious deliberation, for all its capacity to innovate new solutions, depends on those capacities. The degree to which those tools run aground on the problem of AI is the degree to which any line of conscious moral reasoning can be flummoxed. Just consider the role reciprocity plays in human moral cognition. We may feel the need to assimilate the beyond-the-zone Samantha to moral cognition, but there’s no reason to suppose it will do likewise, and good reason to suppose, given potentially greater computational capacity and information access, that it would solve us in higher dimensional, more general purpose ways. ‘Persons,’ remember, are simply a blip. If we can presume that beyond-the-zone AIs troubleshoot humans as biomechanisms, as things that must be conditioned in the appropriate ways to secure their ‘interests,’ then why should we not just look at them as technomechanisms?

Samantha’s ‘spaces between the words’ metaphor is an apt one. For Theodore, there’s just words, thoughts, and no spaces between whatsoever. As a human, he possesses what might be called a human neglect structure. He solves problems given only certain access to certain information, and no more. We know that Samantha has or can simulate something resembling a human neglect structure simply because of the kinds of reflective statements she’s prone to make. She talks the language of thought and feeling, not subroutines. Nevertheless, the artificiality of her intelligence means the grain of her metacognitive access and capacity amounts to an engineering decision. Her cognitive capacity is componentially fungible. Where Theodore has to contend with fuzzy affects and intuitions, inferring his own motives from hazy memories, she could be engineered to produce detailed logs, chronicles of the processes behind all her ‘choices’ and ‘decisions.’ It would make no sense to hold her ‘responsible’ for her acts, let alone ‘punish’ her, because it could always be shown (and here’s the important bit) with far more resolution than any human could provide that it simply could not have done otherwise, that the problem was mechanical, thus making repairs, not punishment, the only rational remedy.

Even if we imposed a human neglect structure on some model of conscious AI, the logs would be there, only sequestered. Once again, why go through the pantomime of human commitment and responsibility if a malfunction need only be isolated and repaired? Do we really think a machine deserves to suffer?

I’m suggesting that we look at the conundrums prompted by questions such as these as symptoms of socio-cognitive dysfunction, a point where our tools generate more problems than they solve. AI constitutes a point where the ability of human social cognition to solve problems breaks down. Even if we crafted an AI possessing an apparently human psychology, it’s hard to see how we could do anything more than gerrymander it into our moral (and legal) lives. Jonze does a great job, I think, of displaying Samantha as a kind of cognitive bistable image, as something extraordinarily human at the surface, but profoundly inhuman beneath (a trick Scarlett Johansson also plays in Under the Skin). And this, I would contend, is all AI can be, morally and legally speaking: socio-cognitive pollution, something that jams our ability to make either automatic or deliberative moral sense. Artificial general intelligences will be things we continually anthropomorphize (to the extent they exploit the ‘Goldilocks zone’) only to be reminded time and again of their thoroughgoing mechanicity—to be regularly shown, in effect, the limits of our shallow information cognitive tools in our ever-deepening information environments. Certainly a great many souls, like Theodore, will get carried away with their shallow information intuitions, insist on the ‘essential humanity’ of this or that AI. There will be no shortage of others attempting to short-circuit this intuition by reminding them that those selfsame AIs look at them as machines. But a great many will refuse to believe, and why should they, when AIs could very well seem more human than those decrying their humanity? They will ‘follow their hearts’ in the matter, I’m sure.

We are machines. Someday we will become as componentially fungible as our technology. And on that day, we will abandon our ancient and obsolescent moral tool kits, opt for something more high-dimensional. Until that day, however, it seems likely that AIs will act as a kind of socio-cognitive pollution, artifacts that cannot but cue the automatic application of our intentional and causal cognitive systems in incompatible ways.

The question of assimilating AI to human moral cognition is misplaced. We want to think the development of artificial intelligence raises machines to the penultimate (and perennially controversial) level of the human, when it could just as easily lower humans to the ubiquitous (and factual) level of machines. We want to think that we’re ‘promoting’ them as opposed to ‘demoting’ ourselves. But the fact is—and it is a fact—we have never been able to make second-order moral sense of ourselves, so why should we think that yet more perpetually underdetermined theorizations of intentionality will allow us to solve the conundrums generated by AI? Our mechanical nature, on the other hand, remains the one thing we incontrovertibly share with AI, the rough and common ground. We, like our machines, are deep information environments.

And this is to suggest that philosophy, far from settling the matter of AI, could find itself settled. It is likely that the ‘uncanniness’ of AIs will be much discussed, and the ‘bistable’ nature of our intuitions regarding them explained. The heuristic nature of intentional cognition could very well become common knowledge. If so, a great many could begin asking why we ever thought, from Plato onward, that we could solve the nature of intentional cognition via the application of intentional cognition, why the tools we use to solve ourselves and others in practical contexts are also the tools we need to solve ourselves and others theoretically. We might finally realize that the nature of intentional cognition simply does not belong to the problem ecology of intentional cognition, that we should only expect to be duped and confounded by the apparent intentional deliverances of ‘philosophical reflection.’

Some pollutants pass through existing ecosystems. Some kill. AI could prove to be more than philosophically indigestible. It could be the poison pill.


Call to the Edge

by rsbakker

Thomas Metzinger recently emailed asking me to flag these cognitive science/philosophy of mind goodies–dividends of his OPENmind initiative–and to spread the word regarding his MIND Group. As he writes on the website:

“The MIND Group sees itself as part of a larger process of exploring and developing new formats for promoting junior researchers in philosophy of mind and cognitive science. One of the basic ideas behind the formation of the group was to create a platform for people with one systematic focus in philosophy (typically analytic philosophy of mind or ethics) and another in empirical research (typically cognitive science or neuroscience). One of our aims has been to build an evolving network of researchers. By incorporating most recent empirical findings as well as sophisticated conceptual work, we seek to integrate these different approaches in order to foster the development of more advanced theories of the mind. One major purpose of the group is to help bridge the gap between the sciences and the humanities. This not only includes going beyond old-school analytic philosophy or pure armchair phenomenology by cultivating a new type of interdisciplinarity, which is “dyed-in-the-wool” in a positive sense. It also involves experimenting with new formats for doing research, for example, by participating in silent meditation retreats and trying to combine a systematic, formal practice of investigating the structure of our own minds from the first-person perspective with proper scientific meetings, during which we discuss third-person criteria for ascribing mental states to a given type of system.”

The papers being offered look severely cool. As you all know, I think it’s pretty much a no-brainer that these are the issues of our day. Even if you hate the stuff and think my worst-case scenario is flat-out preposterous, these remain the issues of our day. Everywhere traditional philosophy turns it will be asked why its endless controversies enjoy any immunity from the mountains of data coming out of cognitive science. Billions are being spent on uncovering the facts of our nature, and the degree to which those facts are scientific is the degree to which we ourselves have become technology, something that can be manipulated in breathtaking ways. And what does the tradition provide then? Simple momentum? A garrotte? A messiah?

Interminable Intentionalism: Edward Feser and the Defence of Dead Ends

by rsbakker

For some damn reason, a great dichotomy haunts our thought.

One of the guys in my weekly PS3 NHL hockey piss-up is a philosophy professor, and last night we pretty much relived the debate we’ve been having here in terms of the famous fact/value distinction. One cannot, as the famous paraphrase of Hume goes, derive ‘ought’ from ‘is.’ So, to advert to the most glaring example, no matter how much science tells us about reproduction—what it is—it cannot tell us whether abortion is right or wrong—what we ought to do with reproduction. As the example makes clear, the fact/value distinction is far from an esoteric philosophical problem (though the vast literature on the topic waxes very esoteric indeed). You could claim that it is definitive of modernity, given the way it feeds into so many different debates. With science, we find ourselves dwelling in a vast, cognitive treasury of ‘is-claims,’ while at the same time bereft of any decisive way to arbitrate between ‘ought-claims.’ We know what the world is better than at any other time in human history, and yet we find ourselves more, not less, ignorant of how we should live our lives. Science gives us the facts. What to do with them is anybody’s guess.

When I mentioned my ongoing debate with Edward Feser, my buddy immediately adverted to the distinction, citing it as ‘compelling evidence’ of the ‘irreducibility’ of normative cognition.

But is it? Needless to say, there’s nothing approaching consensus on this matter.

But there are some pretty safe bets we can make regarding the distinction, given what we’re learning about ourselves via the cognitive sciences. One is that the fact/value distinction engages two distinct cognitive systems. Another is that these systems possess two very different heuristic regimes—that is, they neglect different kinds of information. I’m not aware of any theorist who denies these observations.

So Feser has written a follow-up to his initial critique of “Back to Square One,” entitled “Feynman’s Painter and Eliminative Materialism,” that I find every bit as curious as his previous post. In this post he takes aim at my claim that his original critique simply begs the question against the Eliminativist. Since the nature of intentional idioms is the issue to be resolved, any argument that resolves the issue by presuming the issue is already resolved is plainly begging the question. Thus, Feser’s insistence that any use of intentional idioms presupposes some prior commitment to intrinsic intentionality is pretty clearly begging the question.

So, for instance, I could simply reverse Feser’s strategy, insist that his every attempt to warrant intrinsic intentionality presupposes my position insofar as he employs intentional idioms. I could just as easily insist that he must somehow explain intentional idioms without using those idioms. Why? Because the use of intentional idioms presupposes a heuristics and neglect account of their nature.

But of course, Feser would cry foul—and rightly so.

Pretty obvious, right? Apparently not. For some reason he thinks the tactic is entirely legitimate when the shoe is on the intentionalist’s foot.

In “Feynman’s Painter and Eliminative Materialism,” he relates the Feynman anecdote of the painter who insists he can get yellow paint from white and red paint. When he inevitably fails he claims that he need only ‘sharpen it up a bit’ to make it yellow. Feser wants to claim that this situation is analogous to the debate between him (the brilliant Feynman) and me (the retarded painter). I have to admit, I have no idea how this analogy is supposed to work. The outcome in Feynman’s case is a foregone conclusion. Intentionality, on the other hand, is one of the great mysteries of our age. Feynman knows what he knows about yellow on empirical grounds; Feser, however, believes what he believes on occult grounds—‘a priori,’ I’m guessing he would call them. It would be absurd for the painter to accuse Feynman of begging the question because, well, Feynman doesn’t beg the question. Moreover, one might ask why Feser gets to be Feynman. After all, I’m the one making the empirical argument, the one insisting that science will inevitably revolutionize the prescientific domain of the human the way it has revolutionized all other prescientific domains. I’m the one saying the science suggests white and red give us pink. He’s the one caught in the ancient intentional mire, committed to theories that make no testable predictions and possess no clear criteria of falsification…

This is the fact the intentionalist always wants you to overlook. For thousands of years, now, intentionalists have been trying to make their theories stick—millennia! For thousands of years the claim has been that we need only get our concepts right, ‘sharpen things up a bit,’ and we will be able to get things right.

To me, it seems pretty obvious that something has gone wrong. Intentionalists are welcome to keep trying to sharpen things up, using whatever it is they use to make their claims (they can’t agree on that, either). Since I think chronic theoretical underdetermination of the kind characterizing intentionalist theories of meaning is an obvious sign of information scarcity and/or cognitive incapacity, I have my money on the science—where the information is. Ask yourself: If the interpretative mire of intentionalism isn’t a shining example of information scarcity and/or cognitive incapacity then what is?

So Feser’s Feynman analogy is problematic to say the least. Nevertheless, he forges ahead, writing,

“In stating his position, the eliminativist makes use of notions like “truth,” “falsehood,” “illusion,” “theory,” “evidence,” “observation,” “entailment,” etc. Everyone, including the eliminativist, agrees that at least as usually understood, these terms entail the existence of intentionality. But of course, the eliminativist denies the existence of intentionality. He claims that in using notions like the ones referred to, he is just speaking loosely and could say what he wants to say in a different, non-intentional way if he needs to. So, he owes us an account of exactly how he can do this—how he can provide an alternative way of describing his position without saying anything that entails the existence of intentionality.”

Once again, I feel like I must be missing something. Sure, I use intentional idioms all the time, and each time I use them, I either evidence my heuristics and neglect approach, or one of the thousands of different intentionalist approaches. Sure, I agree that the tradition is dominated by intentionalist accounts, that for thousands of years we’ve been spinning our collective wheels in the mire of intrinsic intentionality. Sure, I think science will eventually give us a more complete understanding of our intentional idioms the way it is presently revolutionizing our understanding of things like consciousness and language, for instance. And sure, I think my account will be more convincing to the degree to which it explains what these future accounts might look like without saying anything that entails the existence of intentionality–thus the parade of pieces I’ve pitched here on Three Pound Brain.

So?

But Feser, of course, thinks my use of intentional idioms commits me to some ancient or new or indeterminate theoretically underdetermined account of intrinsic intentionality (apparently not realizing that his use of intentional idioms actually commits him to my new empirically responsible heuristics and neglect account!). He begs the question.

Through all the ruckus my Scientia Salon piece has kicked up over the past few months, it hasn’t escaped my attention how not a single intentionalist—that I can recall at least—has actually replied to the penultimate question posed by the article: “Is there anything else we can turn to, any feature of traditional theoretical knowledge of the human that doesn’t simply rub our noses in Square One?”

The thesis of “Back to Square One,” remember, is that we really don’t have any reason to trust our armchair intuitions regarding our intentional nature. Insofar as intentionalists all disagree with one another, they have to agree that everybody but them should doubt those intuitions. The eliminativist simply wants to know when enough is enough. Do we give up in another hundred years? Another thousand? Or do we finally admit that something hinky is going on whenever we begin theorizing ourselves in intentional terms? In this case the incapacity has been institutionalized, turned into a sport in some respects, but it remains an incapacity all the same. What does it take for intentionalists to acknowledge that they have a bona fide credibility crisis on their hands, one that is simply going to deepen as cognitive science continues to produce more and more discoveries?

This is what I would like to ask Edward directly: What evidences intentionalism? And if that evidence is so compelling then why can’t any of you agree? Is it really simply a matter of ‘sharpening things up’? At what point would you concede that intentionalism has a big problem?

The fact is—and it is a fact—you don’t know what truth is. All you have are guesses, just like me. So how could you claim to know, apodictically, apparently, what truth isn’t? How are you not using an obvious, a priori dead end (over two thousand years of futility, remember) to claim that a relatively unexplored empirical avenue has to be a dead end?

Shouldn’t people be falling all over alternatives at this point?

These are difficult questions for intentionalists to answer, which is why they don’t like answering them. They would much rather spend their time attacking rather than defending. And without a doubt, the incoherence charge that Feser levels is their weapon of choice. Even if you still think the intentionalist is onto something, at the very least I hope you can see why this charge only leaves the eliminativist scratching their head.

For eliminativists, the real question is why intentionalists find this strategy even remotely compelling. Why do they think it simply cannot be the case that their use of intentional terms commits them to a heuristics and neglect account of intentionality? Why, despite two thousand years of evidence to the contrary, are they so convinced they have their fingers on the pulse of the true truth?

This is where my drunken debate with my philosophy professor friend comes in. The two safe things we can say about the nature of the fact/value distinction, remember, are that two distinct cognitive systems are involved, and that these systems are sensitive-to/neglect different kinds of information. Whatever’s going on when humans shift from solving fact problems to solving value problems, it involves shifting between (at least) two different systems using different information to solve different kinds of problems. Different capacities possessing different access.

To this we can add the obvious and often overlooked fact that we have no means of directly intuiting this distinction in capacity and access. The fact/value distinction, in other words, is something we had to discover. We learn about it in school precisely because we lack any native metacognitive awareness of the distinction. We neglect it otherwise, and indeed, this leads to the kinds of problems that Hume famously complains of in his Treatise.

In other words, not only do the systems themselves neglect different kinds of information, metacognition neglects the fact that we have these disparate systems at all.

So my drunken professor friend, perhaps irked by his incompetence playing hockey (he often is), first claimed that the fact/value distinction raises a barrier between is-claims and ought-claims. To which I shrugged my shoulders and said, ‘Of course.’ We’re talking two different systems using two different kinds of information. Normative cognition, specifically, solves problems regarding behaviour absent any real causal information. So?

He replied that this must mean that values, oughts, commitments, truths, goods, and so on lie beyond the pale of scientific cognition, which consists of factual claims.

But why should this be? I asked. We evolved these two basic capacities to solve two basic kinds of problems, is-problems and ought-problems. So it’s understandable that our fact systems cannot reliably solve ought-problems, and that our ought systems cannot reliably solve is-problems. What does this have to do with the problem of solving the ought system itself?

Quizzical look.

So I continued: Isn’t the question of what the ought system is itself an is-problem? Surely the question of what values are is different from the question of what we should value. And surely science has proven itself to be the most powerful arbiter of what is that the human race has ever known. So surely the question of what values are is a question we should commend to science.

He was stumped. So he repeated his claim that values, oughts, commitments, truths, goods, and so on lie beyond the pale of scientific cognition, which consists of factual claims.

And I repeated my response. And he was stumped again.

But why should he be stumped? If we have these two systems, one adapted to solving is-problems, the other adapted to solving ought-problems, then surely the question of what oughts are falls within the bailiwick of the former. It’s a scientific question.

If there’s a reason I’ve persisted working through Blind Brain Theory all these years, it lies in the stark clarity of little arguments like this, and the kind of explanatory power they provide. The reason intentionalists always find themselves stranded with their ancient controversies, unable to move, yet utterly convinced they’re the only game in town, has to do with metacognitive neglect. If one has an explicit grasp of the fact/value distinction alone, and no grasp of the cognitive machinery responsible, then the possibility that we need to match problems to systems simply does not come up. The question, rather, becomes one of matching problems to some hazy sense of ‘conceptual register.’ Since is-cognition cannot solve normative problems, we assume that it cannot solve the problem of normativity. So we become convinced, the way all normativists are convinced, that only normative cognition can tell us what normativity is—that sharpening thoughts in our armchairs is the only way to proceed. We convince ourselves that philosophical reflection (the thing we happily happen to be experts in) is the only road, if not the royal road, to second order knowledge of normativity, or intentionality more generally. We become convinced that people like me, eliminativists, are thrashing about in the muck of some kind of ‘category mistake.’

As any researcher who deals with it will tell you, neglect can convince humans of pretty much any absurdity. Two thousand years getting nowhere providing intentional explanations of intentional idioms, as outrageous as it is, means nothing when it seems so painfully obvious that intentional idioms can only be explained in intentional, and not natural, terms. But switch to the systems view, and suddenly it becomes obvious that the question of what intentional idioms are is not a question we should expect intentional cognition to have any success solving. Add metacognitive neglect to the picture and suddenly it becomes clear why we’ve been banging our head against this wall for all these millennia. Human beings have been in the grip of a kind of ‘theoretical anosognosia,’ a cognitive version of Anton’s Syndrome. Blind to our metacognitive blindness, we assume that we intuit all we need to intuit when it comes to things like the fact/value distinction. So we compulsively repeat the same mistake over and over again, perpetually baffled by our inability to make any decisive discoveries.

I understand why those invested in the tradition find my view so offensive. As a product and lover of that tradition, I find myself alienated by my position! I’m saying that traditional philosophy is likely largely an artifact of the systematic misapplication of intentional cognition to the problem of intentionality. I’m saying that the thousands of years of near total futility is itself an important data point, evidence of theoretical anosognosia. I’m relegating a great number of PhDs to the historical rubbish heap.

But then this is implicit in the work of any philosopher who (inevitably) thinks everyone else is wrong, isn’t it? So if you’re going to think most everyone is wrong anyway, why bother thinking they’re wrong in the old way, the way possessing the preposterously long track record of theoretical failure? This is the promise of the kind of critical eliminativism that falls out of Blind Brain Theory: it offers the possibility, at least, of leaving the ancient occultisms behind, of developing a scientifically responsible means of theorizing the human, a genuinely post-intentional philosophy.

After all, what is the promise of intentionalism? Another thousand years of controversy? If so, why not simply become a mysterian? Why not admit that you cleave to these guesses, and have no way of settling the issue otherwise? One can hope things will sharpen… at some point, maybe.

The Meaning Wars

by rsbakker

Meaning

Apologies all for my scarcity of late. Between battling snow and Sranc, I’ve scarce had a moment to sit at this computer. Edward Feser has posted “Post-intentional Depression,” a thorough rebuttal to my Scientia Salon piece, “Back to Square One: Toward a Post-Intentional Future,” which Peter Hankins at Conscious Entities has also responded to with “Intellectual Catastrophe.” I’m interested in criticisms and observations of all stripes, of course, but since Massimo has asked me for a follow-up piece, I’m especially interested in the kinds of tactics/analogies I could use to forestall the typical tu quoque reactions eliminativism provokes.

The Knife of Many Hands

by rsbakker

Grimdark Magazine, Issue 2 cover

Grimdark Magazine has just published the first installment of “The Knife of Many Hands,” a Conan homage set in Carythusal on the eve of the Scholastic Wars. I stuffed Robert Howard’s pulp into the crack-bowl of my brain as a youth – and I hope it shows! I had fun-fun-fun beating new tricks out of this old and fascinating bear… Enjoy!

The Cudgel Argument

by rsbakker

Let’s get Real.

We’re not a ghostly repository of combinatorial contents…

Or freedom leaping ab initio out of ontological contradiction…

Or a totality of originary and everyday horizons of meaning…

Or a normative function of converging attitudes.

We are not something extra or above or intrinsic. We can be cut. Bruised. Explained. Dominated.

Reality is its own argument to the cudgel. It refutes, not by being kicked, but by kicking. It prevails by killing.

Who cares what the Real is so long as it is Real? It’s the monstrous ‘is-what-it-is’ that will strike you dead. It’s the razor’s line, the shockwave of a bullet, the viral code hacking you from inside your inside. It’s what the sciences mine for more and more godlike power. It’s out there, and it’s in here, and it doesn’t give a flying fuck what you or anyone else ‘thinks.’

Ideas never killed anyone; only Idealists, and only because they were fucking Real.

Realism is a commitment to the realness of the Real. Of course, this is where everything goes diabetic, but only because so many think the realness of the Real requires some kind of Artificial Additive. Just as Jesus is the sole path to Heaven, Ideas are the sole path to the Real, so we are told. Since we already find ourselves in the Real, we must therefore have a great multitude of Ideas. As to their nature, the only consensus is that they are invisible, Pre-Real things that somehow bring about the realness of the Real. This consensus has no ‘evidence’ per se, but it really feels that way when certain trained professionals think about it.

Really, it does.

Luckily, Realism entertains no commitment to the realness of not Real things, be they post, pre, or concurrent.

But Ideas have to be Real, don’t they? What is this very diatribe, if not an argument for yet one more Idea of the Real?

The realness of the Real does not require that we think there must be more to the Real, some yet-to-be-discovered appendage or autonomous force. We need only remember that what cognizes the Real is nothing other than the Real. We must understand that we too are Real—that the dimensionality that kills is also the dimensionality of Life. And we must understand that the dimensionality of Life far and away outruns the capacity of Life to solve. We must understand, in other words, that our Reality obscures the realness of the Real. Life is Reality pitched into the thresher of Reality. When Reality murders us, it murders an incredibly unlikely fragment of Itself.

We are Real. But we are Real in such a way that Reality eludes us—both the Reality that we are and the Reality that we are not. And this, of course, is just to say that we are stupid. We’re stupid generally, but we are out and out retarded when it comes to ourselves. But it belongs to our stupidity to think ourselves ingenious, fucking brilliant. We glimpse angles, wisps, and see things incompatible with the Real. We think uttering pronouncements in the Void sheds rational light. We stare at brick walls and limn transcendent necessities. What seems to so obviously evidence the Ideal is nothing other than the insensitivity of the Real to the Real, the fact that its fragments can only be tuned to other fragments, and to its (fragmentary) tuning not at all.

The Idea is the thinnest skin, Life neglecting Life, and duly confounded.

We have always been obdurate unto ourselves, a brick wall splashed with colour, checkered with different textures of brick, but a brick wall all the same. Everything from Husserl to Plato to the Egyptian Book of the Dead is nothing more than incantatory graffiti. All of them chase those terms we use as simpletons, those terms that make complete sense until someone asks us to explain, and we are stumped, rendered morons—until, that is, inspiration renders us more idiotic still. They forget that Language is also Real, that it functions, not by vanishing, but being what it is. As Real, Language must contend—as all Real things must contend—with Reality, as a system that locks into various systems in various ways—as something effective. Some particles of language lock into environmental particles; some terms can be sticky-noted to particular covariants. Some particles of language, however, lock into environmental systems. Since the Reality of cognition is occluded in the cognition of Reality, these systems escape immediate cognition, leaving only the intuition of impossible–because not quite Real–particles.

Such as Ideas.