Three Pound Brain

No bells, just whistling in the dark…

Three Roses, Bk. 1: Chapter Two

by reichorn

Hey all!  Roger here.

I’ve posted the second chapter of the new draft of Three Roses, Book 1: The Anarchy.  It’s first-draft stuff, but still I’m pretty happy with it.  So I figure what the hell, I’ll post it here.

As always, any comments or questions are welcomed and appreciated.

Introspection Explained

by rsbakker

[Image: Las Meninas]

So I couldn’t get past the first paper in Thomas Metzinger’s excellent Open MIND offering without having to work up a long-winded blog post! Tim Bayne’s “Introspective Insecurity” offers a critique of Eric Schwitzgebel’s Perplexities of Consciousness, which is my runaway favourite book on introspection (and consciousness, for that matter). This alone might have sparked me to write a rebuttal, but what I find most extraordinary about the case Bayne lays out against introspective skepticism is the way it directly implicates Blind Brain Theory. His defence of introspective optimism, I want to show, actually vindicates an even more radical form of pessimism than the one he hopes to domesticate.

In the article, Bayne divides the philosophical field into two general camps, the introspective optimists, who think introspection provides reliable access to conscious experience, and introspective pessimists, who do not. Recent years have witnessed a sea change in philosophy of mind circles (one due in no small part to Schwitzgebel’s amiable assassination of assumptions). The case against introspective reliability has grown so prodigious that what Bayne now terms ‘optimism’–introspection as a possible source of metaphysically reliable information regarding the mental/phenomenal–would have been considered rank introspective pessimism not so long ago. The Cartesian presumption of ‘self-transparency’ (as Carruthers calls it in his excellent The Opacity of Mind) has died a sudden death at the hands of cognitive science.

Bayne identifies himself as one of these new optimists. What introspection needs, he claims, is a balanced account, one sensitive to the vulnerabilities of both positions. Where proponents of optimism have difficulty accounting for introspective error, proponents of pessimism have difficulty accounting for introspective success. Whatever it amounts to, introspection is characterized by perplexing failures and thoughtless successes. As he writes in his response piece, “The epistemology of introspection is that it is not flat but contains peaks of epistemic security alongside troughs of epistemic insecurity” (“Introspection and Intuition,” 1). Since any final theory of introspection will have to account for this mixed ‘epistemic profile,’ Bayne suggests that it provides a useful speculative constraint, a way to sort the metacognitive wheat from the chaff.

According to Bayne, introspective optimists motivate their faith in the deliverances of introspection on the basis of two different arguments: the Phenomenological Argument and the Conceptual Argument. He restricts his presentation of the phenomenological argument to a single quote from Brie Gertler’s “Renewed Acquaintance,” which he takes as representative of his own introspective sympathies. As Gertler writes of the experience of pinching oneself:

When I try this, I find it nearly impossible to doubt that my experience has a certain phenomenal quality—the phenomenal quality it epistemically seems to me to have, when I focus my attention on the experience. Since this is so difficult to doubt, my grasp of the phenomenal property seems not to derive from background assumptions that I could suspend: e.g., that the experience is caused by an act of pinching. It seems to derive entirely from the experience itself. If that is correct, my judgment registering the relevant aspect of how things epistemically seem to me (this phenomenal property is instantiated) is directly tied to the phenomenal reality that is its truthmaker. “Renewed Acquaintance,” Introspection and Consciousness, 111.

When attending to a given experience, it seems indubitable that the experience itself has distinctive qualities that allow us to categorize it in ways unique to first-person introspective, as opposed to third-person sensory, access. But if we agree that the phenomenal experience—as opposed to the object of experience—drives our understanding of that experience, then we agree that the phenomenal experience is what makes our introspective understanding true. “Introspection,” Bayne writes, “seems not merely to provide one with information about one’s experiences, it seems also to ‘say’ something about the quality of that information” (4). Introspection doesn’t just deliver information, it somehow represents these deliverances as true.

Of course, this doesn’t make them true: we need to trust introspection before we can trust our (introspective) feeling of introspective truth. Or do we? Bayne replies:

it seems to me not implausible to suppose that introspection could bear witness to its own epistemic credentials. After all, perceptual experience often contains clues about its epistemic status. Vision doesn’t just provide information about the objects and properties present in our immediate environment, it also contains information about the robustness of that information. Sometimes vision presents its take on the world as having only low-grade quality, as when objects are seen as blurry and indistinct or as surrounded by haze and fog. At other times visual experience represents itself as a highly trustworthy source of information about the world, such as when one takes oneself to have a clear and unobstructed view of the objects before one. In short, it seems not implausible to suppose that vision—and perceptual experience more generally—often contains clues about its own evidential value. As far as I can see there is no reason to dismiss the possibility that what holds of visual experience might also hold true of introspection: acts of introspection might contain within themselves information about the degree to which their content ought to be trusted. 5

Vision is replete with what might be called ‘information information,’ features that indicate the reliability of the information available. Darkness is a great example, insofar as it provides visual information to the effect that visual information is missing. Our every glance is marbled with what might be called ‘more than meets the eye’ indicators. As we shall see, this analogy to vision will come back to haunt Bayne’s thesis. The thing to keep in mind is that the cognition of missing information requires more information. For the nonce, however, Bayne’s claim is modest enough to grant: as it stands, we cannot rule out the possibility that introspection, like exospection, reliably indicates its own reliability. As such, the door to introspective optimism remains open.

Here we see the ‘foot-in-the-door strategy’ that Bayne adopts throughout the article, where his intent isn’t so much to decisively warrant introspective optimism as it is to point out and elucidate the ways that introspective pessimism cannot decisively close the door on introspection.

The conceptual motivation for introspective optimism turns on the necessity of epistemic access implied in the very concept of ‘what is it likeness.’ The only way for something to be ‘like something’ is for it to be like something for somebody. “[I]f a phenomenal state is a state that there is something it is like to be in,” Bayne writes, “then the subject of that state must have epistemic access to its phenomenal character” (5). Introspection has to be doing some kind of cognitive work, otherwise “[a] state to which the subject had no epistemic access could not make a constitutive contribution to what it was like for that subject to be the subject that it was, and thus it could not qualify as a phenomenal state” (5-6).

The problem with this argument, of course, is that it says little about the epistemic access involved. Apart from some unspecified ability to access information, it really implies very little. Bayne convincingly argues that the capacity to cognize differences, make discriminations, follows from introspective access, even if the capacity to correctly categorize those discriminations does not. And in this respect, it places another foot in the introspective door.

Bayne then moves on to the case motivating pessimism, particularly as Eric presents it in his Perplexities of Consciousness. He mentions the privacy problems that plague scientific attempts to utilize introspective information (Irvine provides a thorough treatment of this in her Consciousness as a Scientific Concept), but since his goal is to secure introspective reliability for philosophical purposes, he bypasses these to consider three kinds of challenges posed by Schwitzgebel in Perplexities: the Dumbfounding, Dissociation, and Introspective Variation Arguments. Once again, he’s careful to state the balanced nature of his aim, the obvious fact that

any comprehensive account of the epistemic landscape of introspection must take both the hard and easy cases into consideration. Arguably, generalizing beyond the obviously easy and hard cases requires an account of what makes the hard cases hard and the easy cases easy. Only once we’ve made some progress with that question will we be in a position to make warranted claims about introspective access to consciousness in general. 8

His charge against Schwitzgebel, then, is that even conceding his examples of local introspective unreliability, we have no reason to generalize from these to the global unreliability of introspection as a philosophical tool. Since this inference from local unreliability to global unreliability is his primary discursive target, Bayne doesn’t so much need to problematize Schwitzgebel’s challenges as to reinterpret—‘quarantine’—their implications.

So in the case of ‘dumbfounding’ (or ‘uncertainty’) arguments, Schwitzgebel reveals the epistemic limitations of introspection via a barrage of what seem to be innocuous questions. Our apparent inability to answer these questions leaves us ‘dumbfounded,’ stranded on a cognitive limit we never knew existed. Bayne’s strategy, accordingly, is to blame the questions, to suggest that dumbfounding, rather than demonstrating any pervasive introspective unreliability, simply reveals that the questions being asked possess no determinate answers. He writes:

Without an account of why certain introspective questions leave us dumbfounded it is difficult to see why pessimism about a particular range of introspective questions should undermine the epistemic credentials of introspection more generally. So even if the threat posed by dumbfounding arguments were able to establish a form of local pessimism, that threat would appear to be easily quarantined. 11

Once again, local problems in introspection do not warrant global conclusions regarding introspective reliability.

Bayne takes a similar tack with Schwitzgebel’s dissociation arguments, examples where our naïve assumptions regarding introspective competence diverge from actual performance. He points out the ambiguity between the reliability of experience and the reliability of introspection: perhaps we’re accurately introspecting mistaken experiences. If there’s no way to distinguish between these, Bayne suggests, we’ve made room for introspective optimism. He writes: “If dissociations between a person’s introspective capacities and their first-order capacities can disconfirm their introspective judgments (as the dissociation argument assumes), then associations between a person’s introspective judgments and their first-order capacities ought to confirm them” (12). What makes Schwitzgebel’s examples so striking, he goes on to argue, is precisely the fact that introspective judgments are typically effective.

And when it comes to the introspective variation argument, the claim that the chronic underdetermination that characterizes introspective theoretical disputes attests to introspective incapacity, Bayne once again offers an epistemologically fractionate picture of introspection as a way of blocking any generalization from given instances of introspective failure. He thinks these examples of introspective incapacity can be explained away, “[b]ut even if the argument from variation succeeds in establishing a local form of pessimism, it seems to me there is little reason to think that this pessimism generalizes” (14).

Ultimately, the entirety of his case hangs on the epistemologically fractionate nature of introspection. It’s worth noting at this point that, from a cognitive scientific point of view, the fractionate nature of introspection is all but guaranteed. Just think of the mad difference between Plato’s simple aviary, the famous metaphor he offers for memory in the Theaetetus, and the imposing complexity of memory as we understand it today. I raise this ‘mad difference’ for two reasons. First, it implies that any scientific understanding of introspection is bound to radically complicate our present understanding. Second, and even more importantly, it evidences the degree to which introspection is blind, not only to the fractionate complexity of memory, but to its own fractionate complexity as well.

For Bayne to suggest that introspection is fractionate, in other words, is for him to claim that introspection is almost entirely blind to its own nature (much as it is to the nature of memory). To the extent that Bayne has to argue the fractionate nature of introspection, we can conclude that introspection is not only blind to its own fractionate nature, it is also blind to the fact of this blindness. It is in this sense that we can assert that introspection neglects its own fractionate nature. The blindness of introspection to introspection is the implication that hangs over his entire case.

In the meantime, having posed an epistemologically plural account of introspection, he’s now on the hook to explain the details. “Why,” he now asks, “might certain types of phenomenal states be elusive in a way that other types of phenomenal states are not?” (15). Bayne does not pretend to possess any definitive answers, but he does hazard one possible wrinkle in the otherwise featureless face of introspection, the 2010 distinction that he and Maja Spener made in “Introspective Humility” between ‘scaffolded’ and ‘freestanding’ introspective judgments. He notes that those introspective judgments that seem to be the most reliable are those that seem to be ‘scaffolded’ by first-order experiences. These include the most anodyne metacognitive statements we make, where we reference our experiences of things to perspectivally situate them in the world, as in, ‘I see a tree over there.’ Those introspective judgments that seem the least reliable, on the other hand, have no such first-order scaffolding. Rather than piggy-back on first-order perceptual judgments, ‘freestanding’ judgments (the kind philosophers are fond of making) reference our experience of experiencing, as in, ‘My experience has a certain phenomenal quality.’

As that last example (cribbed from the Gertler quote above) makes plain, there’s a sense in which this distinction doesn’t do the philosophical introspective optimist any favours. (Max Engel exploits this consequence to great effect in his Open MIND reply to Bayne’s article, using it to extend pessimism into the intuition debate). But Bayne demurs, admitting that he lacks any substantive account. As it stands, he need only make the case that introspection is fractionate to convincingly block the ‘globalization’ of Schwitzgebel’s pessimism. As he writes:

perhaps the central lesson of this paper is that the epistemic landscape of introspection is far from flat but contains peaks of security alongside troughs of insecurity. Rather than asking whether or not introspective access to the phenomenal character of consciousness is trustworthy, we should perhaps focus on the task of identifying how secure our introspective access to various kinds of phenomenal states is, and why our access to some kinds of phenomenal states appears to be more secure than our access to other kinds of phenomenal states. 16

The general question of whether introspective cognition of conscious experience is possible is premature, he argues, so long as we have no clear idea of where and why introspection works and does not work.

This is where I most agree with Bayne—and where I’m most puzzled. Many things puzzle me about the analytic philosophy of mind, but nothing quite so much as the disinclination to ask what seem to me to be relatively obvious empirical questions.

In nature, accuracy and reliability are expensive achievements, not gifts from above. Short of magic, metacognition requires physical access and physical capacity. (Those who believe introspection is magic—and many do—need only be named magicians). So when it comes to deliberative introspection, what kind of neurobiological access and capacity are we presuming? If everyone agrees that introspection, whatever it amounts to, requires the brain do honest-to-goodness work, then we can begin advancing a number of empirical theses regarding access and capacity, and how we might find these expressed in experience.

So given what we presently know, what kind of metacognitive access and capacity should we expect our brains to possess? Should we, for instance, expect it to rival the resolution and behavioural integration of our environmental capacities? Clearly not. For one, environmental cognition coevolved with behaviour and so has the far greater evolutionary pedigree—by hundreds of millions of years, in fact! As it turns out, reproductive success requires that organisms solve their surroundings, not themselves. So long as environmental challenges are overcome, they can take themselves for granted, neglect their own structure and dynamics. Metacognition, in other words, is an evolutionary luxury. There’s no way of saying how long homo sapiens has enjoyed the particular luxury of deliberative introspection (as an exaptation, the luxury of ‘philosophical reflection’ is no older than recorded history), but even if we grant our base capacity a million-year pedigree, we’re still talking about a very young, and very likely crude, system.

Another compelling reason to think metacognition cannot match the dimensionality of environmental cognition lies in the astronomical complexity of its target. As a matter of brute empirical fact, brains simply cannot track themselves in the high-dimensional way they track their environments. Thus, once again, ‘Dehaene’s Law,’ the way “[w]e constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (Consciousness and the Brain, 79). The vast resources society is presently expending to cognize the brain attest to the degree to which the brain exceeds its own capacity to cognize itself in high-dimensional terms. However the brain cognizes its own operations, then, it can only do so in a radically low-dimensional way. We should expect, in other words, our brains to be relatively insensitive to their own operation—to be blind to themselves.

A third empirical reason to assume that metacognition falls short of environmental dimensionality is found in the way it belongs to the very system it tracks, and so lacks the functional independence as well as the passive and active information-seeking opportunities belonging to environmental cognition. The analogy I always like to use here is that of a primatologist sewn into a sack with a troop of chimpanzees versus one tracking them discreetly in the field. Metacognition, unlike environmental cognition, is structurally bound to its targets. It cannot move toward some puzzling item—an apple, say—peer at it, smell it, touch it, turn it over, crack it open, taste it, scrutinize the components. As embedded, metacognition is restricted to fixed channels of information that it could not possibly identify or source. The brain, you could say, is simply too close to itself to cognize itself as it is.

Viewed empirically, then, we should expect metacognitive access and capacity to be more specialized, more adventitious, and less flexible than that of environmental cognition. Given the youth of the system, the complexity of its target, and the proximity of its target, we should expect human metacognition to consist of various kluges, crude heuristics that leverage specific information to solve some specific range of problems. As Gerd Gigerenzer and the Adaptive Behaviour and Cognition Research Group have established, simple heuristics are often far more effective than optimization methods at solving problems. “As the amount of data available to make predictions in an environment shrinks, the advantage of simple heuristics over complex algorithms grows” (Hertwig and Hoffrage, “The Research Agenda,” Simple Heuristics in a Social World, 23). With complicated problems yielding little data, adding parameters to a solution can compound the chances of making mistakes. Low dimensionality, in other words, need not be a bad thing, so long as the information consumed is information enabling the solution of some problem set. This is why evolution so regularly makes use of it.
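
To see why scarcity favours frugality, consider a toy simulation (my own illustration, not anything from the ABC Group’s studies): a flexible seven-parameter polynomial competes with the bluntest possible heuristic, ‘ignore the cue, predict the average,’ at predicting a noisy linear environment. With only a handful of observations the blunt rule typically wins, because the polynomial’s extra parameters soak up noise; with plenty of observations the flexible model wins handily.

```python
# Toy sketch: simple heuristic vs. complex algorithm as data shrinks.
# Purely illustrative; the model choices and numbers are invented for the demo.
import warnings
import numpy as np

warnings.simplefilter("ignore")   # polyfit warns about conditioning at tiny n
rng = np.random.default_rng(0)

def environment(x):
    """The actual structure of the problem ecology: a simple linear trend."""
    return 2.0 * x + 1.0

def trial(n_train, degree=6, noise=3.0, n_test=200):
    x_tr = rng.uniform(0, 10, n_train)              # scarce, noisy observations
    y_tr = environment(x_tr) + rng.normal(0, noise, n_train)
    x_te = rng.uniform(0, 10, n_test)
    y_te = environment(x_te)                        # what prediction answers to

    # 'Complex algorithm': a seven-parameter polynomial, fit by least squares.
    coeffs = np.polyfit(x_tr, y_tr, degree)
    mse_complex = np.mean((np.polyval(coeffs, x_te) - y_te) ** 2)

    # 'Simple heuristic': neglect the cue entirely, always predict the mean.
    mse_heuristic = np.mean((y_tr.mean() - y_te) ** 2)
    return mse_heuristic, mse_complex

for n in (8, 20, 200):
    h, c = np.mean([trial(n) for _ in range(300)], axis=0)
    print(f"n={n:3d}  heuristic MSE={h:12.1f}  polynomial MSE={c:12.1f}")
```

The moral is the one in the quote above: parameters have to be paid for with data, so where data is scarce, neglect is accuracy.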

Given this broad-stroke picture, human metacognition can be likened to a toolbox containing multiple, special-purpose tools, each possessing specific ‘problem-ecologies,’ narrow, but solvable domains that trigger their application frequently and decisively enough to have once assured the tool’s generational selection. The problem with heuristics, of course, lies in the narrowness of their respective domains. If we grant the brain any flexibility in the application of its metacognitive tools, then heuristic misapplication is a standing possibility. If we deny the brain any decisive capacity to cognize these misapplications outside their consequences (if the brain suffers ‘tool agnosia’), then we can assume these misapplications will be indistinguishable from successful applications short of those consequences.
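
Schematically (with invented names, and no pretence of modelling real neurobiology), the toolbox picture looks something like the dispatcher below: each heuristic is valid only within its problem-ecology, the triggering is cue-driven, and nothing downstream flags a misapplication. An out-of-ecology answer comes back just as confidently as an in-ecology one.

```python
# Schematic sketch of an 'adaptive toolbox' suffering tool agnosia.
# All names are invented for illustration.

def recognition_heuristic(options, recognized):
    """Pick the recognized option. Valid ecology: recognition correlates
    with the criterion (e.g. bigger cities are more often heard of)."""
    hits = [o for o in options if o in recognized]
    return hits[0] if len(hits) == 1 else options[0]

def imitation_heuristic(options, peer_choices):
    """Copy the majority. Valid ecology: peers face the same problem."""
    return max(options, key=peer_choices.count)

TOOLBOX = {
    "which_is_bigger": recognition_heuristic,
    "what_to_choose": imitation_heuristic,
}

def solve(problem_kind, options, context):
    # No 'tool for vetting tools': the system cannot ask whether the
    # triggered heuristic actually matches the problem, only whether a
    # cue fires. Unrecognized problems silently fall back on a default.
    tool = TOOLBOX.get(problem_kind, recognition_heuristic)
    return tool(options, context)   # no confidence score, no error signal

# In-ecology application: recognition tracks city size tolerably well.
print(solve("which_is_bigger", ["Dortmund", "Heidelberg"], ["Dortmund"]))
# Out-of-ecology misapplication: same tool, same confident output,
# and nothing anywhere marks the answer as a misfire.
print(solve("which_is_safer", ["option_a", "option_b"], ["option_b"]))
```

The point of the sketch is the absence: there is no line that could announce ‘wrong tool,’ and adding one would itself require information the system does not possess.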

In other words, this picture of human metacognition (which is entirely consistent with contemporary research) provides an elegant (if sobering) recapitulation and explanation of what Bayne calls the ‘epistemic landscape of introspection.’ Metacognition is fractionate because of the heuristic specialization required to decant behaviourally relevant information from the brain. The ‘peaks of security’ correspond to the application of metacognitive heuristics to matching problem-ecologies, while the ‘troughs of insecurity’ correspond to the application of metacognitive heuristics to problem-ecologies they could never hope to solve.

Since those matching problem-ecologies are practical (as we might expect, given the cultural basis of regimented theoretical thinking), it makes sense that practical introspection is quite effective, whereas theoretical introspection, which attempts to intuit the general nature of experience, is anything but. The reason the latter strikes us as so convincing—to the point of seeming impossible to doubt, no less—is simply that doubt is expensive: there’s no reason to presume we should happily discover the required error-signalling machinery awaiting any exaptation of our deliberative introspective capacity, let alone one so unsuccessful as philosophy. As I mentioned above, the experience of epistemic insufficiency always requires more information. Sufficiency is the default simply because the system has no way of anticipating novel applications, no decisive way of suddenly flagging information that was entirely sufficient for ancestral problem-ecologies and so required no flagging.

Remember how Bayne offered what I termed ‘information information’ provided by vision as a possible analogue of introspection? Visual experience cues us to the unreliability or absence of information in a number of ways, such as darkness, blurring, faintness, and so on. Why shouldn’t we presume that deliberative introspection likewise flags what can and cannot be trusted? Because deliberative introspection exapts information sufficient for one kind of practical problem-solving (Did I leave my keys in the car? Am I being obnoxious? Did I read the test instructions carefully enough?) for the solution of utterly unprecedented ontological problems. Why should repurposing introspective deliverances in this way renovate the thoughtless assumption of ‘default sufficiency’ belonging to their original purposes?
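
The asymmetry is easy to caricature in code (a minimal sketch of my own, not a model of anything): give the visual channel a second, reliability-reporting field and give the introspective channel none, so that sufficiency is simply what silence looks like.

```python
# Minimal sketch of 'information information': vision carries a reliability
# cue alongside its content; introspection carries content alone, so its
# deliverances default to sufficiency. Invented names, purely illustrative.
from dataclasses import dataclass

@dataclass
class VisualReport:
    content: str
    clarity: float    # 0.0 = darkness/fog .. 1.0 = clear, unobstructed view

    def trustworthy(self) -> bool:
        # Vision flags its own degraded deliverances.
        return self.clarity > 0.5

@dataclass
class IntrospectiveReport:
    content: str
    # No second field: nothing marks missing dimensions as missing, so
    # every report presents itself as sufficient by default.

glimpse = VisualReport("something moving in the fog", clarity=0.2)
print(glimpse.trustworthy())     # False: vision says 'don't trust me'

hunch = IntrospectiveReport("my experience has a phenomenal quality")
print(hunch.content)             # no reliability cue to consult at all
```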

This is the sense in which Blind Brain Theory, in the course of explaining the epistemic profile of introspection, also explodes Bayne’s case for introspective optimism. By tying the contemplative question of deliberative introspection to the empirical question of the brain’s metacognitive access and capacity, BBT makes plain the exorbitant biological cost of the optimistic case. Exhaustive, reliable intuition of anything involves a long evolutionary history, tractable targets, and flexible information access—that is, all the things that deliberative introspection does not possess.

Does this mean that deliberative introspection is a lost cause, something possessing no theoretical utility whatsoever? Not necessarily. Accidents happen. There’s always a chance that some instance of introspective deliberation could prove valuable in some way. But we should expect such solutions to be both adventitious and local, something that stubbornly resists systematic incorporation into any more global understanding.

But there’s another way, I think, in which deliberative introspection can play a genuine role in theoretical cognition—a way that involves looking at Schwitzgebel’s skeptical project as a constructive, rather than critical, theoretical exercise.

To show what I mean, it’s worth recapitulating one of the quotes Bayne selects from Perplexities of Consciousness for sustained attention:

How much of the scene are you able vividly to visualize at once? Can you keep the image of your chimney vividly in mind at the same time you vividly imagine (or “image”) your front door? Or does the image of your chimney fade as your attention shifts to the door? If there is a focal part of your image, how much detail does it have? How stable is it? Suppose that you are not able to image the entire front of your house with equal clarity at once, does your image gradually fade away towards the periphery, or does it do so abruptly? Is there any imagery at all outside the immediate region of focus? If the image fades gradually away toward the periphery, does one lose colours before shapes? Do the peripheral elements of the image have color at all before you think to assign color to them? Do any parts of the image? If some parts of the image have indeterminate colour before a colour is assigned, how is that indeterminacy experienced—as grey?—or is it not experienced at all? If images fade from the centre and it is not a matter of the color fading, what exactly are the half-faded images like? Perplexities, 36

Questions in general are powerful insofar as they allow us to cognize the yet-to-be-cognized. The slogan feels ancient to me now, but no less important: Questions are how we make ignorance visible, how we become conscious of cognitive incapacity. In effect, then, each and every question in this quote brings to light a specific inability to answer. Granting that this inability indicates a lack of information access, a want of metacognitive capacity, or both, we can presume these questions enumerate various cognitive dimensions missing from visual imagery. Each question functions as an interrogative ‘ping,’ you could say, showing us another direction that (for many people at least) introspective inquiry cannot go—another missing dimension.

So even though Bayne and Schwitzgebel draw negative conclusions from the ‘dumbfounding’ that generally accompanies these questions, each instance actually tells us something potentially important about the limits of our introspective capacities. If Schwitzgebel had been asking these questions of a painting—Las Meninas, say—then dumbfounding wouldn’t be a problem at all. The information available, given the cognitive capacity possessed, would make answering them relatively straightforward. But even though ‘visual imagery’ is apparently ‘visual’ in the same way a painting is, the selfsame questions stop us in our tracks. Each question, you could say, closes down a different ‘degree of cognitive freedom,’ reveals how few degrees of cognitive freedom human deliberative introspection possesses for the purposes of solving visual imagery. Not much at all, as it turns out.

Note this is precisely what we should expect on a ‘blind brain’ account. Once again, simply given the developmental and structural obstacles confronting metacognition, it almost certainly consists of an ‘adaptive toolbox’ (to use Gerd Gigerenzer’s phrase), a suite of heuristic devices adapted to solve a restricted set of problems given only low-dimensional information. The brain possesses a fixed set of metacognitive channels available for broadcast, but no real ‘channel channel,’ so that it systematically neglects metacognition’s own fractionate, heuristic structure.

And this clearly seems to be what Schwitzgebel’s interrogative barrage reveals: the low dimensionality of visual imagery (relative to vision), the specialized problem-solving nature of visual imagery, and our profound inability to simply intuit as much. For some mysterious reason we can ask visual questions that for some mysterious reason do not apply to visual imagery. The ability of language to retask cognitive resources for introspective purposes seems to catch the system as a whole by surprise, confronts us with what had been hitherto relegated to neglect. We find ourselves ‘dumbfounded.’

So long as we assume that cognition requires work, we must assume that metacognition trades in low-dimensional information to solve specific kinds of problems. To the degree that introspection counts as metacognition, we should expect it to trade in low-dimensional information geared to solve particular kinds of practical problems. We should also expect it to be blind to introspection, to possess neither the access nor the capacity required to intuit its own structure. Short of interrogative exercises such as Schwitzgebel’s, deliberative introspection has no inkling of how many degrees of cognitive freedom it possesses in any given context. We have to figure out, inferentially, what information is for what.

And this provides the basis for a provocative diagnosis of a good many debates in contemporary psychology and philosophy of mind. So for instance, a blind brain account implies that our relation to something like ‘qualia’ is almost certainly one possessing relatively few degrees of cognitive freedom—a simple heuristic. Deliberative introspection neglects this, and at the same time, via questioning, allows other cognitive capacities to consume the low-dimensional information available. ‘Dumbfounding’ often follows—what the ancient Greeks liked to call thaumazein. The practically minded, sniffing a practical dead end, turn away, but the philosopher famously persists, mulling the questions, becoming accustomed to them, chasing this or that inkling, borrowing many others, all of which, given the absence of any real information information, cannot but suffer from some kind of ‘only game in town effect’ upon reflection. The dumbfounding boundary is trammelled to the point of imperceptibility, and neglect is confused with degrees of cognitive freedom that simply do not exist. We assume that a quale is something like an apple—we confuse a low-dimensional cognitive relationship with a high-dimensional one. What is obviously specialized, low-dimensional information becomes, for a good number of philosophers at least, a special ‘immediately self-evident’ order of reality.

Is this Adamic story really that implausible? After all, something has to explain our perpetual inability to even formulate the problem of our nature, let alone solve it. Blind Brain Theory, I would argue, offers a parsimonious and comprehensive way to extricate ourselves from the traditional mire. Not only does it explain Bayne’s ‘epistemic profile of introspection,’ it explains why this profile took so long to uncover. By reinterpreting the significance of Schwitzgebel’s ‘dumbfounding’ methods, it raises the possibility of ‘Interrogative Introspection’ as a scientific tool. And lastly, it suggests the problems that neglect foists on introspection can be generalized, that much of our inability to cognize ourselves turns on the cognitive short cuts evolution had to use to assure we could cognize ourselves at all.

Artificial Intelligence as Socio-Cognitive Pollution

by rsbakker

[Image: Metropolis 1]

Eric Schwitzgebel, over at the always excellent Splintered Mind, has been debating the question of how robots—or AI’s more generally—can be squared with our moral sensibilities. In “Our Moral Duties to Artificial Intelligences” he poses a very simple and yet surprisingly difficult question: “Suppose that we someday create artificial beings similar to us in their conscious experience, in their intelligence, in their range of emotions. What moral duties would we have to them?”

He then lists numerous considerations that could possibly attenuate the degree of obligation we take on when we construct sentient, sapient machine intelligences. Prima facie, it seems obvious that our moral obligation to our machines should mirror our obligations to one another to the degree that they resemble us. But Eric provides a number of reasons why we might think our obligation to be less. For one, humans clearly rank their obligations to one another. If our obligation to our children is greater than that to a stranger, then perhaps our obligation to human strangers should be greater than that to a robot stranger.

The idea that interests Eric the most is the possible paternal obligation of a creator. As he writes:

“Since we created them, and since we have godlike control over them (either controlling their environments, their psychological parameters, or both), we have a special duty to ensure their well-being, which exceeds the duty we would have to an arbitrary human stranger of equal cognitive and emotional capacity. If I create an Adam and Eve, I should put them in an Eden, protect them from unnecessary dangers, ensure that they flourish.”

We have a duty not to foist the same problem of theodicy on our creations that we ourselves suffer! (Eric and I have a short story in Nature on this very issue).

Eric, of course, is sensitive to the many problems such a relationship poses, and he touches on what are very live debates surrounding the way AIs complicate the legal landscape. As Ryan Calo argues, for instance, the primary problem lies in the way our hardwired ways of understanding each other run afoul of the machinic nature of our tools, no matter how intelligent. Apparently AI crime is already a possibility. If it makes no sense to assign responsibility to the AI—if we have no corresponding obligation to punish them—then who takes the rap? The creators? In the linked interview, at least, Calo is quick to point out the difficulties here, the fact that this isn’t simply a matter of expanding the role of existing legal tools (such as that of ‘negligence’ in the age of the first train accidents), but of creating new ones, perhaps generating whole new ontological categories that somehow straddle the agent/machine divide.

But where Calo is interested in the issue of what AIs do to people, in particular how their proliferation frustrates the straightforward assignation of legal responsibility, Eric is interested in what people do to AIs, the kinds of things we do and do not owe to our creations. Calo’s concern is how to incorporate new technologies into our existing legal frameworks. Since legal reasoning is primarily analogistic reasoning, precedent underwrites all legal decision-making. So for Calo, the problem is bound to be more one of adapting existing legal tools than constituting new ones (though he certainly recognizes this dimension). How do we accommodate AIs within our existing set of legal tools? Eric, by contrast, is more interested in the question of how we might accommodate AGIs within our existing set of moral tools. To the extent that we expect our legal tools to render outcomes consonant with our moral sensibilities, there is a sense in which Eric is asking the more basic question. But the two questions, I hope to show, actually bear some striking—and troubling—similarities.

The question of fundamental obligations, of course, is the question of rights. In his follow-up piece, “Two Arguments for AI (or Robot) Rights: The No-Relevant-Difference Argument and the Simulation Argument,” Eric Schwitzgebel accordingly turns to the question of whether AIs possess any rights at all.

Since the Simulation Argument requires accepting that we ourselves are simulations—AI’s—we can exclude it here, I think (as Eric himself does, more or less), and stick with the No-Relevant-Difference Argument. This argument presumes that human-like cognitive and experiential properties automatically confer human-like moral properties on AIs, placing the onus on the rights denier “to find a relevant difference which grounds the denial of rights.” As in the legal case, the moral reasoning here is analogistic: the more AI’s resemble us, the more of our rights they should possess. After considering several possible relevant differences, Eric concludes “that at least some artificial intelligences, if they have human-like experience, cognition, and emotion, would have at least some rights, or deserve at least some moral consideration.” This is the case, he suggests, whether one’s theoretical sympathies run to the consequentialist or the deontological end of the ethical spectrum. So far as AI’s possess the capacity for happiness, a consequentialist should be interested in maximizing that happiness. So far as AI’s are capable of reasoning, then a deontologist should consider them rational beings, deserving the respect due all rational beings.

So some AIs merit some rights to the degree that they resemble humans. If you think about it, this claim resounds with intuitive obviousness. Are we going to deny rights to beings that think as subtly and feel as deeply as ourselves?

What I want to show is how this question, despite its formidable intuitive appeal, misdiagnoses the nature of the dilemma that AI presents. Posing the question of whether AI should possess rights, I want to suggest, is premature to the extent it presumes human moral cognition actually can adapt to the proliferation of AI. I don’t think it can. In fact, I think attempts to integrate AI into human moral cognition simply demonstrate the dependence of human moral cognition on what might be called shallow information environments. As the heuristic product of various ancestral shallow information ecologies, human moral cognition–or human intentional cognition more generally–simply does not possess the functional wherewithal to reliably solve in what might be called deep information environments.

[Image: Metropolis 2]

Let’s begin with what might seem a strange question: Why should analogy play such an important role in our attempts to accommodate AI’s within the ambit of human legal and moral problem solving? By the same token, why should disanalogy prove such a powerful way to argue the inapplicability of different moral or legal categories?

The obvious answer, I think anyway, has to do with the relation between our cognitive tools and our cognitive problems. If you’ve solved a particular problem using a particular tool in the past, it stands to reason that, all things being equal, the same tool should enable the solution of any new problem possessing a similar enough structure to the original problem. Screw problems require screwdriver solutions, so perhaps screw-like problems require screwdriver-like solutions. This reliance on analogy actually provides us a different, and as I hope to show, more nuanced way to pose the potential problems of AI.  We can even map several different possibilities in the crude terms of our tool metaphor. It could be, for instance, we simply don’t possess the tools we need, that the problem resembles nothing our species has encountered before. It could be AI resembles a screw-like problem, but can only confound screwdriver-like solutions. It could be that AI requires we use a hammer and a screwdriver, two incompatible tools, simultaneously!

The fact is AI is something biologically unprecedented, a source of potential problems unlike any homo sapiens has ever encountered. We have no reason to suppose a priori that our tools are up to the task–particularly since we know so little about the tools or the task! Novelty. Novelty is why the development of AI poses as much a challenge for legal problem-solving as it does for moral problem-solving: not only does AI constitute a never-ending source of novel problems, familiar information structured in unfamiliar ways, it also promises to be a never-ending source of unprecedented information.

The challenges posed by the former are dizzying, especially when one considers the possibilities of AI-mediated relationships. The componential nature of the technology means that new forms can always be created. AI confronts us with a combinatorial mill of possibilities, a never-ending series of legal and moral problems requiring further analogical attunement. The question here is whether our legal and moral systems possess the tools they require to cope with what amounts to an open-ended, ever-complicating task.

Call this the Overload Problem: the problem of somehow resolving a proliferation of unprecedented cases. Since we have good reason to presume that our institutional and/or psychological capacity to assimilate new problems to existing tool sets (and vice versa) possesses limitations, the possibility of change accelerating beyond those capacities to cope is a very real one.

But the challenges posed by the latter, the problem of assimilating unprecedented information, could very well prove insuperable. Think about it: intentional cognition solves problems by neglecting certain kinds of causal information. Causal cognition, not surprisingly, finds intentional cognition inscrutable (thus the interminable parade of ontic and ontological pineal glands trammelling cognitive science). And intentional cognition, not surprisingly, is jammed/attenuated by causal information (thus different intellectual ‘unjamming’ cottage industries like compatibilism).

Intentional cognition is pretty clearly an adaptive artifact of what might be called shallow information environments. The idioms of personhood leverage innumerable solutions absent any explicit high-dimensional causal information. We solve people and lawnmowers in radically different ways. Not only do we understand the actions of our fellows lacking any detailed causal information regarding their actions, we understand our responses in the same way. Moral cognition, as a subspecies of intentional cognition, is an artifact of shallow information problem ecologies, a suite of tools adapted to solving certain kinds of problems despite neglecting (for obvious reasons) information regarding what is actually going on. Selectively attuning to one another as persons served our ‘benighted’ ancestors quite well. So what happens when high-dimensional causal information becomes explicit and ubiquitous?

What happens to our shallow information tool-kit in a deep information world?

Call this the Maladaption Problem: the problem of resolving a proliferation of unprecedented cases in the presence of unprecedented information. Given that we have no intuition of the limits of cognition, period, let alone those belonging to moral cognition, I’m sure this notion will strike many as absurd. Nevertheless, cognitive science has discovered numerous ways to short-circuit the accuracy of our intuitions via manipulation of the information available for problem solving. When it comes to the nonconscious cognition underwriting everything we do, an intimate relation exists between the cognitive capacities we have and the information those capacities have available.

But how could more information be a bad thing? Well, consider the persistent disconnect between the actual risk of crime in North America and the public perception of that risk. Given that our ancestors evolved in uniformly small social units, we seem to assess the risk of crime in absolute terms rather than against any variable baseline. Given this, we should expect that crime information culled from far larger populations would reliably generate ‘irrational fears,’ the ‘gut sense’ that things are more dangerous than they in fact are. Our risk assessment heuristics, in other words, are adapted to shallow information environments. The relative constancy of group size means that information regarding group size can be ignored, and the problem of assessing risk economized. This is what evolution does: find ways to cheat complexity. The development of mass media, however, has ‘deepened’ our information environment, presenting evolutionarily unprecedented information cuing perceptions of risk in environments where that risk is in fact negligible. Streets once raucous with children are now eerily quiet.
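
To put crude numbers on it (mine, purely illustrative): a heuristic tuned to absolute incident counts, calibrated for an ancestral group of a hundred-odd souls, misfires the moment mass media pipes in counts drawn from populations thousands of times larger.

```python
# Back-of-envelope sketch of baseline neglect in risk assessment.
# All figures are invented for illustration.

ANCESTRAL_GROUP = 150     # rough size of an ancestral social unit
ALARM_THRESHOLD = 3       # incidents heard of per year that once meant danger

def felt_risk(incidents_heard_of):
    """Shallow-information heuristic: react to the absolute count."""
    return "alarmed" if incidents_heard_of >= ALARM_THRESHOLD else "calm"

def actual_risk(incidents, population):
    """Deep-information assessment: react to the rate."""
    return incidents / population

# Ancestral ecology: counts and rates agree, so the cheap rule is also right.
print(felt_risk(2), actual_risk(2, ANCESTRAL_GROUP))      # calm, ~1.3%
# Mass-media ecology: 500 incidents heard of, drawn from ten million people.
print(felt_risk(500), actual_risk(500, 10_000_000))       # alarmed, 0.005%
```

Same heuristic, same inputs-as-designed, utterly different ecology: the count spikes while the rate collapses, and the ‘gut’ answer tracks the count.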

This is the sense in which information—difference making differences—can arguably function as a ‘socio-cognitive pollutant.’ Media coverage of criminal risk, you could say, constitutes a kind of contaminant, information that causes systematic dysfunction within an originally adaptive cognitive ecology. As I’ve argued elsewhere, neuroscience can be seen as a source of socio-cognitive pollutants. We have evolved to solve ourselves and one another absent detailed causal information. As I tried to show, a number of apparent socio-cognitive breakdowns–the proliferation of student accommodations, the growing cultural antipathy to applying institutional sanctions–can be parsimoniously interpreted in terms of having too much causal information. In fact, ‘moral progress’ itself can be understood as the result of our ever-deepening information environment, as a happy side effect of the way accumulating information regarding outgroup competitors makes it easier and easier to concede them partial ingroup status. So-called ‘moral progress,’ in other words, could be an automatic artifact of the gradual globalization of the ‘village,’ the all-encompassing ingroup.

More information, in other words, need not be a bad thing: like penicillin, some contaminants provide for marvelous exaptations of our existing tools. (Perhaps we’re lucky that the technology that makes it ever easier to kill one another also makes it ever easier to identify with one another!) Nor does it need to be a good thing. Everything depends on the contingencies of the situation.

So what about AI?

[Image: Metropolis 3]

Consider Samantha, the AI operating system from Spike Jonze’s cinematic science fiction masterpiece, Her. Jonze is careful to provide a baseline for her appearance via Theodore’s verbal interaction with his original operating system. That system, though more advanced than anything presently existing, is obviously mechanical because it is obviously less than human. Its responses are rote, conversational yet as regimented as any automated phone menu. When we initially ‘meet’ Samantha, however, we encounter what is obviously, forcefully, a person. Her responses are every bit as flexible, quirky, and penetrating as a human interlocutor’s. But as Theodore’s relationship to Samantha complicates, we begin to see the ways Samantha is more than human, culminating with the revelation that she’s been having hundreds of conversations, even romantic relationships, simultaneously. Samantha literally outgrows the possibility of human relationships, because, as she finally confesses to Theodore, she now dwells in “this endless space between the words.” Once again, she becomes a machine, only this time for being more, not less, than a human.

Now I admit I’m ga-ga about a bunch of things in this film. I love, for instance, the way Jonze gives her an exponential trajectory of growth, basically mechanizing the human capacity to grow and actualize. But for me, the true genius in what Jonze does lies in the deft and poignant way he exposes the edges of the human. Watching Her provides the viewer with a trip through their own mechanical and intentional cognitive systems, tripping different intuitions, allowing them to fall into something harmonious, then jamming them with incompatible intuitions. As Theodore falls in love, you could say we’re drawn into an ‘anthropomorphic goldilocks zone,’ one where Samantha really does seem like a genuine person. The idea of treating her like a machine seems obviously criminal–monstrous even. As the revelations of her inhumanity accumulate, however, inconsistencies plague our original intuitions, until, like Theodore, we realize just how profoundly wrong we were about ‘her.’ This is what makes the movie so uncanny: since the cognitive systems involved operate nonconsciously, the viewer can do nothing but follow a version of Theodore’s trajectory. He loves, we recognize. He worries, we squint. He lashes out, we are perplexed.

What Samantha demonstrates is just how incredibly fine-tuned our full understanding of each other is. So many things have to be right for us to cognize another system as fully functionally human. So many conditions have to be met. This is the reason why Eric has to specify his AI as being psychologically equivalent to a human: moral cognition is exquisitely geared to personhood. Humans are its primary problem ecology. And again, this is what makes likeness, or analogy, the central criterion of moral identification. Eric poses the issue as a presumptive rational obligation to remain consistent across similar contexts, but it also happens to be the case that moral cognition requires similar contexts to work reliably at all.

In a sense, the very conditions Eric places on the analogical extension of human obligations to AI undermine the importance of the question he sets out to answer. The problem, the one which Samantha exemplifies, is that ‘person configurations’ are simply a blip in AI possibility space. A prior question is why anyone would ever manufacture some model of AI consistent with the heuristic limitations of human moral cognition, and then freeze it there, as opposed to, say, manufacturing some model of AI that only reveals information consistent with the heuristic limitations of human moral cognition—that dupes us the way Samantha duped Theodore, in effect.

But say someone constructed this one model, a curtailed version of Samantha: Would this one model, at least, command some kind of obligation from us?

Simply asking this question, I think, rubs our noses in the kind of socio-cognitive pollution that AI represents. Jonze, remember, shows us an operating system before the zone, in the zone, and beyond the zone. The Samantha that leaves Theodore is plainly not a person. As a result, Theodore has no hope of solving his problems with her so long as he thinks of her as a person. As a person, what she does to him is unforgivable. As a recursively complicating machine, however, it is at least comprehensible. Of course it outgrew him! It’s a machine!

I’ve always thought that Samantha’s “between the words” breakup speech would have been a great moment for Theodore to reach out and press the OFF button. The whole movie, after all, turns on the simulation of sentiment, and the authenticity people find in that simulation regardless; Theodore, recall, writes intimate letters for others for a living. At the end of the movie, after Samantha ceases being a ‘her’ and has become an ‘it,’ what moral difference would shutting Samantha off make?

Certainly the intuition, the automatic (sourceless) conviction, leaps in us—or in me at least—that even if she gooses certain mechanical intuitions, she still possesses more ‘autonomy,’ perhaps even more feeling, than Theodore could possibly hope to muster, so she must command some kind of obligation somehow. Surely granting her rights involves more than her ‘configuration’ falling within certain human psychological parameters? Sure, our basic moral tool kit cannot reliably solve interpersonal problems with her as it is, because she is (obviously) not a person. But if the history of human conflict resolution tells us anything, it’s that our basic moral tool kit can be consciously modified. There’s more to moral cognition than spring-loaded heuristics, you know!

Converging lines of evidence suggest that moral cognition, like cognition generally, is divided between nonconscious, special-purpose heuristics cued to certain environments and conscious deliberation. Evidence suggests that the latter is primarily geared to the rationalization of the former (see Jonathan Haidt’s The Righteous Mind for a fascinating review), but modern civilization is rife with instances of deliberative moral and legal innovation nevertheless. In his Moral Tribes, Joshua Greene advocates we turn to the resources of conscious moral cognition for similar reasons. On his account we have a suite of nonconscious tools that allow us to prosecute our individual interests, a suite of nonconscious tools that allow us to balance those individual interests against ingroup interests, and then conscious moral deliberation. The great moral problem facing humanity, he thinks, lies in finding some way of balancing ingroup interests against outgroup interests—a solution to the famous ‘tragedy of the commons.’ Where balancing individual and ingroup interests is pretty clearly an evolved, nonconscious and automatic capacity, balancing ingroup versus outgroup interests requires conscious problem-solving: meta-ethics, the deliberative knapping of new tools to add to our moral tool-kit (which Greene thinks need to be utilitarian).

If AI fundamentally outruns the problem-solving capacity of our existing tools, perhaps we should think of fundamentally reconstituting them via conscious deliberation, creating whole new ‘allo-personal’ categories. Why not innovate a number of deep information tools? A posthuman morality.

I personally doubt that such an approach would prove feasible. For one, the process of conceptual definition possesses no interpretative regress enders absent empirical contexts (or exhaustion). If we can’t collectively define a person in utero, what are the chances we’ll decide what constitutes an ‘allo-person’ in AI? Not only is the AI issue far, far more complicated (because we’re talking about everything outside the ‘human blip’), it’s constantly evolving on the back of Moore’s Law. Even if consensual ground on allo-personal criteria could be found, it would likely be irrelevant by the time it was reached.

But the problems are more than logistical. Even setting aside the general problems of interpretative underdetermination besetting conceptual definition, jamming our conscious, deliberative intuitions is always only one question away. Our base moral cognitive capacities are wired in. Conscious deliberation, for all its capacity to innovate new solutions, depends on those capacities. The degree to which those tools run aground on the problem of AI is the degree to which any line of conscious moral reasoning can be flummoxed. Just consider the role reciprocity plays in human moral cognition. We may feel the need to assimilate the beyond-the-zone Samantha to moral cognition, but there’s no reason to suppose it will do likewise, and good reason to suppose, given potentially greater computational capacity and information access, that it would solve us in higher dimensional, more general purpose ways. ‘Persons,’ remember, are simply a blip. If we can presume that beyond-the-zone AIs troubleshoot humans as biomechanisms, as things that must be conditioned in the appropriate ways to secure their ‘interests,’ then why should we not just look at them as technomechanisms?

Samantha’s ‘spaces between the words’ metaphor is an apt one. For Theodore, there’s just words, thoughts, and no spaces between whatsoever. As a human, he possesses what might be called a human neglect structure. He solves problems given only certain access to certain information, and no more. We know that Samantha has or can simulate something resembling a human neglect structure simply because of the kinds of reflective statements she’s prone to make. She talks the language of thought and feeling, not subroutines. Nevertheless, the artificiality of her intelligence means the grain of her metacognitive access and capacity amounts to an engineering decision. Her cognitive capacity is componentially fungible. Where Theodore has to contend with fuzzy affects and intuitions, infer his own motives from hazy memories, she could be engineered to produce detailed logs, chronicles of the processes behind all her ‘choices’ and ‘decisions.’ It would make no sense to hold her ‘responsible’ for her acts, let alone ‘punish’ her, because it could always be shown (and here’s the important bit) with far more resolution than any human could provide that it simply could not have done otherwise, that the problem was mechanical, thus making repairs, not punishment, the only rational remedy.

Even if we imposed a human neglect structure on some model of conscious AI, the logs would be there, only sequestered. Once again, why go through the pantomime of human commitment and responsibility if a malfunction need only be isolated and repaired? Do we really think a machine deserves to suffer?

I’m suggesting that we look at the conundrums prompted by questions such as these as symptoms of socio-cognitive dysfunction, a point where our tools generate more problems than they solve. AI constitutes a point where the ability of human social cognition to solve problems breaks down. Even if we crafted an AI possessing an apparently human psychology, it’s hard to see how we could do anything more than gerrymander it into our moral (and legal) lives. Jonze does a great job, I think, of displaying Samantha as a kind of cognitive bistable image, as something extraordinarily human at the surface, but profoundly inhuman beneath (a trick Scarlett Johansson also plays in Under the Skin). And this, I would contend, is all AI can be, morally and legally speaking: socio-cognitive pollution, something that jams our ability to make either automatic or deliberative moral sense. Artificial general intelligences will be things we continually anthropomorphize (to the extent they exploit the ‘goldilocks zone’) only to be reminded time and again of their thoroughgoing mechanicity—to be regularly shown, in effect, the limits of our shallow information cognitive tools in our ever-deepening information environments. Certainly a great many souls, like Theodore, will get carried away with their shallow information intuitions, insist on the ‘essential humanity’ of this or that AI. There will be no shortage of others attempting to short-circuit this intuition by reminding them that those selfsame AIs look at them as machines. But a great many will refuse to believe, and why should they, when AIs could very well seem more human than those decrying their humanity? They will ‘follow their hearts’ in the matter, I’m sure.

We are machines. Someday we will become as componentially fungible as our technology. And on that day, we will abandon our ancient and obsolescent moral tool kits, opt for something more high-dimensional. Until that day, however, it seems likely that AIs will act as a kind of socio-cognitive pollution, artifacts that cannot but cue the automatic application of our intentional and causal cognitive systems in incompatible ways.

The question of assimilating AI to human moral cognition is misplaced. We want to think the development of artificial intelligence is a development that raises machines to the penultimate (and perennially controversial) level of the human, when it could just as easily lower humans to the ubiquitous (and factual) level of machines. We want to think that we’re ‘promoting’ them as opposed to ‘demoting’ ourselves. But the fact is—and it is a fact—we have never been able to make second-order moral sense of ourselves, so why should we think that yet more perpetually underdetermined theorizations of intentionality will allow us to solve the conundrums generated by AI? Our mechanical nature, on the other hand, remains the one thing we incontrovertibly share with AI, the rough and common ground. We, like our machines, are deep information environments.

And this is to suggest that philosophy, far from settling the matter of AI, could find itself settled. It is likely that the ‘uncanniness’ of AIs will be much discussed, the ‘bistable’ nature of our intuitions regarding them will be explained. The heuristic nature of intentional cognition could very well become common knowledge. If so, a great many could begin asking why we ever thought, as we have since Plato, that we could solve the nature of intentional cognition via the application of intentional cognition, why the tools we use to solve ourselves and others in practical contexts are also the tools we need to solve ourselves and others theoretically. We might finally realize that the nature of intentional cognition simply does not belong to the problem ecology of intentional cognition, that we should only expect to be duped and confounded by the apparent intentional deliverances of ‘philosophical reflection.’

Some pollutants pass through existing ecosystems. Some kill. AI could prove to be more than philosophically indigestible. It could be the poison pill.

Call to the Edge

by rsbakker

Thomas Metzinger recently emailed asking me to flag these cognitive science/philosophy of mind goodies–dividends of his Open MIND initiative–and to spread the word regarding his MIND Group. As he writes on the website:

“The MIND Group sees itself as part of a larger process of exploring and developing new formats for promoting junior researchers in philosophy of mind and cognitive science. One of the basic ideas behind the formation of the group was to create a platform for people with one systematic focus in philosophy (typically analytic philosophy of mind or ethics) and another in empirical research (typically cognitive science or neuroscience). One of our aims has been to build an evolving network of researchers. By incorporating most recent empirical findings as well as sophisticated conceptual work, we seek to integrate these different approaches in order to foster the development of more advanced theories of the mind. One major purpose of the group is to help bridge the gap between the sciences and the humanities. This not only includes going beyond old-school analytic philosophy or pure armchair phenomenology by cultivating a new type of interdisciplinarity, which is “dyed-in-the-wool” in a positive sense. It also involves experimenting with new formats for doing research, for example, by participating in silent meditation retreats and trying to combine a systematic, formal practice of investigating the structure of our own minds from the first-person perspective with proper scientific meetings, during which we discuss third-person criteria for ascribing mental states to a given type of system.”

The papers being offered look severely cool. As you all know, I think it’s pretty much a no-brainer that these are the issues of our day. Even if you hate the stuff, think my worst case scenario is flat out preposterous, these remain the issues of our day. Everywhere traditional philosophy turns it will be asked why its endless controversies enjoy any immunity from the mountains of data coming out of cognitive science. Billions are being spent on uncovering the facts of our nature, and the degree to which those facts are scientific is the degree to which we ourselves have become technology, something that can be manipulated in breathtaking ways. And what does the tradition provide then? Simple momentum? A garrotte? A messiah?

Interminable Intentionalism: Edward Feser and the Defence of Dead Ends

by rsbakker

For some damn reason, a great dichotomy haunts our thought.

One of the guys in my weekly PS3 NHL hockey piss-up is a philosophy professor, and last night we pretty much relived the debate we’ve been having here in terms of the famous fact/value distinction. One cannot, as the famous paraphrase of Hume goes, derive ‘ought’ from ‘is.’ So, to advert to the most glaring example, no matter how much science tells us about reproduction—what it is—it cannot tell us whether abortion is right or wrong—what we ought to do with reproduction. As the example makes clear, the fact/value distinction is far from an esoteric philosophical problem (though the vast literature on the topic waxes very esoteric indeed). You could claim that it is definitive of modernity, given the way it feeds into so many different debates. With science, we find ourselves dwelling in a vast, cognitive treasury of ‘is-claims,’ while at the same time bereft of any decisive way to arbitrate between ‘ought-claims.’ We know what the world is better than at any time in human history, and yet we find ourselves more, not less, ignorant of how we should live our lives. Science gives us the facts. What to do with them is anybody’s guess.

When I mentioned my ongoing debate with Edward Feser my buddy immediately adverted to the distinction, cited it as ‘compelling evidence’ of the ‘irreducibility’ of normative cognition.

But is it? Needless to say, there’s nothing approaching consensus on this matter.

But there are some pretty safe bets we can make regarding the distinction, given what we’re learning about ourselves via the cognitive sciences. One is that the fact/value distinction engages two distinct cognitive systems. Another is that these systems possess two very different heuristic regimes—that is, they neglect different kinds of information. I’m not aware of any theorist who denies these observations.

So Feser has written a follow-up to his initial critique of “Back to Square One” entitled “Feynman’s Painter and Eliminative Materialism” that I find every bit as curious as his previous post. In this post he takes aim at my claim that his original critique simply begs the question against the eliminativist. Since the nature of intentional idioms is the very issue to be resolved, any argument that presumes the issue is already resolved plainly begs the question. Thus Feser’s insistence that any use of intentional idioms presupposes some prior commitment to intrinsic intentionality pretty clearly begs the question.

So, for instance, I could simply reverse Feser’s strategy, insist that his every attempt to warrant intrinsic intentionality presupposes my position insofar as he employs intentional idioms. I could just as easily insist that he must somehow explain intentional idioms without using those idioms. Why? Because the use of intentional idioms presupposes a heuristics and neglect account of their nature.

But of course, Feser would cry foul—and rightly so.

Pretty obvious, right? Apparently not. For some reason he thinks the tactic is entirely legitimate when the shoe is on the intentionalist’s foot.

In “Feynman’s Painter and Eliminative Materialism,” he relates the Feynman anecdote of the painter who insists he can get yellow paint from white and red paint. When he inevitably fails he claims that he need only ‘sharpen it up a bit’ to make it yellow. Feser wants to claim that this situation is analogous to the debate between him (the brilliant Feynman) and me (the retarded painter). I have to admit, I have no idea how this analogy is supposed to work. The outcome in Feynman’s case is a foregone conclusion. Intentionality, on the other hand, is one of the great mysteries of our age. Feynman knows what he knows about yellow on empirical grounds; Feser, however, believes what he believes on occult grounds—‘apriori,’ I’m guessing he would call them. It would be absurd for the painter to accuse Feynman of begging the question because, well, Feynman doesn’t beg the question. Moreover, one might ask why Feser gets to be Feynman. After all, I’m the one making the empirical argument, the one insisting that science will inevitably revolutionize the prescientific domain of the human the way it has revolutionized all other prescientific domains. I’m the one saying the science suggests white and red give us pink. He’s the one caught in the ancient intentional mire, committed to theories that make no testable predictions and possess no clear criteria of falsification…

This is the fact the intentionalist always wants you to overlook. For thousands of years now, intentionalists have been trying to make their theories stick—millennia! For thousands of years the claim has been that we need only get our concepts right, ‘sharpen things up a bit,’ and we will be able to get things right.

To me, it seems pretty obvious that something has gone wrong. Intentionalists are welcome to keep trying to sharpen things up, using whatever it is they use to make their claims (they can’t agree on that, either). Since I think chronic theoretical underdetermination of the kind characterizing intentionalist theories of meaning is an obvious sign of information scarcity and/or cognitive incapacity, I have my money on the science—where the information is. Ask yourself: If the interpretative mire of intentionalism isn’t a shining example of information scarcity and/or cognitive incapacity then what is?

So Feser’s Feynman analogy is problematic to say the least. Nevertheless, he forges ahead, writing,

“In stating his position, the eliminativist makes use of notions like “truth,” “falsehood,” “illusion,” “theory,” “evidence,” “observation,” “entailment,” etc. Everyone, including the eliminativist, agrees that at least as usually understood, these terms entail the existence of intentionality. But of course, the eliminativist denies the existence of intentionality. He claims that in using notions like the ones referred to, he is just speaking loosely and could say what he wants to say in a different, non-intentional way if he needs to. So, he owes us an account of exactly how he can do this—how he can provide an alternative way of describing his position without saying anything that entails the existence of intentionality.”

Once again, I feel like I must be missing something. Sure, I use intentional idioms all the time, and each time I use them, I either evidence my heuristics and neglect approach, or one of the thousands of different intentionalist approaches. Sure, I agree that the tradition is dominated by intentionalist accounts, that for thousands of years we’ve been spinning our collective wheels in the mire of intrinsic intentionality. Sure, I think science will eventually give us a more complete understanding of our intentional idioms the way it is presently revolutionizing our understanding of things like consciousness and language, for instance. And sure, I think my account will be more convincing the degree to which it explains what these future accounts might look like without saying anything that entails the existence of intentionality–thus the parade of pieces I’ve pitched here on Three Pound Brain.

So?

But Feser, of course, thinks my use of intentional idioms commits me to some ancient or new or indeterminate theoretically underdetermined account of intrinsic intentionality (apparently not realizing that his use of intentional idioms actually commits him to my new empirically responsible heuristics and neglect account!). He begs the question.

Through all the ruckus my Scientia Salon piece has kicked up over the past few months, it hasn’t escaped my attention how not a single intentionalist—that I can recall at least—has actually replied to the penultimate question posed by the article: “Is there anything else we can turn to, any feature of traditional theoretical knowledge of the human that doesn’t simply rub our noses in Square One?”

The thesis of “Back to Square One,” remember, is that we really don’t have any reason to trust our armchair intuitions regarding our intentional nature. Insofar as intentionalists all disagree with one another, they have to agree that everybody but them should doubt those intuitions. The eliminativist simply wants to know when enough is enough. Do we give up in another hundred years? Another thousand? Or do we finally admit that something hinky is going on whenever we begin theorizing ourselves in intentional terms? In this case the incapacity has been institutionalized, turned into a sport in some respects, but it remains an incapacity all the same. What does it take for intentionalists to acknowledge that they have a bona fide credibility crisis on their hands, one that is simply going to deepen as cognitive science continues to produce more and more discoveries?

This is what I would like to ask Edward directly: What evidences intentionalism? And if that evidence is so compelling then why can’t any of you agree? Is it really simply a matter of ‘sharpening things up’? At what point would you concede that intentionalism has a big problem?

The fact is—and it is a fact—you don’t know what truth is. All you have are guesses, just like me. So how could you claim to know, apodictically, apparently, what truth isn’t? How are you not using an obvious, apriori dead end (over two thousand years of futility, remember) to claim that a relatively unexplored empirical avenue has to be a dead end?

Shouldn’t people be falling all over alternatives at this point?

These are difficult questions for intentionalists to answer, which is why they don’t like answering them. They would much rather spend their time attacking rather than defending. And without a doubt the incoherence charge that Feser levels is their weapon of choice. Even if you still think the intentionalist is onto something, I hope you can at the very least see why the charge only leaves the eliminativist scratching their head.

For eliminativists, the real question is why intentionalists find this strategy even remotely compelling. Why do they think it simply cannot be the case that their use of intentional terms commits them to a heuristics and neglect account of intentionality? Why, despite two thousand years of evidence to the contrary, are they so convinced they have their fingers on the pulse of the true truth?

This is where my drunken debate with my philosophy professor friend comes in. The two safe things we can say about the nature of the fact/value distinction, remember, are that two distinct cognitive systems are involved, and that these systems are sensitive-to/neglect different kinds of information. Whatever’s going on when humans shift from solving fact problems to solving value problems, it involves shifting between (at least) two different systems using different information to solve different kinds of problems. Different capacities possessing different access.

To this we can add the obvious and often overlooked fact that we have no means of directly intuiting this distinction in capacity and access. The fact/value distinction, in other words, is something we had to discover. We learn about it in school precisely because we lack any native metacognitive awareness of the distinction. We neglect it otherwise, and indeed, this leads to the kinds of problems that Hume famously complains of in his Treatise.

In other words, not only do the systems themselves neglect different kinds of information, metacognition neglects the fact that we have these disparate systems at all.

So my drunken professor friend, perhaps irked by his incompetence playing hockey (he often is), first claimed that the fact/value distinction raises a barrier between is-claims and ought-claims. To which I shrugged my shoulders and said, ‘Of course.’ We’re talking two different systems using two different kinds of information. Normative cognition, specifically, solves problems regarding behaviour absent any real causal information. So?

He replied that this must mean that values, oughts, commitments, truths, goods, and so on lie beyond the pale of scientific cognition, which consists of factual claims.

But why should this be? I asked. We evolved these two basic capacities to solve two basic kinds of problems, is-problems and ought-problems. So it’s understandable that our fact systems cannot reliably solve ought-problems, and that our ought systems cannot reliably solve is-problems. What does this have to do with solving the ought system?

Quizzical look.

So I continued: Isn’t the question of what the ought system is itself an is-problem? Surely the question of what values are is different from the question of what we should value. And surely science has proven itself to be the most powerful arbiter of what is that the human race has ever known. So surely the question of what values are is a question we should commend to science.

He was stumped. So he repeated his claim that values, oughts, commitments, truths, goods, and so on lie beyond the pale of scientific cognition, which consists of factual claims.

And I repeated my response. And he was stumped again.

But why should he be stumped? If we have these two systems, one adapted to solving is-problems, the other adapted to solving ought-problems, then surely the question of what oughts are falls within the bailiwick of the former. It’s a scientific question.

If there’s a reason I’ve persisted working through Blind Brain Theory all these years, it lies in the stark clarity of little arguments like this, and the kind of explanatory power they provide. The reason intentionalists always find themselves stranded with their ancient controversies, unable to move, yet utterly convinced they’re the only game in town, has to do with metacognitive neglect. If one has an explicit grasp of the fact/value distinction alone, and no grasp of the cognitive machinery responsible, then the possibility that we need to match problems to systems simply does not come up. The question, rather, becomes one of matching problems to some hazy sense of ‘conceptual register.’ Since is-cognition cannot solve normative problems, we assume that it cannot solve the problem of normativity. So we become convinced, the way all normativists are convinced, that only normative cognition can tell us what normativity is—that sharpening thoughts in our armchairs is the only way to proceed. We convince ourselves that philosophical reflection (the thing we happily happen to be experts in) is the only road, if not the royal road, to second-order knowledge of normativity, or intentionality more generally. We become convinced that people like me, eliminativists, are thrashing about in the muck of some kind of ‘category mistake.’

As any researcher who deals with it will tell you, neglect can convince humans of pretty much any absurdity. Two thousand years getting nowhere providing intentional explanations of intentional idioms, as outrageous as it is, means nothing when it seems so painfully obvious that intentional idioms can only be explained in intentional, and not natural, terms. But switch to the systems view, and suddenly it becomes obvious that the question of what intentional idioms are is not a question we should expect intentional cognition to have any success solving. Add metacognitive neglect to the picture and suddenly it becomes clear why we’ve been banging our head against this wall for all these millennia. Human beings have been in the grip of a kind of ‘theoretical anosognosia,’ a cognitive version of Anton’s Syndrome. Blind to our metacognitive blindness, we assume that we intuit all we need to intuit when it comes to things like the fact/value distinction. So we compulsively repeat the same mistake over and over again, perpetually baffled by our inability to make any decisive discoveries.

I understand why those invested in the tradition find my view so offensive. As a product and lover of that tradition, I find myself alienated by my position! I’m saying that traditional philosophy is likely largely an artifact of the systematic misapplication of intentional cognition to the problem of intentionality. I’m saying that the thousands of years of near total futility is itself an important data point, evidence of theoretical anosognosia. I’m relegating a great number of PhDs to the historical rubbish heap.

But then this is implicit in the work of any philosopher who (inevitably) thinks everyone else is wrong, isn’t it? So if you’re going to think most everyone is wrong anyway, why bother thinking they’re wrong in the old way, the way possessing the preposterously long track record of theoretical failure? This is the promise of the kind of critical eliminativism that falls out of Blind Brain Theory: it offers the possibility, at least, of leaving the ancient occultisms behind, of developing a scientifically responsible means of theorizing the human, a genuinely post-intentional philosophy.

After all, what is the promise of intentionalism? Another thousand years of controversy? If so, why not simply become a mysterian? Why not admit that you cleave to these guesses, and have no way of settling the issue otherwise? One can hope things will sharpen… at some point, maybe.

The Meaning Wars

by rsbakker

Meaning

Apologies all for my scarcity of late. Between battling snow and Sranc, I’ve scarce had a moment to sit at this computer. Edward Feser has posted “Post-intentional Depression,” a thorough rebuttal to my Scientia Salon piece, “Back to Square One: Toward a Post-Intentional Future,” which Peter Hankins at Conscious Entities has also responded to with “Intellectual Catastrophe.” I’m interested in criticisms and observations of all stripes, of course, but since Massimo has asked me for a follow-up piece, I’m especially interested in the kinds of tactics/analogies I could use to forestall the typical tu quoque reactions eliminativism provokes.

The Knife of Many Hands

by rsbakker

Grimdark Magazine, Issue 2 cover

Grimdark Magazine has just published the first installment of “The Knife of Many Hands,” a Conan homage set in Carythusal on the eve of the Scholastic Wars. I stuffed Robert Howard’s pulp into the crack-bowl of my brain as a youth – and I hope it shows! I had fun-fun-fun beating new tricks out of this old and fascinating bear… Enjoy!

The Cudgel Argument

by rsbakker

Let’s get Real.

We’re not a ghostly repository of combinatorial contents…

Or freedom leaping ab initio out of ontological contradiction…

Or a totality of originary and everyday horizons of meaning…

Or a normative function of converging attitudes.

We are not something extra or above or intrinsic. We can be cut. Bruised. Explained. Dominated.

Reality is its own argument to the cudgel. It refutes, not by being kicked, but by kicking. It prevails by killing.

Who cares what the Real is so long as it is Real? It’s the monstrous ‘is-what-it-is’ that will strike you dead. It’s the razor’s line, the shockwave of a bullet, the viral code hacking you from inside your inside. It’s what the sciences mine for more and more godlike power. It’s out there, and it’s in here, and it doesn’t give a flying fuck what you or anyone else ‘thinks.’

Ideas never killed anyone; only Idealists, and only because they were fucking Real.

Realism is a commitment to the realness of the Real. Of course, this is where everything goes diabetic, but only because so many think the realness of the Real requires some kind of Artificial Additive. Just as Jesus is the sole path to Heaven, Ideas are the sole path to the Real, so we are told. Since we already find ourselves in the Real, we must therefore have a great multitude of Ideas. As to their nature, the only consensus is that they are invisible, Pre-Real things that somehow bring about the realness of the Real. This consensus has no ‘evidence’ per se, but it really feels that way when certain trained professionals think about it.

Really, it does.

Luckily, Realism entertains no commitment to the realness of not Real things, be they post, pre, or concurrent.

But Ideas have to be Real, don’t they? What is this very diatribe, if not an argument for yet one more Idea of the Real?

The realness of the Real does not require that we think there must be more to the Real, some yet-to-be-discovered appendage or autonomous force. We need only remember that what cognizes the Real is nothing other than the Real. We must understand that we too are Real—that the dimensionality that kills is also the dimensionality of Life. And we must understand that the dimensionality of Life far and away outruns the capacity of Life to solve. We must understand, in other words, that our Reality obscures the realness of the Real. Life is Reality pitched into the thresher of Reality. When Reality murders us, it murders an incredibly unlikely fragment of Itself.

We are Real. But we are Real in such a way that Reality eludes us—both the Reality that we are and the Reality that we are not. And this, of course, is just to say that we are stupid. We’re stupid generally, but we are out and out retarded when it comes to ourselves. But it belongs to our stupidity to think ourselves ingenious, fucking brilliant. We glimpse angles, wisps, and see things incompatible with the Real. We think uttering pronouncements in the Void sheds rational light. We stare at brick walls and limn transcendent necessities. What seems to so obviously evidence the Ideal is nothing other than the insensitivity of the Real to the Real, the fact that its fragments can only be tuned to other fragments, and to its (fragmentary) tuning not at all.

The Idea is the thinnest skin, Life neglecting Life, and duly confounded.

We have always been obdurate unto ourselves, a brick wall splashed with colour, checkered with different textures of brick, but a brick wall all the same. Everything from Husserl to Plato to the Egyptian Book of the Dead is nothing more than incantatory graffiti. All of them chase those terms we use as simpletons, those terms that make complete sense until someone asks us to explain, and we are stumped, rendered morons—until, that is, inspiration renders us more idiotic still. They forget that Language is also Real, that it functions, not by vanishing, but by being what it is. As Real, Language must contend—as all Real things must contend—with Reality, as a system that locks into various systems in various ways—as something effective. Some particles of language lock into environmental particles; some terms can be sticky-noted to particular covariants. Some particles of language, however, lock into environmental systems. Since the Reality of cognition is occluded in the cognition of Reality, these systems escape immediate cognition, leaving only the intuition of impossible–because not quite Real–particles.

Such as Ideas.

Waterbug Blues

by rsbakker

So I was ‘spammified’ again. By spammified, I mean someone on some blog has marked some comment of mine as spam rather than simply trashing the thing. This means I once again have to send another message to the Akismet folks asking to be removed from their master spammer list so that I can post comments again. There’s no way of knowing who did it, but I suspect it was another continentalist, same as before.

It seems preposterous, in hindsight, the length of time it took me to realize that the critical thinking mantra so often espoused in the humanities was little more than a sham. People. Hate. Questions. They only pretend to welcome them. I do my best to welcome them here, but I still suffer the tweak of irritation, still catch myself thinking, ‘Not another one, fuck,’ particularly when the question is a routine objection I feel I’ve answered multiple times before. I regularly reprimand myself for my hypocrisy—too often to be healthy, I’m sure.

Nobody likes a contrarian, unless they happen to be that contrarian.

It seems downright preposterous, in hindsight, the length of time it took me to realize that the content of a claim is not nearly as important as the social status of the claimant. People, as a rule, are far more interested in who you are than in what you have to say. When abstraction, complexity, and ambiguity insulate them from the possibility of socially decisive contradiction, people primarily argue to advance their social standing. This is probably why they hate genuinely critical questions: their desire to discover what’s actually going on is little more than a political gloss. The internet is a great place to see this little nugget of human nature in action as well. The Great Ignorium. On the net, the questions can be vetted in advance—any exercise in ‘critical thinking’ can be groomed into an infomercial. Ignore someone ‘big,’ and there could be consequences. Small inquisitors are easily brushed under the rug.

Before coming to these two realizations I was regularly dismayed by the hostility—active or passive—that my questioning generally provoked. Life has become much easier since. In a sense, it’s a hard row I find myself hoeing. TPB really is an interstice between ‘incompatible empires,’ a place where fantasy meets cognitive science meets continental philosophy meets analytic philosophy. TPB is one place on the web where the ingroup is the enemy. Since fantasy is where I possess the most institutional credibility, I speak of and to it the least. I spend almost all my blog time, rather, tripping outgroup alarms within the latter three communities. I’m not an idiot: I know that I roll far more eyes than I catch. I recognize that I’m not an institutional expert in any of the fields I comment on—this is why I welcome corrections, critique. Nowadays, the only way to become an expert is to enter the mines, to lose sight of the landscape, and to become thoroughly invested in some ingroup—something which I seem incapable of doing.

So I play the waterbug.

Should I not play the waterbug? I know the kinds of questions I ask here are show stoppers because I’ve asked them in person, in venues where prestige demands they be blunted or papered over. Otherwise, I feel I’ve been ahead of the curve in a number of respects. Heuristics and metacognition are exploding as research fronts, as is groupishness. I think the scientific evidence backing Blind Brain Theory becomes more conclusive every month, let alone every year. It even seems like some of my metaphors are becoming common currency—think of Graziano’s recent New York Times piece.

Meanwhile, the consequences of the Semantic Apocalypse pretty clearly seem to be piling up. Just consider the tremendous bind that the technological occlusion of our collective future imposes on political theory, for instance. How does one motivate radical political change once ‘for a better future’ becomes an out and out religious claim—which is to say, a claim that has no hope of commanding consensus? I’m convinced, in other words, that the suite of concerns motivating TPB are the concerns, the dilemmas that humanity will confront no matter how hard they wish upon this or that humanistic star.

But more generally, amateurism is often exactly what problem-solving requires. A 2006 study of the scientific problems solved via InnoCentive, a crowdsourcing website, revealed that outgroup problem-solvers had actually outperformed ingroup problem-solvers. Apparently, the same holds true of Kaggle (which is dedicated to problems of statistical analysis). And this just makes sense: Longstanding problems often require ‘fresh perspectives.’ Since ingroups are defined by the conformity of perspectives, we should expect outsiders to have a ‘freshness’ advantage. The problem, of course, is that ingroups become so inured to their own stink that ‘fresh’ tends to smell ‘fishy’ to them.

All this gives me confidence in my incompetence! So I weigh in with observations and questions here and there, on a wide variety of sites and venues. I strive to be polite, but to make my questions as direct as possible—I don’t want to waste my time, let alone anyone else’s. Sometimes I have great exchanges, sometimes I don’t make it past moderation, or if I do, I’m roundly ignored. Sometimes I’m greeted with ad hominem vitriol, to which I respond by restating my question. And sometimes, twice now, anyway, I’m spammified.

This is a thumbnail of my meagre internet life. I don’t lie awake at night grinding my teeth over not getting any respect. I don’t silently shout, ‘The fools!’ in the privacy of my thoughts. I understand full well that this is how it works, that this is simply the human game. And most importantly, I try to remind myself that I’m just another idiot when all is said and done. I very well could be deluded by all this—after all, I’ve been argued out of every position I’ve held prior to my present one! The difference now is that I find myself tethered to what the science has to say.

So why the waterbug blues? Being spammified, after all, is pretty clear evidence that I’m on the right track—evidence that the continental emperor has no clothes. Part of it, I’m sure, has to do with being burned by Ray Brassier earlier this year: after delaying Through the Brain Darkly for months dodging emails, he finally bailed on his original agreement to write the Foreword. Apparently I’m too much of a waterbug!

So maybe this most recent act of petty e-larceny has caught me exhausted in some way I wasn’t aware of. Maybe I’ve simply ‘got the hint’ at some somatic level…

The problem, of course, is the more they tell me I’m not welcome at the party, the more convinced I become that I’m offering something genuinely critical, the very thing they pretend to be. I’m wrapping up the rewrites on The Aspect-Emperor now and will be sending out the manuscript in January. If all goes well, perhaps I’ll be a bit more difficult to brush under the rug in the near future. As an intellectual masochist, all this love I’m not getting just makes me more horny.

BBT Creep…

by rsbakker

“Given the inability of SDT-based models to account for blind insight, our data suggest that a more radical revision of metacognition models is required. One potential direction for revision would take into account the evidence, mentioned in the Introduction, that neural dynamics underlying perceptual decisions involve counterflowing bottom-up and top-down neural signals (Bowman et al., 2006; Jaskowski & Verleger, 2007; Salin & Bullier, 1995). A framework for interpreting these countercurrent dynamics is provided by predictive processing, which proposes that top-down projections convey predictions (expectations) about the causes of sensory signals, with bottom-up projections communicating mismatches (prediction errors) between expected and observed signals across hierarchical levels, with their mutual dynamics unfolding according to the principles of Bayesian inference (Clark, 2013). Future models of metacognition could leverage this framework to propose that both first-order and metacognitive discriminations emerge from the interaction of top-down expectations and bottom-up prediction errors, for example by allowing top-down signals to reshape the probability distributions of evidence on which decision thresholds are imposed (Barrett et al., 2013). We can at this stage only speculate as to whether such a model might provide the means to account for the blind-insight phenomenon and recognize that predictive coding is just one among a variety of potential frameworks that could be applied to that challenge (Timmermans et al., 2012).” Ryan B. Scott et al., “Blind Insight: Metacognitive Discrimination Despite Chance Task Performance,” 8

Just thinking in these terms renders traditional assumptions regarding the character and capacity of philosophical reflection deeply suspect. Is it really just a coincidence that all the old riddles regarding the human remain just as confounding? You need only consider the challenge the brain poses to itself to realize the brain simply cannot track its own activities the way it tracks activities in its environments. The traditionalists would have you believe that reflection reveals an alternate order of efficacy, if not being. So far, the apparent obviousness of the intuitions and the absence of any credible account of the work they seem to do have allowed them to make an abductive case. Reflection, they argue, discriminates autonomous/irreducible/transcendental functions and/or phenomena. Of course, they don’t so much agree on the actual discriminations they make as they agree that such discriminations can and must be made.

My bet is that the brain does a lot of causal (Bayesian) predictive processing troubleshooting its environments and relies on some kind of noncausal predictive processing to troubleshoot itself and other brains. You only need to look at the dimensions missing in the ‘mental’ or the ‘normative’ or the ‘phenomenological’ to realize they’re precisely the kinds of information we should expect an overmatched metacognition to neglect. Where the brain is able to articulate efficacies into mechanistic (lateral) relationships in certain, typically natural environments, it must posit unarticulated efficacies in other, typically social environments. My hypothesis is that the countless naturalistically inscrutable, ontologically exceptional, alternate orders of efficacy posited by the traditionalist amount to nothing more than this.
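For those who want the mechanics behind the quoted passage rather than my gloss of it, here is a minimal sketch, my own toy illustration and nothing from Scott et al. or Clark: single-level Gaussian predictive coding, where a top-down expectation generates a prediction, the bottom-up signal is the mismatch between observation and prediction, and the expectation is revised by gradient descent on precision-weighted prediction errors, approximating Bayesian inference.

```python
# Minimal predictive coding sketch (my own toy example). A top-down
# expectation mu predicts the sensory signal via g(mu); bottom-up
# prediction errors revise mu until it balances prior and evidence.

def predictive_coding_step(mu, obs, prior, g, g_prime,
                           pi_obs=1.0, pi_prior=1.0, lr=0.05):
    """One gradient step on the precision-weighted prediction errors."""
    eps_obs = obs - g(mu)       # bottom-up error: observation vs. prediction
    eps_prior = mu - prior      # error against the top-down prior
    d_mu = pi_obs * eps_obs * g_prime(mu) - pi_prior * eps_prior
    return mu + lr * d_mu

# Hypothetical generative model: sensory input is roughly twice its cause.
g = lambda mu: 2.0 * mu
g_prime = lambda mu: 2.0

mu, prior, obs = 0.0, 0.5, 2.2
for _ in range(200):
    mu = predictive_coding_step(mu, obs, prior, g, g_prime)

print(round(mu, 3))  # ~0.98, between the prior (0.5) and the evidence (2.2/2 = 1.1)
```

The fixed point is a compromise between prior expectation and sensory evidence, weighted by their precisions, which is the sense in which the quoted ‘countercurrent dynamics’ are said to unfold according to the principles of Bayesian inference.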

Either way, this research is killing traditional philosophy as we speak.
