Three Pound Brain

No bells, just whistling in the dark…


Intentional Philosophy as the Neuroscientific Explananda Problem

by rsbakker

The problem is basically that the machinery of the brain has no way of tracking its own astronomical dimensionality; it can at best track problem-specific correlational activity via various heuristic hacks. We lack not only the metacognitive bandwidth, but also the metacognitive access required to formulate the explananda of neuroscientific investigation.

A curious consequence of the neuroscientific explananda problem is the glaring way it reveals our blindness to ourselves, our medial neglect. The mystery has always been one of understanding constraints, the question of what comes before we do. Plans? Divinity? Nature? Desires? Conditions of possibility? Fate? Mind? We’ve always been grasping for ourselves, I sometimes think, such was the strategic value of metacognitive capacity in linguistic social ecologies. The thing to realize is that grasping, the process of developing the capacity to report on our experience, was bootstrapped out of nothing and so comprised the sum of all there was to the ‘experience of experience’ at any given stage of our evolution. Our ancestors had to be at once implicitly obvious and explicitly impenetrable to themselves past various degrees of questioning.

We’re just the next step.

What is it we think we want as our neuroscientific explananda? The various functions of cognition. What are the various functions of cognition? Nobody can seem to agree, thanks to medial neglect, our cognitive insensitivity to our cognizing.

Here’s what I think is a productive way to interpret this conundrum.

Generally what we want is a translation between the manipulative and the communicative. It is the circuit between these two general cognitive modes that forms the cornerstone of what we call scientific knowledge. A finding that cannot be communicated is not a finding at all. The thing is, this—knowledge itself—all functions in the dark. We are effectively black boxes to ourselves. In all math and science—all of it—the understanding communicated is a black box understanding, one lacking any natural understanding of that understanding.

Crazy but true.

What neuroscience is after, of course, is a natural understanding of understanding, a way to peer into the black box. Neuroscientists want manipulations they can communicate, actionable explanations of explanation. The problem is that they have only heuristic, low-dimensional cognitive access to themselves: they quite simply lack the metacognitive access required to resolve interpretive disputes, and so remain incapable of formulating the explananda of neuroscience in any consensus-commanding way. In fact, a great many remain convinced, on intuitive grounds, that the explananda sought, even if they could be canonically formulated, would necessarily remain beyond the pale of neuroscientific explanation. Heady stuff, given the historical track record of the institutions involved.

People need to understand that the fact of a neuroscientific explananda problem is the fact of our outright ignorance of ourselves. We quite simply lack the information required to decide what it is we’re explaining. What we call ‘philosophy of mind’ is a kind of metacognitive ‘crash space,’ a point where our various tools seem to function, but nothing ever comes of it.

The low-dimensionality of the information begets underdetermination, underdetermination begets philosophy, philosophy begets overdetermination. The idioms involved become ever more plastic, more difficult to sort and arbitrate. Crash space bloats. In a sense, intentional philosophy simply is the neuroscientific explananda problem, the florid consequence of our black box souls.

The thing that can purge philosophy is the thing that can tell you what it is.

BBT Creep: The Inherence Heuristic

by rsbakker

Exciting stuff! For years now the research has been creeping toward my grim semantic worst-case scenario, but “The inherence heuristic” is getting close, very close, especially the way it explicitly turns on the importance of heuristic neglect. The pieces have been there for quite some time; now researchers are beginning to put them together.

One way of looking at blind brain theory’s charge against intentionalism is that so-called intentional phenomena are pretty clear-cut examples of inherence heuristics as discussed in this article, ways to handle complex systems absent any causal handle on those systems. When Cimpian and Salomon write,

“To reiterate, the pool of facts activated by the mental shotgun for the purpose of generating an explanation for a pattern may often be heavily biased toward the inherent characteristics of that pattern’s constituents. As a result, when the storytelling part of the heuristic process takes over and attempts to make sense of the information at its disposal, it will have a rather limited number of options. That is, it will often be forced to construct a story that explains the existence of a pattern in terms of the inherent features of the entities within that pattern rather than in terms of factors external to it. However, the one-sided nature of the information delivered by the mental shotgun is not an impediment to the storytelling process. Quite the contrary – the less information is available, the easier it will be to fit it all into a coherent story.” 464

I think they are also describing what’s going on when philosophers attempt to theoretically solve intentionality, intentional cognition, relying primarily on the resources of intentional cognition. In fact, once you understand the heuristic nature of intentional cognition, the interminable nature of intentional philosophy becomes very easy to understand. We have no way of carving the complexities of cognition at the joints of the world, so we carve them at the joints of the problem instead. When your neighbour repairs your robotic body servant, rather than cognizing all the years he spent training to be a spy before being inserted into your daily routines, you ‘attribute’ ‘knowledge’ to him, something miraculously efficacious in its own right, inherent. And for the vast majority of problems you encounter, it works. Then the philosopher asks, ‘What is knowledge?’ and because adducing causal information scrambles our intuitions of ‘inherence,’ he declares only intentional idioms can cognize intentional phenomena, and the species remains stumped to this very day. Exactly as we should expect. Why should we think tools adapted to do without information regarding our nature can decode their own nature? What would this ‘nature’ be?

The best way to understand intentional philosophy, on a blind brain view, is as a discursive ‘crash space,’ a point where the application of our cognitive tools outruns their effectiveness in ways near and far. I’ve spent the last few years, now, providing various diagnoses of the kinds of theoretical wrecks we find in this space. Articles such as this convince me I won’t be alone for much longer!

So, to give a brief example: once one understands the degree to which intentional idioms turn on ‘inherence heuristics’–ways to manage causal systems absent any behavioural sensitivity to the mechanics of those systems–you can see the deceptiveness of things like ‘intentional stances,’ the way they provide an answer that functions more like a get-out-of-jail-free card than any kind of explanation.

Given that ‘intentional stances’ belong to intentional cognition, the fact that intentional cognition solves problems by neglecting what is actually going on reflects rather poorly on the theoretical fortunes of the intentional stance. The fact is, ‘intentional stances’ leave us with a very low-dimensional understanding of our actual straits when it comes to understanding cognition–as we should expect, given that they utilize a low-dimensional heuristic system geared to solving practical problems on the fly and theoretical problems not at all.

All along I’ve been trying to show the way heuristics allow us to solve the explanatory gap, to finally get rid of intentional occultisms like the intentional stance and replace them with a more austere, and more explanatorily comprehensive picture. Now that the cat’s out of the bag, more and more cognitive scientists are going to explore the very real consequences of heuristic neglect. They will use it to map out the neglect structure of the human brain in ever finer detail, thus revealing where our intuitions trip over their own heuristic limits, and people will begin to see how thought can be construed as mangles of parallel-distributed processing meat. It will be clear that the ‘real patterns’ are not the ones required to redeem reflection, or its jargon. Nothing can do that now. Mark my words, inherence heuristics have a bright explanatory future.

Bonfire bright.

Alien Philosophy (cont’d)

by rsbakker

B: Thespian Souls

Given a convergent environmental and biological predicament, we can suppose our Thespians would have at least flirted with something resembling Aristotle’s dualism of heaven and earth. But as I hope to show, the ecological approach pays even bigger theoretical dividends when one considers what has to be the primary domain of human philosophical speculation: ourselves.

With evolutionary convergence, we can presume our Thespians would be eusocial, [1] displaying the same degree of highly flexible interdependence as us. This observation, as we shall see, possesses some startling consequences. Cognitive science is awash in ‘big questions’ (philosophy), among them the problem of what is typically called ‘mindreading,’ our capacity to explain/predict/manipulate one another on the basis of behavioural data alone. How do humans regularly predict the output of something so preposterously complicated as human brains on the basis of so little information?

The question is equally applicable to our Thespians, who would, like humans, possess formidable socio-cognitive capacities. As potent as those capacities were, however, we can also suppose they would be bounded, and—here’s the thing—radically so. When one Thespian attempts to cognize another, they, like us, will possess no access whatsoever to the biological systems actually driving behaviour. This means that Thespians, like us, would need to rely on so-called ‘fast and frugal heuristics’ to solve each other. [2] That is to say, they would possess systems geared to the detection of specific information structures, behavioural precursors that reliably correlate with, as opposed to cause, various behavioural outcomes. In other words, we can assume that Thespians will possess a suite of powerful, special-purpose tools adapted to solving systems in the absence of causal information.
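For what it’s worth, the bare logic of such a tool is easy to caricature in code. The Python sketch below apes the lexicographic structure of Gigerenzer-style ‘take-the-best’: consult cues in order of assumed validity, let the first discriminating cue decide, and neglect everything else. The cue names and validity numbers are pure inventions for illustration; nothing here pretends to model actual social cognition, only the shape of a correlational shortcut.

```python
# A toy, lexicographic 'fast and frugal' heuristic in the spirit of
# Gigerenzer's take-the-best. Cue names and validities are invented.
CUES = [
    ("gaze_on_object", 0.90),  # (behavioural precursor, assumed validity)
    ("hand_oriented", 0.75),
    ("body_leaning", 0.60),
]

def predict_actor(agent_a, agent_b):
    """Guess which of two agents is about to act, from behavioural
    precursors alone: no access to the neural causes of behaviour."""
    for cue, _validity in CUES:  # consult cues in order of validity
        a, b = agent_a.get(cue, False), agent_b.get(cue, False)
        if a != b:               # the first discriminating cue decides
            return "A" if a else "B"
    return "no prediction"       # cues exhausted; defer or guess

print(predict_actor({"gaze_on_object": True}, {"body_leaning": True}))  # -> A
```

The point of the caricature is the neglect: the procedure never consults, and never needs to consult, anything resembling the causal machinery driving behaviour.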

Evolutionary convergence means Thespians would understand one another (as well as other complex life) in terms that systematically neglect their high-dimensional, biological nature. As suggestive as this is, things get really interesting when we consider the way Thespians pose the same basic problem of computational intractability (the so-called ‘curse of dimensionality’) to themselves as they do to their fellows. The constraints pertaining to Thespian social cognition, in other words, also apply to Thespian metacognition, particularly with respect to complexity. Each Thespian, after all, is just another Thespian, and so poses the same basic challenge to metacognition as they pose to social cognition. By sheer dint of complexity, we can expect the Thespian brain would remain opaque to itself as such. This means something that will turn out to be quite important: namely that Thespian self-understanding, much like ours, would systematically neglect their high-dimensional, biological nature. [3]

This suggests that life, and intelligent life in particular, would increasingly stand out as a remarkable exception as the Thespians cobbled together a mechanical understanding of nature. Why so? Because it seems a stretch to suppose they would possess a capacity so extravagant as accurate ‘meta-metacognition.’ Lacking such a capacity would strand them with disparate families of behaviours and entities, each correlated with different intuitions, which would have to be recognized as such before any taxonomy could be made. Some entities and behaviours could be understood in terms of mechanical conditions, while others could not. So as extraordinary as it sounds, it seems plausible to think that our Thespians, in the course of their intellectual development, would stumble across some version of their own ‘fact-value distinction.’ All we need do is posit a handful of ecological constraints.

But of course things aren’t nearly so simple. Metacognition may solve Thespians in the same ‘fast and frugal’ manner as social cognition does, but it entertains a far different relationship to its putative target. Unlike social cognition, which tracks functionally distinct systems (others) via the senses, metacognition is literally hardwired to the systems it tracks. So even though metacognition faces the same computational challenge as social cognition—cognizing a Thespian—it requires a radically different set of tools to do so. [4]

It serves to recall that evolved intelligence is environmentally oriented intelligence. Designs thrive or vanish depending on their ability to secure the resources required to successfully reproduce. Because of this, we can expect that all intelligent aliens, not just Thespians, would possess high-dimensional cognitive relations with their environments. Consider our own array of sensory modalities, how the environmental here and now ‘hogs bandwidth.’ The degree to which your environment dominates your experience is the degree to which you’re filtered to solve your environments. We live in the world simply because we’re distilled from it, the result of billions of years of environmental tuning. We can presume our aliens would be thoroughly ‘in the world’ as well, that the bulk of their cognitive capacities would be tasked with the behavioural management of their immediate environments for similar evolutionary reasons.

Since all cognitive capacities are environmentally selected, we can expect whatever basic metacognitive capacity the Thespians possess will also be geared to the solution of environmental problems. Thespian metacognition will be an evolutionary artifact of getting certain practical matters right in certain high-impact environments, plain and simple. Add to this the problem of computational intractability (which metacognition shares with social cognition) and it becomes almost certain that Thespian metacognition would consist of multiple fast and frugal heuristics (because solving on the basis of scarce data requires fewer, not more, parameters geared to particular information structures to be effective). [5] We have very good reason to suspect the Thespian brain would access and process its own structure and dynamics in ways that would cut far more corners than joints. As is the case with social cognition, it would belong to Thespian nature to neglect Thespian nature—to cognize the cognizer as something other, something geared to practical contexts.
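That parenthetical claim, that fewer parameters can beat more when data are scarce, is the ‘less-is-more’ effect documented by Gigerenzer’s group (see note [5] below), and a toy simulation suffices to see it. Everything in the following sketch is invented for the purpose: a five-cue world with one dominant cue, in which a single-cue estimator competes against full least-squares regression on samples of various sizes.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CUES = 5
TRUE_W = np.array([1.0, 0.5, 0.25, 0.12, 0.06])  # one dominant cue (invented)

def make_data(n):
    """A toy world: outcome = weighted sum of cues + unit noise."""
    X = rng.normal(size=(n, N_CUES))
    return X, X @ TRUE_W + rng.normal(size=n)

def least_squares(X, y):
    # The 'complex' strategy: estimate all five weights from the sample.
    return np.linalg.lstsq(X, y, rcond=None)[0]

def one_cue(X, y):
    # The 'fast and frugal' strategy: keep only the single cue most
    # correlated with the outcome; ignore the rest entirely.
    best = int(np.argmax([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(N_CUES)]))
    w = np.zeros(N_CUES)
    w[best] = np.cov(X[:, best], y)[0, 1] / np.var(X[:, best])
    return w

X_test, y_test = make_data(10_000)
for n in (8, 20, 200):  # tiny to modest training samples
    for fit in (least_squares, one_cue):
        mse = np.mean([np.mean((X_test @ fit(*make_data(n)) - y_test) ** 2)
                       for _ in range(300)])
        print(f"n={n:3d}  {fit.__name__:13s}  test MSE ~ {mse:.2f}")
```

On the smallest samples the one-cue strategy tends to win, because it has less estimation variance to pay for; only as data accumulate does the full model’s flexibility pay off. Nothing about the example is biological, of course; it only illustrates why scarcity favours frugality.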

Thespians would cognize themselves and their fellows via correlational, as opposed to causal, heuristic cognition. The curse of dimensionality necessitates it. It’s hard, I think, to overstate the impact this would have on an alien species attempting to cognize their nature. What it means is that the Thespians would possess a way to engineer systematically efficacious comportments to themselves, each other, even their environments, without being able to reverse engineer those relationships. What it means, in other words, is that a great deal of their knowledge would be impenetrable—tacit, implicit, automatic, or what have you. Thespians, like humans, would be able to solve a great many problems regarding their relations to themselves, their fellows, and their world without possessing the foggiest idea of how. The ignorance here is structural ignorance, as opposed to the ignorance, say, belonging to original naivete. One would expect the Thespians would be ignorant of their nature absent the cultural scaffolding required to unravel the mad complexity of their brains. But the problem isn’t simply that Thespians would be blind to their inner nature; they would also be blind to this blindness. Since their metacognitive capacities consistently yield the information required to solve in practical, ancestral contexts, the application of those capacities to the theoretical question of their nature would be doomed from the outset. Our Thespians would consistently get themselves wrong.

Is it fair to say they would be amazed by their incapacity, the way our ancestors were? [6] Maybe—who knows. But we could say, given the ecological considerations adduced here, that they would attempt to solve themselves assuming, at least initially, that they could be solved, despite the woefully inadequate resources at their disposal.

In other words, our Thespians would very likely suffer what might be called theoretical anosognosia. In clinical contexts, anosognosia applies to patients who, due to some kind of pathology, exhibit unawareness of sensory or cognitive deficits. Perhaps the most famous example is Anton-Babinski Syndrome, where physiologically blind patients persistently claim they can in fact see. This is precisely what we could expect from our Thespians vis a vis their ‘inner eye.’ The function of metacognitive systems is to engineer environmental solutions via the strategic uptake of limited amounts of information, not to reverse engineer the nature of the brains they belong to. Repurposing these systems means repurposing systems that generally take the adequacy of their resources for granted. When we catch our tongue at Christmas dinner, we just do; we ‘implicitly assume’ the reliability of our metacognitive capacity to filter our speech. It seems wildly implausible to suppose that theoretically repurposing these systems would magically engender a new biological capacity to automatically assess the theoretical viability of the resources available. It stands to reason, rather, that we would assume sufficiency the same as before, only to find ourselves confounded after the fact.

Of course, saying that our Thespians suffer theoretical anosognosia amounts to saying they would suffer chronic, theoretical hallucinations. And once again, ecological considerations provide a way to guess at the kinds of hallucinations they might suffer.

Dualism is perhaps the most obvious. Aristotle, recall, drew his conclusions assuming the sufficiency of the information available. Contrasting the circular, ageless, repeating motion of the stars and planets to the linear riot of his immediate surroundings, he concluded that the celestial and the terrestrial comprised two distinct ontological orders governed by different natural laws, a dichotomy that prevailed for some 1800 years. The moral is quite clear: Where and how we find ourselves within a system determines what kind of information we can access regarding that system, including information pertaining to the sufficiency of that information. Lacking instrumentation, Aristotle simply found himself in a position where the ontological distinction between heaven and earth appeared obvious. Unable to cognize the limits imposed by his position within the observed systems, he had no idea that he was simply cognizing one unified system from two radically different perspectives, one too near, the other too far.

Trapped in a similar structural bind vis a vis themselves, our navel-gazing Thespians would almost certainly mistake properties pertaining to neglect for properties pertaining to what is, distortions in signal for facts of being. Once again, since the posits possessing those properties belong to correlative cognitive systems, they would resist causal cognition. No matter how hard Thespian philosophers tried, they would find themselves unable to square their apparent functions with the machinations of nature more generally. Correlative functions would appear autonomous, as somehow operating outside the laws of nature. Embedded in their environment in a manner that structurally precludes accurately intuiting that embedment, our alien philosophers would conceive themselves as something apart, ontologically distinct. Thespian philosophy would have its own versions of ‘souls’ or ‘minds’ or ‘Dasein’ or ‘a priori’ or what have you—a disparate order somehow ‘accounting’ for various correlative cognitive modes, by anchoring the bare cognition of constraint in posits (inherited or not) rationalized on the back of Thespian fashion.

Dualisms, however, require that manifest continuities be explained, or explained away. Lacking any ability to intuit the actual machinations binding them to their environments, Thespians would be forced to rely on the correlative deliverances of metacognition to cognize their relation to their world—doing so, moreover, without the least inkling of as much. Given theoretical anosognosia (the inability to intuit metacognitive incapacity), it stands to reason that they would advance any number of acausal versions of this relationship, something similar to ‘aboutness,’ and so reap similar bewilderment. Like us, they would find themselves perpetually unable to decisively characterize ‘knowledge of the world.’ One could easily imagine the perpetually underdetermined nature of these accounts convincing some Thespian philosophers that the deliverances of metacognition comprised the whole of existence (engendering Thespian idealism), or were at least the most certain, most proximate thing, and therefore required the most thorough and painstaking examination (engendering a Thespian phenomenology)…

Could this be right?

This story is pretty complex, so it serves to review the modesty of our working assumptions. The presumption of interstellar evolutionary convergence warranted assuming that Thespian cognition, like human cognition, would be bounded, a complex bundle of ‘kluges,’ heuristic solutions to a wide variety of ecological problems. The fact that Thespians would have to navigate both brute and intricate causal environments, troubleshoot both inorganic and organic contexts, licenses the claim that Thespian cognition would be bifurcated between causal systems and a suite of correlational systems, largely consisting of ‘fast and frugal heuristics,’ given the complexity and/or the inaccessibility of the systems involved. This warranted claiming that both Thespian social cognition and metacognition would be correlational, heuristic systems adapted to solve very complicated ecologies on the basis of scarce data. This posed the inevitable problem of neglect, the fact that Thespians would have no intuitive way of assessing the adequacy of their metacognitive deliverances once they applied them to theoretical questions. This let us suppose theoretical anosognosia, the probability that Thespian philosophers would assume the sufficiency of radically inadequate resources—systematically confuse artifacts of heuristic neglect for natural properties belonging to extraordinary kinds. And this let us suggest they would have their own controversies regarding mind-body dualism, intentionality, even knowledge of the external world.

As with Thespian natural philosophy, any number of caveats can be raised at any number of junctures, I’m sure. What if, for instance, Thespians were simply more pragmatic, less inclined to suffer speculation in the absence of decisive application? Such a dispositional difference could easily tilt the balance in favour of skepticism, relegating the philosopher to the ghettos of Thespian intellectual life. Or what if Thespians were more impressed by authority, to the point where reflection could only be interrogated as refracted through the lens of purported revelation? There can be no doubt that my account neglects countless relevant details. Questions like these chip away at the intuition that the Thespians, or something like them, might be real.

Luckily, however, this doesn’t matter. The point of posing the problem of xenophilosophy wasn’t so much to argue that Thespians are out there, as it was, strangely enough, to recognize them in here.

After all, this exercise in engineering alien philosophy is at once an exercise in reverse-engineering our own. Blind Brain Theory only needs Thespians to be plausible to demonstrate its abductive scope, the fact that it can potentially explain a great many perplexing things on nature’s dime alone.

So then what have we found? That traditional philosophy is something best understood as… what?

A kind of cognitive pathology?

A disease?


IV: Conclusion

It’s worth, I think, spilling a few words on the subject of that damnable word, ‘experience.’ Dogmatic eliminativism is a religion without gods or ceremony, a relentlessly contrarian creed. And this has placed it in the untenable dialectical position of apparently denying what is most obvious. After all, what could be more obvious than experience?

What do I mean by ‘experience’? Well, the first thing I generally think of is the Holocaust, and the palpable power of the Survivor.

Blind Brain Theory paints a theoretical portrait wherein experience remains the most obvious thing in practical, correlational ecologies, while becoming a deeply deceptive, largely chimerical artifact in high-dimensional, causal ones. We have no inkling of tripping across ecological boundaries when we propose to theoretically examine the character of experience. What was given to deliberative metacognition in some practical context (ruminating upon a social gaffe, say) is now simply given to deliberative metacognition in an artificial one—‘philosophical reflection.’ The difference between applications is nothing if not extreme, and yet conclusions are drawn assuming sufficiency, again and again and again—for millennia.

Think of the difference between your experience and your environment, say, in terms of the difference between concentrating on a mental image of your house and actually observing it. Think of how few questions the mental image can answer compared to the visual image. Where’s the grass the thickest? Is there birdshit on the lane? Which branch comes closest to the ground? These questions just don’t make sense in the context of mental imagery.

Experience, like mental imagery, is something that only answers certain questions. Of course, the great, even cosmic irony is that this is the answer that has been staring us in the fucking face all along. Why else would experience remain an enduring part of philosophy, the institution that asks how things in the most general sense hang together in the most general sense without any rational hope of answer?

Experience is obvious—it can be nothing but obvious. The palpable power of the Holocaust Survivor is, I think, as profound a testament to the humanity of experience as there is. Their experience is automatically our own. Even philosophers shut up! It correlates us in a manner as ancient as our species, allows us to engineer the new. At the same time, it cannot but dupe and radically underdetermine our ancient, Sisyphean ambition to peer into the soul through the glass of the soul. As soon as we turn our rational eye to experience in general, let alone the conditions of possibility of experience, we run afoul of illusions, impossible images that, in our diseased state, we insist are real.

This is what our creaking bookshelves shout in sum. The narratives, they proclaim experience in all its obvious glory, while treatise after philosophical treatise mutters upon the boundary of where our competence quite clearly comes to an end. Where we bicker.

Christ.

At least we have reason to believe that philosophers are not alone in the universe.


Notes

[1] In the broad sense proposed by Wilson in The Social Conquest of the Earth.

[2] This amounts to taking a position in the mindreading debate that some theorists would find problematic, particularly those skeptical of modularity and/or with representationalist sympathies. Since the present account provides a parsimonious means of explaining away the intuitions informing both positions, it would be premature to engage the debate regarding either at this juncture. The point is to demonstrate what heuristic neglect, as a theoretical interpretative tool, allows us to do.

[3] The representationalist would cry foul at this point, claiming that the existence of some coherent ‘functional level’ accessible to deliberative metacognition (the mind) allows for accurate and exhaustive description. But once again, since heuristic neglect explains why we’re so prone to develop intuitions along these lines, we can sidestep this debate as well. Nobody knows what the mind is, or whatever it is they take themselves to be describing. The more interesting question is whether a heuristic neglect account can be squared with the research pertaining directly to this field. I suspect so, but for the interim I leave this to individuals more skilled and more serious than myself to investigate.

[4] In the literature, accounts that claim metacognitive functions for mindreading are typically called ‘symmetrical theories.’ Substantial research supports the claim that metacognitive reporting involves social cognition. See Carruthers, “How we know our own minds: the relationship between mindreading and metacognition,” for an outstanding review.

[5] Gerd Gigerenzer and the Adaptive Behaviour and Cognition Research Group have demonstrated that simple heuristics are often far more effective than even optimization methods possessing far greater resources. “As the amount of data available to make predictions in an environment shrinks, the advantage of simple heuristics over complex algorithms grows” (Hertwig and Hoffrage, “The Research Agenda,” Simple Heuristics in a Social World, 23).

[6] “What, then, is time? Who can explain it easily and briefly? Who can comprehend it in thought so as to put the answer into words? Yet what do we mention in conversation more familiarly and knowingly than time? And surely we understand it when we speak of it; we understand it also when we hear another speak of it. What, then, is time? If no one asks me, I know; if I wish to explain it to one who asks, I do not know.” (Augustine, Confessions, XI.14)

Alien Philosophy

by rsbakker

The highest species concept may be that of a terrestrial rational being; however, we shall not be able to name its character because we have no knowledge of non-terrestrial rational beings that would enable us to indicate their characteristic property and so to characterize this terrestrial being among rational beings in general. It seems, therefore, that the problem of indicating the character of the human species is absolutely insoluble, because the solution would have to be made through experience by means of the comparison of two species of rational being, but experience does not offer us this. (Kant: Anthropology from a Pragmatic Point of View, 225)


Are there alien philosophers orbiting some faraway star, opining in bursts of symbolically articulated smells, or parsing distinctions-without-differences via the clasp of neural genitalia? What would an alien philosophy look like? Do we have any reason to think we might find some of them recognizable? Do the Greys have their own version of Plato? Is there a little green Nietzsche describing little green armies of little green metaphors?


I: The Story Thus Far

A couple years back, I published a piece in Scientia Salon, “Back to Square One: Toward a Post-intentional Future,” that challenged the intentional realist to warrant their theoretical interpretations of the human. What is the nature of the data that drives their intentional accounts? What kind of metacognitive capacity can they bring to bear?

I asked these questions precisely because they cannot be answered. The intentionalist has next to no clue as to the nature, let alone the provenance, of their data, and even less inkling as to the metacognitive resources at their disposal. They have theories, of course, but it is the proliferation of theories that is precisely the problem. Make no mistake: the failure of their project, their consistent inability to formulate their explananda, let alone provide any decisive explanations, is the primary reason why cognitive science devolves so quickly into philosophy.

But if chronic theoretical underdetermination is the embarrassment of intentionalism, then theoretical silence has to be the embarrassment of eliminativism. If meaning realism offers too much in the way of theory—endless, interminable speculation—then meaning skepticism offers too little. Absent plausible alternatives, intentionalists naturally assume intrinsic intentionality is the only game in town. As a result, eliminativists who use intentional idioms are regularly accused of incoherence, of relying upon the very intentionality they’re claiming to eliminate. Of course eliminativists will be quick to point out the question-begging nature of this criticism: they need not posit an alternate theory of their own to dispute intentional theories of the human. But they find themselves in a dialectical quandary, nonetheless. In the absence of any real theory of meaning, they have no substantive way of actually contributing to the domain of the meaningful. And this is the real charge against the eliminativist, the complaint that any account of the human that cannot explain the experience of being human is barely worth the name. [1] Something has to explain intentional idioms and phenomena, their apparent power and peculiarity. If not intrinsic or original intentionality, then what?

My own project, however, pursues a very different brand of eliminativism. I started my philosophical career as an avowed intentionalist, a one-time Heideggerean and Wittgensteinian. For decades I genuinely thought philosophy had somehow stumbled into ‘Square Two.’ No matter what doubts I entertained regarding this or that intentional account, I was nevertheless certain that some intentional account had to be right. I was invested, and even though the ruthless elegance of eliminativism made me anxious, I took comfort in the standard shibboleths and rationalizations. Scientism! Positivism! All theoretical cognition presupposes lived life! Quality before quantity! Intentional domains require intentional yardsticks!

Then, in the course of writing a dissertation on fundamental ontology, I stumbled across a new, privative way of understanding the purported plenum of the first-person, a way of interpreting intentional idioms and phenomena that required no original meaning, no spooky functions or enigmatic emergences—nor any intentional stances for that matter. Blind Brain Theory begins with the assumption that theoretically motivated reflection upon experience co-opts neurobiological resources adapted to far different kinds of problems. As a co-option, we have no reason to assume that ‘experience’ (whatever it amounts to) yields what philosophical reflection requires to determine the nature of experience. Since the systems are adapted to discharge far different tasks, reflection has no means of determining scarcity and so generally presumes sufficiency. It cannot source the efficacy of rules so rules become the source. It cannot source temporal awareness so the now becomes the standing now. It cannot source decisions so decisions (the result of astronomically complicated winner-take-all processes) become ‘choices.’ The list goes on. From a small set of empirically modest claims, Blind Brain Theory provides what I think is the first comprehensive, systematic way to both eliminate and explain intentionality.
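(As an aside, the ‘winner-take-all’ gloss is easy to cartoon. The toy race below, all of its numbers invented, accumulates noisy evidence for two options until one crosses a threshold; deliberative metacognition, on the picture being sketched, reports only the winner, never the race.)

```python
import random

# A toy winner-take-all race: options accumulate noisy evidence until
# one crosses a threshold. All parameters are invented for illustration.
def race(drifts, threshold=10.0, noise=1.0):
    scores = [0.0] * len(drifts)
    while max(scores) < threshold:
        for i, drift in enumerate(drifts):
            scores[i] += drift + random.gauss(0, noise)
    return scores.index(max(scores))

# Option 0 enjoys slightly stronger support, yet option 1 still wins often:
wins = sum(race([0.6, 0.5]) == 0 for _ in range(1000))
print(f"option 0 wins {wins} of 1000 races")
```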

In other words, my reasons for becoming an eliminativist were abductive to begin with. I abandoned intentionalism, not because of its perpetual theoretical disarray (though this had always concerned me), but because I became convinced that eliminativism could actually do a better job explaining the domain of meaning. Where old school, ‘dogmatic eliminativists’ argue that meaning must be natural somehow, my own ‘critical eliminativism’ shows how. I remain horrified by this how, but then I also feel like a fool for ever thinking the issue would end any other way. If one takes mediocrity seriously, then we should expect that science will explode, rather than canonize our prescientific conceits, no matter how near or dear.

But how to show others? What could be more familiar, more entrenched than the intentional philosophical tradition? And what could be more disparate than eliminativism? To quote Dewey from Experience and Nature, “The greater the gap, the disparity, between what has become a familiar possession and the traits presented in new subject-matter, the greater is the burden imposed upon reflection” (Experience and Nature, ix). Since the use of exotic subject matters to shed light on familiar problems is as powerful a tool for philosophy as for my chosen profession, speculative fiction, I propose to consider the question of alien philosophy, or ‘xenophilosophy,’ as a way to ease the burden. What I want to show is how, reasoning from robust biological assumptions, one can plausibly claim that aliens—call them ‘Thespians’—would also suffer their own versions of our own (hitherto intractable) ‘problem of meaning.’ The degree to which this story is plausible, I will contend, is the degree to which critical eliminativism deserves serious consideration. It’s the parsimony of eliminativism that makes it so attractive. If one could combine this parsimony with a comprehensive explanation of intentionality, then eliminativism would very quickly cease to be a fringe opinion.


II: Aliens and Philosophy

Of course, the plausibility of humanoid aliens possessing any kind of philosophy requires the plausibility of humanoid aliens. In popular media, aliens are almost always exotic versions of ourselves, possessing their own exotic versions of the capacities and institutions we happen to have. This is no accident. Science fiction is always about the here and now—about recontextualizations of what we know. As a result, the aliens you meet tend to seem suspiciously humanoid, psychologically if not physically. Spock always has some ‘mind’ with which to ‘meld’. To ask the question of alien philosophy, one might complain, is to buy into this conceit, which although flattering, is almost certainly not true.

And yet the environmental filtration of mutations on earth has produced innumerable examples of convergent evolution, different species evolving similar morphologies and functions, the same solutions to the same problems, using entirely different DNA. As you might imagine, however, the notion of interstellar convergence is a controversial one. [2] Supposing the existence of extraterrestrial intelligence is one thing—cognition is almost certainly integral to complex life elsewhere in the universe—but we know nothing about the kinds of possible biological intelligences nature permits. Short of actual contact with intelligent aliens, we have no way of gauging how far we can extrapolate from our case. [3] All too often, ignorance of alternatives dupes us into making ‘only game in town assumptions,’ so confusing mere possibility with necessity. But this debate need not worry us here. Perhaps the cluster of characteristics we identify with ‘humanoid’ expresses a high-probability recipe for evolving intelligence—perhaps not. Either way, our existence proves that our particular recipe is on file, that aliens we might describe as ‘humanoid’ are entirely possible.

So we have our humanoid aliens, at least as far as we need them here. But the question of what alien philosophy looks like also presupposes we know what human philosophy looks like. In “Philosophy and the Scientific Image of Man,” Wilfrid Sellars defines the aim of philosophy as comprehending “how things in the broadest possible sense of the term hang together in the broadest possible sense of the term” (1). Philosophy famously attempts to comprehend the ‘big picture.’ The problem with this definition is that it overlooks the relationship between philosophy and ignorance, and so fails to distinguish philosophical inquiry from scientific or religious inquiry. Philosophy is invested in a specific kind of ‘big picture,’ one that acknowledges the theoretical/speculative nature of its claims, while remaining beyond the pale of scientific arbitration. Philosophy is better defined, then, as the attempt to comprehend how things in general hang together in general absent conclusive information.

All too often philosophy is understood in positive terms, either as an archive of theoretical claims, or as a capacity to ‘see beyond’ or ‘peer into.’ On this definition, however, philosophy characterizes a certain relationship to the unknown, one where inquiry eschews supernatural authority, and yet lacks the methodological, technical, and institutional resources of science. Philosophy is the attempt to theoretically explain in the absence of decisive warrant, to argue general claims that cannot, for whatever reason, be presently arbitrated. This is why questions serve as the basic organizing principles of the institution, the shared boughs from which various approaches branch and twig in endless disputation. Philosophy is where we ponder the general questions we cannot decisively answer, grapple with ignorances we cannot readily overcome.


III: Evolution and Ecology

A: Thespian Nature

It might seem innocuous enough defining philosophy in privative terms as the attempt to cognize in conditions of information scarcity, but it turns out to be crucial to our ability to make guesses regarding potential alien analogues. This is because it transforms the question of alien philosophy into a question of alien ignorance. If we can guess at the kinds of ignorance a biological intelligence would suffer, then we can guess at the kinds of questions they would ask, as well as the kinds of answers that might occur to them. And this, as it turns out, is perhaps not so difficult as one might suppose.

The reason is evolution. Thanks to evolution, we know that alien cognition would be bounded cognition, that it would consist of ‘good enough’ capacities adapted to multifarious environmental, reproductive impediments. Taking this ecological view of cognition, it turns out, allows us to make a good number of educated guesses. (And recall, plausibility is all that we’re aiming for here).

So for instance, we can assume tight symmetries between the sensory information accessed, the behavioural resources developed, and the impediments overcome. If gamma rays made no difference to their survival, they would not perceive them. Gamma rays, for Thespians, would be unknown unknowns, at least pending the development of alien science. The same can be said for evolution, planetary physics—pretty much any instance of theoretical cognition you can adduce. Evolution assures that cognitive expenditures, the ability to intuit this or that, will always be bound in some manner to some set of ancestral environments. Evolution means that information that makes no reproductive difference makes no biological difference.

An ecological view, in other words, allows us to naturalistically motivate something we might have been tempted to assume outright: original naivete. The possession of sensory and cognitive apparatuses comparable to our own means Thespians will possess a humanoid neglect structure, a pattern of ignorances they cannot even begin to question, that is, pending the development of philosophy. The Thespians would not simply be ignorant of the microscopic and macroscopic constituents and machinations explaining their environments, they would be oblivious to them. Like our own ancestors, they wouldn’t even know they didn’t know.

Theoretical knowledge is a cultural achievement. Our Thespians would have to learn the big picture details underwriting their immediate environments, undergo their own revolutions and paradigm shifts as they accumulate data and refine interpretations. We can expect them to possess an implicit grasp of local physics, for instance, but no explicit, theoretical understanding of physics in general. So Thespians, it seems safe to say, would have their own version of natural philosophy, a history of attempts to answer big picture questions about the nature of Nature in the absence of decisive data.

Not only can we say their nascent, natural theories will be underdetermined, we can also say something about the kinds of problems Thespians will face, and so something of the shape of their natural philosophy. For instance, needing only the capacity to cognize movement within inertial frames, we can suppose that planetary physics would escape them. Quite simply, without direct information regarding the movement of the ground, the Thespians would have no sense of the ground changing position. They would assume that their sky was moving, not their world. Their cosmological musings, in other words, would begin supposing ‘default geocentrism,’ an assumption that would only require rationalization once others, pondering the movement of the skies, began posing alternatives.

One need only read On the Heavens to appreciate how the availability of information can constrain a theoretical debate. Given the imprecision of the observational information at his disposal, for instance, Aristotle’s stellar parallax argument becomes well-nigh devastating. If the earth revolves around the sun, then surely such a drastic change in position would impact our observations of the stars, the same way driving into a city via two different routes changes our view of downtown. But Aristotle, of course, had no decisive way of fathoming the preposterous distances involved—nor did anyone, until Galileo turned his Dutch Spyglass to the sky. [4]

Aristotle, in other words, was victimized not so much by poor reasoning as by various perspectival illusions following from a neglect structure we can presume our Thespians share. And this warrants further guesses. Consider Aristotle’s claim that the heavens and the earth comprise two distinct ontological orders. Of course purity and circles rule the celestial, and of course grit and lines rule the terrestrial—that is, given the evidence of the naked eye from the surface of the earth. The farther away something is, the less information observation yields, the fewer distinctions we’re capable of making, the more uniform and unitary it is bound to seem—which is to say, the less earthly. An inability to map intuitive physical assumptions onto the movements of the firmament, meanwhile, simply makes those movements appear all the more exceptional. In terms of the information available, it seems safe to suppose our Thespians would at least face the temptation of Aristotle’s long-lived ontological distinction.

I say ‘temptation,’ because certainly any number of caveats can be raised here. Heliocentrism, for instance, is far more obvious in our polar latitudes (where the earth’s rotation is as plain as the summer sun in the sky), so there are observational variables that could have drastically impacted the debate even in our own case. Who knows? If it weren’t for the consistent failure of ancient heliocentric models to make correct predictions (the models assumed circular orbits), things could have gone differently in our own history. The problem of where the earth resides in the whole might have been short-lived.

But it would have been a problem all the same, simply because the motionlessness of the earth and the relative proximity of the heavens would have been our (erroneous) default assumptions. Bound cognition suggests our Thespians would find themselves in much the same situation. Their world would feel motionless. Their heavens would seem to consist of simpler stuff following different laws. Any Thespian arguing heliocentrism would have to explain these observations away, argue how they could be moving while standing still, how the physics of the ground belongs to the physics of the sky.

We can say this because, thanks to an ecological view, we can make plausible empirical guesses as to the kinds of information and capacities Thespians would have available. Not only can we predict what would have remained unknown unknowns for them, we can also predict what might be called ‘unknown half-knowns.’ Where unknown unknowns refer to things we can’t even question, unknown half-knowns refer to theoretical errors we cannot perceive simply because the information required to do so remains—you guessed it—unknown unknown.

Think of Plato’s allegory of the cave. The chained prisoners confuse the shadows for everything because, unable to move their heads from side to side, they just don’t ‘know any different.’ This is something we understand so intuitively we scarce ever pause to ponder it: the absence of information or cognitive capacity has positive cognitive consequences. Absent certain difference making differences, the ground will be cognized as motionless rather than moving, and celestial objects will be cognized as simples rather than complex entities in their own right. The ground might as well be motionless and the sky might as well be simple as far as evolution is concerned. Once again, distinctions that make no reproductive difference make no biological difference. Our lack of radio telescope eyes is no genetic or environmental fluke: such information simply wasn’t relevant to our survival.

This means that a propensity to theorize ‘ground/sky dualism’ is built into our biology. This is quite an incredible claim, if you think about it, but each step in our path has been fairly conservative, given that mere plausibility is our aim. We should expect Thespian cognition to be bounded cognition. We should expect them to assume the ground motionless, and the constituents of the sky simple. We can suppose this because we can suppose them to be ignorant of their ignorances, just as we were (and remain). Cognizing the ontological continuity of heaven and earth requires the proper data for the proper interpretation. Given a roughly convergent sensory predicament, it seems safe to say that aliens would be prone as we were to mistake differences in signal for differences in being, and so would have to discover the universality of nature the same as we did.

But if we can assume our Thespians—or at least some of them—would be prone to misinterpret their environments the way we did, what about themselves? For centuries now humanity has been revising and sharpening its understanding of the cosmos, to the point of drafting plausible theories regarding the first second of creation, and yet we remain every bit as stumped regarding ourselves as Aristotle. Is it fair to say that our Thespians would suffer the same millennial myopia?

Would they have their own version of our interminable philosophy of the soul?


Notes

[1] The eliminativism at issue here is meaning eliminativism, and not, as Stich, Churchland, and many others have advocated, psychological eliminativism. But where meaning eliminativism clearly entails psychological eliminativism, it is not at all obvious that psychological eliminativism entails meaning eliminativism. This was why Stich found himself so perplexed by the implications of reference (see his Deconstructing the Mind, especially Chapter 1). To assume that folk psychology is a mistaken theory is to assume that folk psychology is representational, something that is true or false of the world. The critical eliminativism espoused here suffers no such difficulty, but at the added cost of needing to explain meaning in general, and not simply commonsense human psychology.

[2] See Kathryn Denning’s excellent, “Social Evolution in Cosmic Context,” http://www.nss.org/resources/library/spacepolicy/Cosmos_and_Culture_NASA_SP4802.pdf

[3] Nicolas Rescher, for instance, makes hash of the time-honoured assumption that aliens would possess a science comparable to our own by cataloguing the myriad contingencies of the human institution. See Finitude, 28, or Unknowability, “Problems of Alien Cognition,” 21-39.

[4] Stellar parallax, on this planet at least, was not measured until 1838.

Introspection Explained

by rsbakker

[Image: Las Meninas]

So I couldn’t get past the first paper in Thomas Metzinger’s excellent Open MIND offering without having to work up a long-winded blog post! Tim Bayne’s “Introspective Insecurity” offers a critique of Eric Schwitzgebel’s Perplexities of Consciousness, which is my runaway favourite book on introspection (and consciousness, for that matter). This alone might have sparked me to write a rebuttal, but what I find most extraordinary about the case Bayne lays out against introspective skepticism is the way it directly implicates Blind Brain Theory. His defence of introspective optimism, I want to show, actually vindicates an even more radical form of pessimism than the one he hopes to domesticate.

In the article, Bayne divides the philosophical field into two general camps, the introspective optimists, who think introspection provides reliable access to conscious experience, and introspective pessimists, who do not. Recent years have witnessed a sea change in philosophy of mind circles (one due in no small part to Schwitzgebel’s amiable assassination of assumptions). The case against introspective reliability has grown so prodigious that what Bayne now terms ‘optimism’–introspection as a possible source of metaphysically reliable information regarding the mental/phenomenal–would have been considered rank introspective pessimism not so long ago. The Cartesian presumption of ‘self-transparency’ (as Carruthers calls it in his excellent The Opacity of Mind) has died a sudden death at the hands of cognitive science.

Bayne identifies himself as one of these new optimists. What introspection needs, he claims, is a balanced account, one sensitive to the vulnerabilities of both positions. Where proponents of optimism have difficulty accounting for introspective error, proponents of pessimism have difficulty accounting for introspective success. Whatever it amounts to, introspection is characterized by perplexing failures and thoughtless successes. As he writes in his response piece, “The epistemology of introspection is that it is not flat but contains peaks of epistemic security alongside troughs of epistemic insecurity” (“Introspection and Intuition,” 1). Since any final theory of introspection will have to account for this mixed ‘epistemic profile,’ Bayne suggests that it provides a useful speculative constraint, a way to sort the metacognitive wheat from the chaff.

According to Bayne, introspective optimists motivate their faith in the deliverances of introspection on the basis of two different arguments: the Phenomenological Argument and the Conceptual Argument. He restricts his presentation of the phenomenological argument to a single quote from Brie Gertler’s “Renewed Acquaintance,” which he takes as representative of his own introspective sympathies. As Gertler writes of the experience of pinching oneself:

When I try this, I find it nearly impossible to doubt that my experience has a certain phenomenal quality—the phenomenal quality it epistemically seems to me to have, when I focus my attention on the experience. Since this is so difficult to doubt, my grasp of the phenomenal property seems not to derive from background assumptions that I could suspend: e.g., that the experience is caused by an act of pinching. It seems to derive entirely from the experience itself. If that is correct, my judgment registering the relevant aspect of how things epistemically seem to me (this phenomenal property is instantiated) is directly tied to the phenomenal reality that is its truthmaker. “Renewed Acquaintance,” Introspection and Consciousness, 111.

When attending to a given experience, it seems indubitable that the experience itself has distinctive qualities that allow us to categorize it in ways unique to first-person introspective, as opposed to third-person sensory, access. But if we agree that the phenomenal experience—as opposed to the object of experience—drives our understanding of that experience, then we agree that the phenomenal experience is what makes our introspective understanding true. “Introspection,” Bayne writes, “seems not merely to provide one with information about one’s experiences, it seems also to ‘say’ something about the quality of that information” (4). Introspection doesn’t just deliver information, it somehow represents these deliverances as true.

Of course, this doesn’t make them true: we need to trust introspection before we can trust our (introspective) feeling of introspective truth. Or do we? Bayne replies:

it seems to me not implausible to suppose that introspection could bear witness to its own epistemic credentials. After all, perceptual experience often contains clues about its epistemic status. Vision doesn’t just provide information about the objects and properties present in our immediate environment, it also contains information about the robustness of that information. Sometimes vision presents its take on the world as having only low-grade quality, as when objects are seen as blurry and indistinct or as surrounded by haze and fog. At other times visual experience represents itself as a highly trustworthy source of information about the world, such as when one takes oneself to have a clear and unobstructed view of the objects before one. In short, it seems not implausible to suppose that vision—and perceptual experience more generally—often contains clues about its own evidential value. As far as I can see there is no reason to dismiss the possibility that what holds of visual experience might also hold true of introspection: acts of introspection might contain within themselves information about the degree to which their content ought to be trusted. 5

Vision is replete with what might be called ‘information information,’ features that indicate the reliability of the information available. Darkness is a prime example, insofar as it provides visual information to the effect that visual information is missing. Our every glance is marbled with such ‘more than meets the eye’ indicators. As we shall see, this analogy to vision will come back to haunt Bayne’s thesis. The thing to keep in mind is that the cognition of missing information always requires more information. For the nonce, however, his claim is modest enough to grant: as it stands, we cannot rule out the possibility that introspection, like exospection, reliably indicates its own reliability. As such, the door to introspective optimism remains open.

Here we see the ‘foot-in-the-door strategy’ that Bayne adopts throughout the article, where his intent isn’t so much to decisively warrant introspective optimism as it is to point out and elucidate the ways that introspective pessimism cannot decisively close the door on introspection.

The conceptual motivation for introspective optimism turns on the necessity of epistemic access implied in the very concept of ‘what-it-is-likeness.’ The only way for something to be ‘like something’ is for it to be like something for somebody. “[I]f a phenomenal state is a state that there is something it is like to be in,” Bayne writes, “then the subject of that state must have epistemic access to its phenomenal character” (5). Introspection has to be doing some kind of cognitive work, otherwise “[a] state to which the subject had no epistemic access could not make a constitutive contribution to what it was like for that subject to be the subject that it was, and thus it could not qualify as a phenomenal state” (5-6).

The problem with this argument, of course, is that it says little about the epistemic access involved. Apart from some unspecified ability to access information, it really implies very little. Bayne convincingly argues that the capacity to cognize differences, make discriminations, follows from introspective access, even if the capacity to correctly categorize those discriminations does not. And in this respect, it places another foot in the introspective door.

Bayne then moves on to the case motivating pessimism, particularly as Schwitzgebel presents it in his Perplexities of Consciousness. He mentions the privacy problems that plague scientific attempts to utilize introspective information (Irvine provides a thorough treatment of this in her Consciousness as a Scientific Concept), but since his goal is to secure introspective reliability for philosophical purposes, he bypasses these to consider three kinds of challenges posed by Schwitzgebel in Perplexities: the Dumbfounding, Dissociation, and Introspective Variation Arguments. Once again, he’s careful to state the balanced nature of his aim, the obvious fact that

any comprehensive account of the epistemic landscape of introspection must take both the hard and easy cases into consideration. Arguably, generalizing beyond the obviously easy and hard cases requires an account of what makes the hard cases hard and the easy cases easy. Only once we’ve made some progress with that question will we be in a position to make warranted claims about introspective access to consciousness in general. 8

His charge against Schwitzgebel, then, is that even conceding his examples of local introspective unreliability, we have no reason to generalize from these to the global unreliability of introspection as a philosophical tool. Since this inference from local unreliability to global unreliability is his primary discursive target, Bayne doesn’t so much need to problematize Schwitzgebel’s challenges as to reinterpret—‘quarantine’—their implications.

So in the case of ‘dumbfounding’ (or ‘uncertainty’) arguments, Schwitzgebel reveals the epistemic limitations of introspection via a barrage of what seem to be innocuous questions. Our apparent inability to answer these questions leaves us ‘dumbfounded,’ stranded on a cognitive limit we never knew existed. Bayne’s strategy, accordingly, is to blame the questions, to suggest that dumbfounding, rather than demonstrating any pervasive introspective unreliability, simply reveals that the questions being asked possess no determinate answers. He writes:

Without an account of why certain introspective questions leave us dumbfounded it is difficult to see why pessimism about a particular range of introspective questions should undermine the epistemic credentials of introspection more generally. So even if the threat posed by dumbfounding arguments were able to establish a form of local pessimism, that threat would appear to be easily quarantined. 11

Once again, local problems in introspection do not warrant global conclusions regarding introspective reliability.

Bayne takes a similar tack with Schwitzgebel’s dissociation arguments, examples where our naïve assumptions regarding introspective competence diverge from actual performance. He points out the ambiguity between the reliability of experience and the reliability of introspection: perhaps we’re accurately introspecting mistaken experiences. If there’s no way to distinguish between these, Bayne suggests, we’ve made room for introspective optimism. He writes: “If dissociations between a person’s introspective capacities and their first-order capacities can disconfirm their introspective judgments (as the dissociation argument assumes), then associations between a person’s introspective judgments and their first-order capacities ought to confirm them” (12). What makes Schwitzgebel’s examples so striking, he goes on to argue, is precisely the fact that introspective judgments are typically effective.

And when it comes to the introspective variation argument, the claim that the chronic underdetermination that characterizes introspective theoretical disputes attests to introspective incapacity, Bayne once again offers an epistemologically fractionate picture of introspection as a way of blocking any generalization from given instances of introspective failure. He thinks such instances of introspective failure can be explained away, “[b]ut even if the argument from variation succeeds in establishing a local form of pessimism, it seems to me there is little reason to think that this pessimism generalizes” (14).

Ultimately, the entirety of his case hangs on the epistemologically fractionate nature of introspection. It’s worth noting at this point that, from a cognitive scientific point of view, the fractionate nature of introspection is all but guaranteed. Just think of the mad difference between Plato’s simple aviary, the famous metaphor he offers for memory in the Theaetetus, and the imposing complexity of memory as we understand it today. I raise this ‘mad difference’ for two reasons. First, it implies that any scientific understanding of introspection is bound to radically complicate our present understanding. Second, and even more importantly, it evidences the degree to which introspection is blind, not only to the fractionate complexity of memory, but to its own fractionate complexity as well.

For Bayne to suggest that introspection is fractionate, in other words, is for him to claim that introspection is almost entirely blind to its own nature (much as it is to the nature of memory). To the extent that Bayne has to argue the fractionate nature of introspection, we can conclude that introspection is not only blind to its own fractionate nature, it is also blind to the fact of this blindness. It is in this sense that we can assert that introspection neglects its own fractionate nature. The blindness of introspection to introspection is the implication that hangs over his entire case.

In the meantime, having posed an epistemologically plural account of introspection, he’s now on the hook to explain the details. “Why,” he now asks, “might certain types of phenomenal states be elusive in a way that other types of phenomenal states are not?” (15). Bayne does not pretend to possess any definitive answers, but he does hazard one possible wrinkle in the otherwise featureless face of introspection, the 2010 distinction that he and Maja Spener made in “Introspective Humility” between ‘scaffolded’ and ‘freestanding’ introspective judgments. He notes that those introspective judgments that seem to be the most reliable are those that seem to be ‘scaffolded’ by first-order experiences. These include the most anodyne metacognitive statements we make, where we reference our experiences of things to perspectivally situate them in the world, as in, ‘I see a tree over there.’ Those introspective judgments that seem the least reliable, on the other hand, have no such first-order scaffolding. Rather than piggy-back on first-order perceptual judgments, ‘freestanding’ judgments (the kind philosophers are fond of making) reference our experience of experiencing, as in, ‘My experience has a certain phenomenal quality.’

As that last example (cribbed from the Gertler quote above) makes plain, there’s a sense in which this distinction doesn’t do the philosophical introspective optimist any favours. (Max Engel exploits this consequence to great effect in his Open MIND reply to Bayne’s article, using it to extend pessimism into the intuition debate). But Bayne demurs, admitting that he lacks any substantive account. As it stands, he need only make the case that introspection is fractionate to convincingly block the ‘globalization’ of Schwitzgebel’s pessimism. As he writes:

perhaps the central lesson of this paper is that the epistemic landscape of introspection is far from flat but contains peaks of security alongside troughs of insecurity. Rather than asking whether or not introspective access to the phenomenal character of consciousness is trustworthy, we should perhaps focus on the task of identifying how secure our introspective access to various kinds of phenomenal states is, and why our access to some kinds of phenomenal states appears to be more secure than our access to other kinds of phenomenal states. 16

The general question of whether introspective cognition of conscious experience is possible is premature, he argues, so long as we have no clear idea of where and why introspection works and does not work.

This is where I most agree with Bayne—and where I’m most puzzled. Many things puzzle me about the analytic philosophy of mind, but nothing quite so much as the disinclination to ask what seem to me to be relatively obvious empirical questions.

In nature, accuracy and reliability are expensive achievements, not gifts from above. Short of magic, metacognition requires physical access and physical capacity. (Those who believe introspection is magic—and many do—need only be named magicians). So when it comes to deliberative introspection, what kind of neurobiological access and capacity are we presuming? If everyone agrees that introspection, whatever it amounts to, requires that the brain do honest-to-goodness work, then we can begin advancing a number of empirical theses regarding access and capacity, and how we might find these expressed in experience.

So given what we presently know, what kind of metacognitive access and capacity should we expect our brains to possess? Should we, for instance, expect them to rival the resolution and behavioural integration of our environmental capacities? Clearly not. For one, environmental cognition coevolved with behaviour and so has the far greater evolutionary pedigree—by hundreds of millions of years, in fact! As it turns out, reproductive success requires that organisms solve their surroundings, not themselves. So long as environmental challenges are overcome, they can take themselves for granted, neglect their own structure and dynamics. Metacognition, in other words, is an evolutionary luxury. There’s no way of saying how long homo sapiens has enjoyed the particular luxury of deliberative introspection (as an exaptation, the luxury of ‘philosophical reflection’ is no older than recorded history), but even if we grant our base capacity a million-year pedigree, we’re still talking about a very young, and very likely crude, system.

Another compelling reason to think metacognition cannot match the dimensionality of environmental cognition lies in the astronomical complexity of its target. As a matter of brute empirical fact, brains simply cannot track themselves the high-dimensional way they track their environments. Thus, once again, ‘Dehaene’s Law,’ the way “[w]e constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (Consciousness and the Brain, 79). The vast resources society is presently expending to cognize the brain attest to the degree to which the brain exceeds its own capacity to cognize itself in high-dimensional terms. However the brain cognizes its own operations, then, it can only do so in a radically low-dimensional way. We should expect, in other words, our brains to be relatively insensitive to their own operation—to be blind to themselves.
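To put a toy number on that complexity claim, here is a back-of-envelope sketch (entirely my own illustration): even a cartoonishly small network of binary units has more joint states than any serial tracking channel could ever enumerate.

```python
# Purely illustrative arithmetic: a toy network of just 300 binary
# units has more joint states than the ~10^80 atoms in the observable
# universe; a brain of ~86 billion neurons is combinatorially far
# beyond any internal, high-dimensional self-tracking.
n_units = 300
states = 2 ** n_units
print(f"2^{n_units} = {states:.3e} possible joint states")
```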

A third empirical reason to assume that metacognition falls short of environmental dimensionality is found in the way it belongs to the very system it tracks, and so lacks the functional independence as well as the passive and active information-seeking opportunities belonging to environmental cognition. The analogy I always like to use here is that of a primatologist sewn into a sack with a troop of chimpanzees versus one tracking them discreetly in the field. Metacognition, unlike environmental cognition, is structurally bound to its targets. It cannot move toward some puzzling item—an apple, say—peer at it, smell it, touch it, turn it over, crack it open, taste it, scrutinize the components. As embedded, metacognition is restricted to fixed channels of information that it could not possibly identify or source. The brain, you could say, is simply too close to itself to cognize itself as it is.

Viewed empirically, then, we should expect metacognitive access and capacity to be more specialized, more adventitious, and less flexible than those of environmental cognition. Given the youth of the system, the complexity of its target, and the proximity of its target, we should expect human metacognition to consist of various kluges, crude heuristics that leverage specific information to solve some specific range of problems. As Gerd Gigerenzer and the Adaptive Behaviour and Cognition Research Group have established, simple heuristics are often far more effective than optimization methods at solving problems. “As the amount of data available to make predictions in an environment shrinks, the advantage of simple heuristics over complex algorithms grows” (Hertwig and Hoffrage, “The Research Agenda,” Simple Heuristics in a Social World, 23). With complicated problems yielding little data, adding parameters to a solution can compound the chances of making mistakes. Low dimensionality, in other words, need not be a bad thing, so long as the information consumed is information enabling the solution of some problem set. This is why evolution so regularly makes use of it.
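To see the Gigerenzer point in miniature, consider the following simulation, a toy of my own devising rather than one of the ABC Group's actual studies: a one-parameter rule and a six-parameter polynomial both try to predict a noisy linear environment from a handful of observations.

```python
# Toy 'less-is-more' demonstration: with scarce, noisy data, a crude
# one-parameter heuristic out-predicts a flexible many-parameter
# model, because the flexible model fits the noise along with the signal.
import numpy as np

rng = np.random.default_rng(0)

def trial(n_train=8, n_test=200, noise=1.0):
    world = lambda x: 2.0 * x + rng.normal(0.0, noise, size=x.shape)
    x_tr, x_te = rng.uniform(0, 1, n_train), rng.uniform(0, 1, n_test)
    y_tr, y_te = world(x_tr), world(x_te)

    # Simple heuristic: one free parameter (a slope through the origin).
    slope = np.sum(x_tr * y_tr) / np.sum(x_tr * x_tr)
    err_simple = np.mean((y_te - slope * x_te) ** 2)

    # 'Complex algorithm': a six-parameter polynomial fit.
    coeffs = np.polyfit(x_tr, y_tr, deg=5)
    err_complex = np.mean((y_te - np.polyval(coeffs, x_te)) ** 2)
    return err_simple, err_complex

errs = np.array([trial() for _ in range(500)])
print("mean test error, simple heuristic:", errs[:, 0].mean())
print("mean test error, complex model:   ", errs[:, 1].mean())
```

Shrink the training sample further and the gap widens, at least in this toy, which is exactly the pattern Hertwig and Hoffrage describe.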

Given this broad-stroke picture, human metacognition can be likened to a toolbox containing multiple, special-purpose tools, each possessing specific ‘problem-ecologies,’ narrow but solvable domains that trigger their application frequently and decisively enough to have once assured the tool’s generational selection. The problem with heuristics, of course, lies in the narrowness of their respective domains. If we grant the brain any flexibility in the application of its metacognitive tools, then heuristic misapplication is a standing possibility. If we deny the brain any decisive capacity to cognize these misapplications outside their consequences (if the brain suffers ‘tool agnosia’), then we can assume these misapplications will be indistinguishable from successful applications short of those consequences.
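Here is a minimal sketch of that 'tool agnosia,' my own toy built around Gigerenzer's well-known recognition heuristic (nothing of the sort appears in Bayne): the heuristic returns answers in exactly the same format whether or not the question falls inside its home ecology.

```python
# A heuristic misapplied outside its problem-ecology produces output
# indistinguishable from a successful application; no error signal
# attaches to the answer itself.
def recognition_heuristic(options, recognized):
    """Prefer the recognized option; reliable only where recognition
    actually correlates with the target criterion."""
    for option in options:
        if option in recognized:
            return option
    return options[0]  # no recognition cue: fall back arbitrarily

recognized = {"Berlin", "Munich"}

# Inside its ecology (recognition tracks city size fairly well):
print(recognition_heuristic(["Berlin", "Bielefeld"], recognized))

# Misapplied (recognition tells us nothing about, say, city age),
# yet the answer looks exactly like a successful application:
print(recognition_heuristic(["Munich", "Erfurt"], recognized))
```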

In other words, this picture of human metacognition (which is entirely consistent with contemporary research) provides an elegant (if sobering) recapitulation and explanation of what Bayne calls the ‘epistemic landscape of introspection.’ Metacognition is fractionate because of the heuristic specialization required to decant behaviourally relevant information from the brain. The ‘peaks of security’ correspond to the application of metacognitive heuristics to matching problem-ecologies, while the ‘troughs of insecurity’ correspond to the application of metacognitive heuristics to problem-ecologies they could never hope to solve.

Since those matching problem-ecologies are practical (as we might expect, given the cultural basis of regimented theoretical thinking), it makes sense that practical introspection is quite effective, whereas theoretical introspection, which attempts to intuit the general nature of experience, is anything but. The reason the latter strikes us as so convincing—to the point of seeming impossible to doubt, no less—is simply that doubt is expensive: there’s no reason to presume we should happily discover the required error-signalling machinery awaiting any exaptation of our deliberative introspective capacity, let alone one so unsuccessful as philosophy. As I mentioned above, the experience of epistemic insufficiency always requires more information. Sufficiency is the default simply because the system has no way of anticipating novel applications, no decisive way of suddenly flagging information that was entirely sufficient for ancestral problem-ecologies and so required no flagging.

Remember how Bayne offered what I termed ‘information information’ provided by vision as a possible analogue of introspection? Visual experience cues us to the unreliability or absence of information in a number of ways, such as darkness, blurring, faintness, and so on. Why shouldn’t we presume that deliberative introspection likewise flags what can and cannot be trusted? Because deliberative introspection exapts information sufficient for one kind of practical problem-solving (Did I leave my keys in the car? Am I being obnoxious? Did I read the test instructions carefully enough?) for the solution of utterly unprecedented ontological problems. Why should repurposing introspective deliverances in this way renovate the thoughtless assumption of ‘default sufficiency’ belonging to their original purposes?
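The asymmetry can be caricatured in a few lines (my own toy model, not anything Bayne or Dehaene proposes): vision ships reliability cues alongside its content, while the exapted introspective deliverance ships none, leaving downstream consumers no option but default trust.

```python
# 'Default sufficiency' in miniature: a deliverance carrying no
# 'information information' gets treated as fully reliable, simply
# because the missing flag makes no difference downstream.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Deliverance:
    content: str
    reliability: Optional[float] = None  # vision's haze/blur/darkness cues

def consume(d: Deliverance) -> str:
    r = d.reliability if d.reliability is not None else 1.0  # default trust
    return f"{d.content!r} taken to be {r:.0%} reliable"

print(consume(Deliverance("tree at dusk", reliability=0.4)))  # vision flags the gloom
print(consume(Deliverance("my experience has quale Q")))      # no flag, so: sufficient
```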

This is the sense in which Blind Brain Theory, in the course of explaining the epistemic profile of introspection, also explodes Bayne’s case for introspective optimism. By tying the contemplative question of deliberative introspection to the empirical question of the brain’s metacognitive access and capacity, BBT makes plain the exorbitant biological cost of the optimistic case. Exhaustive, reliable intuition of anything involves a long evolutionary history, tractable targets, and flexible information access—that is, all the things that deliberative introspection does not possess.

Does this mean that deliberative introspection is a lost cause, something possessing no theoretical utility whatsoever? Not necessarily. Accidents happen. There’s always a chance that some instance of introspective deliberation could prove valuable in some way. But we should expect such solutions to be both adventitious and local, something that stubbornly resists systematic incorporation into any more global understanding.

But there’s another way, I think, in which deliberative introspection can play a genuine role in theoretical cognition—a way that involves looking at Schwitzgebel’s skeptical project as a constructive, rather than critical, theoretical exercise.

To show what I mean, it’s worth recapitulating one of the quotes Bayne selects from Perplexities of Consciousness for sustained attention:

How much of the scene are you able vividly to visualize at once? Can you keep the image of your chimney vividly in mind at the same time you vividly imagine (or “image”) your front door? Or does the image of your chimney fade as your attention shifts to the door? If there is a focal part of your image, how much detail does it have? How stable is it? Suppose that you are not able to image the entire front of your house with equal clarity at once, does your image gradually fade away towards the periphery, or does it do so abruptly? Is there any imagery at all outside the immediate region of focus? If the image fades gradually away toward the periphery, does one lose colours before shapes? Do the peripheral elements of the image have color at all before you think to assign color to them? Do any parts of the image? If some parts of the image have indeterminate colour before a colour is assigned, how is that indeterminacy experienced—as grey?—or is it not experienced at all? If images fade from the centre and it is not a matter of the color fading, what exactly are the half-faded images like? Perplexities, 36

Questions in general are powerful insofar as they allow us to cognize the yet-to-be-cognized. The slogan feels ancient to me now, but no less important: Questions are how we make ignorance visible, how we become conscious of cognitive incapacity. In effect, then, each and every question in this quote brings to light a specific inability to answer. Granting that this inability indicates missing information access, missing metacognitive capacity, or both, we can presume these questions enumerate various cognitive dimensions missing from visual imagery. Each question functions as an interrogative ‘ping,’ you could say, showing us another direction that (for many people at least) introspective inquiry cannot go—another missing dimension.
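The 'ping' idea can be rendered as a crude probe (my own toy, not Schwitzgebel's actual method): treat the introspectable report as a sparse record, and every question it cannot answer marks a missing dimension.

```python
# Each unanswered query is an interrogative 'ping' mapping a dimension
# that introspective access to visual imagery simply does not carry.
image_report = {"focal_object": "chimney", "focal_detail": "brick texture"}

questions = ["focal_object", "focal_detail", "peripheral_color",
             "fade_profile", "image_stability"]

for q in questions:
    answer = image_report.get(q)
    print(q, "->", answer if answer is not None else "dumbfounded: missing dimension")
```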

So even though Bayne and Schwitzgebel draw negative conclusions from the ‘dumbfounding’ that generally accompanies these questions, each instance actually tells us something potentially important about the limits of our introspective capacities. If Schwitzgebel had been asking these questions of a painting—Las Meninas, say—then dumbfounding wouldn’t be a problem at all. The information available, given the cognitive capacity possessed, would make answering them relatively straightforward. But even though ‘visual imagery’ is apparently ‘visual’ in the same way a painting is, the selfsame questions stop us in our tracks. Each question, you could say, closes down a different ‘degree of cognitive freedom,’ reveals how few degrees of cognitive freedom human deliberative introspection possesses for the purposes of solving visual imagery. Not much at all, as it turns out.

Note this is precisely what we should expect on a ‘blind brain’ account. Once again, simply given the developmental and structural obstacles confronting metacognition, it almost certainly consists of an ‘adaptive toolbox’ (to use Gerd Gigerenzer’s phrase), a suite of heuristic devices adapted to solve a restricted set of problems given only low-dimensional information. The brain possesses a fixed set of metacognitive channels available for broadcast, but no real ‘channel channel,’ so that it systematically neglects metacognition’s own fractionate, heuristic structure.

And this clearly seems to be what Schwitzgebel’s interrogative barrage reveals: the low dimensionality of visual imagery (relative to vision), the specialized problem-solving nature of visual imagery, and our profound inability to simply intuit as much. For some mysterious reason we can ask visual questions that for some mysterious reason do not apply to visual imagery. The ability of language to retask cognitive resources for introspective purposes seems to catch the system as a whole by surprise, confronts us with what had been hitherto relegated to neglect. We find ourselves ‘dumbfounded.’

So long as we assume that cognition requires work, we must assume that metacognition trades in low-dimensional information to solve specific kinds of problems. To the degree that introspection counts as metacognition, we should expect it to trade in low-dimensional information geared to solve particular kinds of practical problems. We should also expect it to be blind to itself, to possess neither the access nor the capacity required to intuit its own structure. Short of interrogative exercises such as Schwitzgebel’s, deliberative introspection has no inkling of how many degrees of cognitive freedom it possesses in any given context. We have to figure out, inferentially, what information is for what.

And this provides the basis for a provocative diagnosis of a good many debates in contemporary psychology and philosophy of mind. So for instance, a blind brain account implies that our relation to something like ‘qualia’ is almost certainly one possessing relatively few degrees of cognitive freedom—a simple heuristic. Deliberative introspection neglects this, and at the same time, via questioning, allows other cognitive capacities to consume the low-dimensional information available. ‘Dumbfounding’ often follows—what the ancient Greeks liked to call thaumazein. The practically minded, sniffing a practical dead end, turn away, but the philosopher famously persists, mulling the questions, becoming accustomed to them, chasing this or that inkling, borrowing many others, all of which, given the absence of any real information information, cannot but suffer from some kind of ‘only game in town effect’ upon reflection. The dumbfounding boundary is trampled to the point of imperceptibility, and neglect is confused with degrees of cognitive freedom that simply do not exist. We assume that a quale is something like an apple—we confuse a low-dimensional cognitive relationship with a high-dimensional one. What is obviously specialized, low-dimensional information becomes, for a good number of philosophers at least, a special ‘immediately self-evident’ order of reality.

Is this Adamic story really that implausible? After all, something has to explain our perpetual inability to even formulate the problem of our nature, let alone solve it. Blind Brain Theory, I would argue, offers a parsimonious and comprehensive way to extricate ourselves from the traditional mire. Not only does it explain Bayne’s ‘epistemic profile of introspection,’ it explains why this profile took so long to uncover. By reinterpreting the significance of Schwitzgebel’s ‘dumbfounding’ methods, it raises the possibility of ‘Interrogative Introspection’ as a scientific tool. And lastly, it suggests the problems that neglect foists on introspection can be generalized, that much of our inability to cognize ourselves turns on the cognitive short cuts evolution had to use to assure we could cognize ourselves at all.

The Philosopher, the Drunk, and the Lamppost

by rsbakker

A crucial variable of interest is the accuracy of metacognitive reports with respect to their object-level targets: in other words, how well do we know our own minds? We now understand metacognition to be under segregated neural control, a conclusion that might have surprised Comte, and one that runs counter to an intuition that we have veridical access to the accuracy of our perceptions, memories and decisions. A detailed, and eventually mechanistic, account of metacognition at the neural level is a necessary first step to understanding the failures of metacognition that occur following brain damage and psychiatric disorder. Stephen M. Fleming and Raymond J. Dolan, “The neural basis of metacognitive ability,” Phil. Trans. R. Soc. B (2012) 367, 1338–1349. doi:10.1098/rstb.2011.0417

As well as the degree to which we should accept the deliverances of philosophical reflection.

Philosophical reflection is a cultural achievement, an exaptation of pre-existing biocognitive capacities. It is entirely possible that, as such an exaptation, it suffers any number of cognitive short-circuits. And this could very well explain why philosophy suffers the perennial problems it does.

In other words, the empirical possibility of Blind Brain Theory cannot be doubted—no matter how disquieting its consequences seem to be. What I would like to assess here is the probability of the account being empirically substantiated.

The thesis is that traditional philosophical problem-solving continually runs afoul of illusions falling out of metacognitive neglect. The idea is that intentional philosophy has been the butt of the old joke about the police officer who stops to help a drunk searching for his keys beneath a lamppost. The punch-line, of course, is that even though the drunk lost his keys in the parking lot, he’s searching beneath the lamppost because that’s the only place he can see. The twist for the philosopher lies in the way neglect consigns the parking lot—the drunk’s whole world in fact—to oblivion, generating the illusion that the light and the lamppost comprise an independent order of existence. For the philosopher, the keys to understanding what we are essentially can be found nowhere else because they exhaust everything that is within that order. Of course the keys that this or that philosopher claims to have found take wildly different forms—they all but shout profound theoretical underdetermination—but this seems to trouble only the skeptical spoil-sports.

Now I personally think the skeptics have always possessed far and away the better position, but since they could only articulate their critiques in the same speculative idiom as philosophy, they have been every bit as easy to ignore as philosophers. But times, I hope to show, have changed—dramatically so. Intentional philosophy is simply another family of prescientific discourses. Now that science has firmly established itself within philosophy’s traditional domains, we should expect intentional philosophy to be progressively delegitimized the way all prescientific discourses have been.

To begin with, it is simply an empirical fact that philosophical reflection on the nature of human cognition suffers massive neglect. To be honest, I sometimes find myself amazed that I even need to make this argument to people. Our blindness to our own cognitive makeup is the whole reason we require cognitive science in the first place. Every single fact that the sciences of cognition and the brain have discovered is another fact that philosophical reflection is all but blind to, another ‘dreaded unknown unknown’ that has always structured our cognitive activity without our knowledge.

As Keith Frankish and Jonathan Evans write:

The idea that we have ‘two minds’ only one of which corresponds to personal, volitional cognition, has also wide implications beyond cognitive science. The fact that much of our thought and behaviour is controlled by automatic, subpersonal, and inaccessible cognitive processes challenges our most fundamental and cherished notions about personal and legal responsibility. This has major ramifications for social sciences such as economics, sociology, and social policy. As implied by some contemporary researchers … dual process theory also has enormous implications for educational theory and practice. As the theory becomes better understood and more widely disseminated, its implications for many aspects of society and academia will need to be thoroughly explored. In terms of its wider significance, the story of dual-process theorizing is just beginning. “The Duality of Mind: An Historical Perspective,” In Two Minds: Dual Processes and Beyond, 25

We are standing on the cusp of a revolution in self-understanding unlike any in human history. As they note, the process of digesting the implications of these discoveries is just getting underway—news of the revolution has just hit the streets of the capital, and the provinces will likely be a long time in hearing it. As a result, the old ways still enjoy what might be called the ‘Only-game-in-town Effect,’ but not for very long.

The deliverances of theoretical metacognition just cannot be trusted. This is simply an empirical fact. Stanislas Dehaene even goes so far as to state it as a law: “We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (Consciousness and the Brain, 79).

As I mentioned, I think this is a deathblow, but philosophers have devised a number of cunning ways to immunize themselves from this fact—philosophy is the art of rationalization, after all! If the brain (for some pretty obvious reasons) is horrible at metacognizing brain functions, then one need only insist that something more than the brain is at work. Since souls will no longer do, the philosopher switches to functions, but not any old functions. The fact that the functions of a system look different depending on the grain of investigation is no surprise: of course neurocellular level descriptions will differ from neural-network level descriptions. The intentional philosopher, however, wants to argue for a special, emergent order of intentional functions, one that happens to correspond to the deliverances of philosophical reflection. Aside from this happy correspondence, what makes these special functions so special is their incompatibility with biomechanical functions—an incompatibility so profound that biomechanical explanation renders them all but unintelligible.

Call this the ‘apples and oranges’ strategy. Now I think the sheer convenience of this view should set off alarm bells: If the science of a domain contradicts the findings of philosophical reflection, then that science must be exploring a different domain. But the picture is far more complicated, of course. One does not overthrow more than two thousand years of (apparent) self-understanding on the back of two decades of scientific research. And even absent this institutional sanction, there remains something profoundly compelling about the intentional deliverances of philosophical reflection, despite all the manifest problems. The intentionalist need only bid you to theoretically reflect, and lo, there are the oranges… Something has to explain them!

In other words, pointing out the mountain of unknown unknowns revealed by cognitive science is simply not enough to decisively undermine the conceits of intentional philosophy. I think it should be, but then I think the ancient skeptics had the better of things from the outset. What we really need, if we want to put an end to this vast squandering of intellectual resources, is to explain the oranges. So long as oranges exist, some kind of abductive case can be made for intentional philosophy. Doing this requires we take a closer look at what cognitive science can teach us about philosophical reflection and its capacity to generate self-understanding.

The fact is the intentionalist is in something of a dilemma. Their functions, they admit, are naturalistically inscrutable. Since they can’t abide dualism, they need their functions to be natural (or whatever it is the sciences are conjuring miracles out of) somehow, so whatever functions they posit (say, ones realized in the scorekeeping attitudes of communities), these have to track brain function somehow. This responsibility to cognitive scientific findings regarding their object is matched by a responsibility to cognitive scientific findings regarding their cognitive capacity. Oranges or no oranges, both their domain and their capacity to cognize that domain answer to what cognitive science ultimately reveals. Some kind of emergent order has to be discovered within the order of nature, and we have to somehow possess the capacity to reliably metacognize that emergent order. Given what we already know, I think a strong case can be made that this latter, at least, is almost certainly impossible.

Consider Dehaene’s Global Neuronal Workspace Theory of Consciousness (GNW). On his account, at any given moment the information available for conscious report has been selected from parallel swarms of nonconscious processes, stabilized, and broadcast across the brain for consumption by other swarms of other nonconscious processes. As Dehaene writes:

The brain must contain a ‘router’ that allows it to flexibly broadcast information to and from its internal routines. This seems to be a major function of consciousness: to collect the information from various processors, synthesize it, and then broadcast the result—a conscious symbol—to other, arbitrarily selected processors. These processors, in turn, apply their unconscious skills to this symbol, and the entire process may repeat a number of times. The outcome is a hybrid serial-parallel machine, in which stages of massively parallel computation are interleaved with a serial stage of conscious decision making and information routing. Consciousness and the Brain, 105

Whatever philosophical reflection amounts to, insofar as it involves conscious report it involves this ‘hybrid serial-parallel machine’ described by Dehaene and his colleagues, a model which is entirely consistent with the ‘adaptive unconscious’ (see Tim Wilson’s Strangers to Ourselves for a somewhat dated, yet still excellent overview) described in cognitive psychology. Whatever a philosopher can say regarding ‘intentional functions’ must in some way depend on the deliverances of this system.

One of the key claims of the theory, confirmed via a number of different experimental paradigms, is that access (or promotion) to the GNW is all or nothing. The insight is old: psychologists have long studied what is known as the ‘psychological refractory period,’ the way attending to one task tends to blot out or severely impair our ability to perform other tasks simultaneously. But recent research is revealing more of the radical ‘cortical bottleneck’ that marks the boundary between the massively parallel processing of multiple percepts (or interpretations thereof) and the serial stage of conscious cognition. [Marti, S., et al., A shared cortical bottleneck underlying Attentional Blink and Psychological Refractory Period, NeuroImage (2011), doi:10.1016/j.neuroimage.2011.09.063]
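For the uninitiated, a cartoon of the architecture might help. The sketch below is my own gloss on Dehaene's description, not his model: swarms of parallel processors compete, a single winner per cycle crosses the all-or-nothing bottleneck, and that winner alone is broadcast back for further nonconscious consumption.

```python
# Cartoon GNW cycle: massively parallel candidates, serial all-or-
# nothing promotion, global broadcast. Losing candidates leave no
# trace available for report; that is the 'conscious access cut.'
import random
random.seed(1)

def gnw_cycle(processors, workspace):
    # Parallel stage: every processor produces a candidate + activation.
    candidates = [(p(workspace), random.random()) for p in processors]
    # Serial stage: winner-take-all promotion through the bottleneck.
    winner, _ = max(candidates, key=lambda c: c[1])
    return winner  # broadcast: the new global workspace content

processors = [
    lambda ws: f"visual take on ({ws})",
    lambda ws: f"verbal take on ({ws})",
    lambda ws: f"mnemonic take on ({ws})",
]

workspace = "stimulus"
for _ in range(3):
    workspace = gnw_cycle(processors, workspace)
    print("broadcast:", workspace)
```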

This is important because it means that the deliverances the intentional philosopher depends on when reflecting on problems involving intentionality or ‘experience’ more generally are limited to what makes the ‘conscious access cut.’ You could say the situation is actually far worse, since conscious deliberation on conscious phenomena requires that the philosopher use the very apparatus they’re attempting to solve. In a sense they’re not only wagering that the information they require actually reaches consciousness in the first place, but that it can be recalled for subsequent conscious deliberation. The same way the scientist cannot incorporate information that doesn’t, either via direct observation or indirect observation via instrumentation, find its way to conscious awareness, the philosopher likewise cannot hazard ‘educated’ guesses regarding information that does not somehow make the conscious access cut, only twice over. In a sense, they’re peering at the remaindered deliverances of a serial straw through a serial straw–one that appears as wide as the sky for neglect! So there is a very real question of whether philosophical reflection, an artifactual form of deliberative cognition, has anything approaching access to the information it needs to solve the kinds of problems it purports to solve. Given the role that information scarcity plays in theoretical underdetermination, the perpetually underdetermined theories posed by intentional philosophers strongly suggest that the answer is no.

But if the science suggests that philosophical reflection may not have access to enough information to answer the questions in its bailiwick, it also raises real questions of whether it has access to the right kind of information. Recent research has focussed on attempting to isolate the mechanisms in the brain responsible for mediating metacognition. The findings seem to be converging on the rostrolateral prefrontal cortex (rlPFC) as playing a pivotal role in the metacognitive accuracy of retrospective reports. As Fleming and Dolan write:

A role for rlPFC in metacognition is consistent with its anatomical position at the top of the cognitive hierarchy, receiving information from other prefrontal cortical regions, cingulate and anterior temporal cortex. Further, compared with non-human primates, rlPFC has a sparser spatial organization that may support greater interconnectivity. The contribution of rlPFC to metacognitive commentary may be to represent task uncertainty in a format suitable for communication to others, consistent with activation here being associated with evaluating self-generated information, and attention to internal representations. Such a conclusion is supported by recent evidence from structural brain imaging that ‘reality monitoring’ and metacognitive accuracy share a common neural substrate in anterior PFC.  Italics added, “The neural basis of metacognitive ability,” Phil. Trans. R. Soc. B (2012) 367, 1343. doi:10.1098/rstb.2011.0417

As far as I can tell, the rlPFC is perhaps the best candidate we presently have for something like a ‘philosopher module’ [see Badre, et al., “Frontal cortex and the discovery of abstract action rules,” Neuron (2010) 66:315–326], though the functional organization of the PFC more generally remains a mystery. [Kalina Christoff’s site and Steve Fleming’s site are great places to track research developments in this area of cognitive neuroscience.] It primarily seems to be engaged by abstract relational and semantic tasks, and plays some kind of role mediating verbal and spatial information. Mapping evidence also shows that its patterns of communication to other brain regions vary as tasks vary; in particular, it seems to engage regions thought to involve visuospatial and semantic processes. [Wendelken et al., “Rostrolateral Prefrontal Cortex: Domain-General or Domain-Sensitive?” Human Brain Mapping (2011), 1–12.]

Cognitive neuroscience is nowhere close to any decisive picture of abstract metacognition, but hopefully the philosophical moral of the research is clear: whatever theoretical metacognition is, it is neurobiological. And this is just to say that the nature of philosophical reflection—in the form of, say, ‘making things explicit,’ or what have you—is not something that philosophical reflection on ‘conscious experience’ can solve! Dehaene’s law applies as much to metacognition as to any other cognitive process—as we should expect, given the cortical bottleneck and what we know of the rlPFC. Information is promoted for stabilization and broadcast from nonconscious parallel swarms to be consumed by nonconscious parallel swarms, which include the rlPFC, which in turn somehow informs further stabilizations and broadcasts. What we presently ‘experience,’ the well from which our intentional claims are drawn, somehow comprises the serial ‘stabilization and broadcast’ portion of this process—and nothing else.

The rlPFC is an evolutionary artifact, something our ancestors developed over generations of practical problem-solving. It is part and parcel of the most complicated (not to mention expensive) organ known. Assume, for the moment, that the rlPFC is the place where the magic happens, the part of the ruminating philosopher’s brain where ‘accurate intuitions’ of the ‘nature of mind and thought’ arise allowing for verbal report. (The situation is without a doubt far more complicated, but since complication is precisely the problem the philosopher faces, this example actually does them a favour). There’s no way the rlPFC could assist in accurately cognizing its own function—another rlPFC would be required to do that, requiring a third rlPFC, and so on and so on. In fact, there’s no way the brain could directly cognize its own activities in any high-dimensionally accurate way. What the rlPFC does instead—obviously one would think—is process information for behaviour. It has to earn its keep after all! Given this, one should expect that it is adapted to process information that is itself adapted to solve the kinds of behaviourally related problems faced by our ancestors, that it consists of ad hoc structures processing ad hoc information.

Philosophy is quite obviously an exaptation of the capacities possessed by the rlPFC (and the systems of which it is part), the learned application of metacognitive capacities originally adapted to solve practical behavioural problems to theoretical problems possessing radically different requirements—such as accuracy, the ability not simply to use a cognitive tool, but to reliably determine what that cognitive tool is.

Even granting the intentionalist their spooky functional order, are we to suppose, all things considered, that we just happened to have evolved the capacity to accurately intuit this elusive functional order? Seems a stretch. The far more plausible answer is that this exaptation, relying as it does on scarce and specialized information, was doomed from the outset to get far more things wrong than right (as the ancient skeptics insisted!). The far more plausible answer is that our metacognitive capacity is as radically heuristic as cognitive science suggests. Think of the scholastic jungle that is analytic and continental philosophy. Or think of the yawning legitimacy gap between mathematics (exaptation gone right) and the philosophy of mathematics (exaptation gone wrong). The oh-so-familiar criticisms of philosophy, that it is impractical, disconnected from reality, incapable of arbitrating its controversies—in short, that it does not decisively solve—are precisely the kinds of problems we might expect, were philosophical reflection an artifact of an exaptation gone wrong.

On my account it is wildly implausible that any design paradigm like evolution could deliver the kind of cognition intentionalism requires. Evolution solves difficult problems heuristically: opportunistic fixes are gradually sculpted by various contingent frequencies in its environment, which in our case, were thoroughly social. Since the brain is the most difficult problem any brain could possibly face, we can assume the heuristics our brain relies on to cognize other brains will be specialized, and that the heuristics it uses to cognize itself will be even more specialized still. Part of this specialization will involve the ability to solve problems absent any causal information: there is simply no way the human brain can cognize itself the way it cognizes its natural environment. Is it really any surprise that causal information would scuttle problem-solving adapted to solve in its absence? And given our blindness to the heuristic nature of the systems involved, is it any surprise that we would be confounded by this incompatibility for as long as we have?

The problem, of course, is that it so doesn’t seem that way. I was a Heideggerean once. I was also a Wittgensteinian. I’ve spent months parsing Husserl’s torturous attempts to discipline philosophical reflection. That version of myself would have scoffed at these kinds of criticisms. ‘Scientism!’ would have been my first cry; ‘Performative contradiction!’ my second. I was so certain of the intrinsic intentionality of human things that the kind of argument I’m making here would have struck me as self-evident nonsense. ‘Not only are these intentional oranges real,’ I would have argued, ‘they are the only thing that makes scientific apples possible.’

It’s not enough to show the intentionalist philosopher that, by the light of cognitive science, it’s more than likely their oranges do not exist. Dialectically, at least, one needs to explain how, intuitively, it could seem so obvious that they do exist. Why do the philosopher’s ‘feelings of knowing,’ as murky and inexplicable as they are, have the capacity to convince them of anything, let alone monumental speculative systems?

As it turns out, cognitive psychology has already begun interrogating the general mechanism that is likely responsible, and the curious ways it impacts our retrospective assessments: neglect. In Thinking, Fast and Slow, Daniel Kahneman cites the difficulty we have distinguishing experience from memory as the reason why we retrospectively underrate our suffering in a variety of contexts. Given the same painful medical procedure, one would expect an individual suffering for twenty minutes to report a far greater amount of pain than an individual suffering for half that time or less. Such is not the case. As it turns out, duration has “no effect whatsoever on the ratings of total pain” (380). Retrospective assessments, rather, seem determined by the average of the pain’s peak and its coda. Absent intellectual effort, you could say the default is to remove the band-aid slowly.
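The peak-end pattern Kahneman reports can be written as a two-line formula. The exactness is my simplification, since the findings are statistical tendencies rather than a strict rule, but the arithmetic makes duration neglect vivid:

```python
# Remembered pain tracks the mean of the worst moment and the final
# moment; total duration drops out of the calculation entirely.
def remembered_pain(samples):
    return (max(samples) + samples[-1]) / 2

short_procedure = [2, 6, 8]            # ends at its worst moment
long_procedure  = [2, 6, 8, 5, 3, 1]   # twice the suffering, gentler coda

print(remembered_pain(short_procedure))  # 8.0
print(remembered_pain(long_procedure))   # 4.5: more total pain, milder memory
```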

Far from being academic, this ‘duration neglect,’ as Kahneman calls it, places the physician in something of a bind. What should the physician’s goal be? The reduction of the pain actually experienced, or the reduction of the pain remembered? Kahneman provocatively frames the problem as a question of choosing between selves, the ‘experiencing self’ that actually suffers the pain and the ‘remembering self’ that walks out of the clinic. Which ‘self’ should the physician serve? Kahneman sides with the latter. “Memories,” he writes, “are all we get to keep from our experience of living, and the only perspective that we can adopt as we think about our lives is therefore that of the remembering self” (381). If the drunk has no recollection of the parking lot, then as far as his decision making is concerned, the parking lot simply does not exist. Kahneman writes:

Confusing experience with the memory of it is a compelling cognitive illusion—and it is the substitution that makes us believe a past experience can be ruined. The experiencing self does not have a voice. The remembering self is sometimes wrong, but it is the one that keeps score and governs what we learn from living, and it is the one that makes decisions. What we learn from the past is to maximize the qualities of our future memories, not necessarily of our future experience. This is the tyranny of the remembering self. 381

Could it be that this is what philosophers are doing? Could they, in the course of defining and arranging their oranges, simply be confusing their memory of experience with experience itself? So in the case of duration neglect, information regarding the duration of suffering makes no difference in the subject’s decision making because that information is nowhere to be found. Given the ubiquity of similar effects, Kahneman generalizes the insight into what he calls WYSIATI, or What-You-See-Is-All-There-Is:

An essential design feature of the associative machine is that it represents only activated ideas. Information that is not retrieved (even unconsciously) from memory might as well not exist. [Our nonconscious cognitive system] excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have. 85

Kahneman’s WYSIATI, you could say, provides a way to explain Dehaene’s Law regarding the chronic overestimation of awareness. The cortical bottleneck renders conscious access captive to the facts as they are given. If information regarding things like the duration of suffering in an experimental context isn’t available, then that information simply makes no difference for subsequent behaviour. Likewise, if information regarding the reliability of an intuition or ‘feeling of knowing’ (aptly abbreviated as ‘FOK’ in the literature!) isn’t available, then that information simply makes no difference—at all.

Thus the illusion of what I’ve been calling cognitive sufficiency these past few years. Kahneman lavishes the reader in Thinking, Fast and Slow with example after example of how subjects perennially confuse the information they do have with all the information they need:

You cannot help dealing with the limited information you have as if it were all there is to know. You build the best possible story from the information available to you, and if it is a good story, you believe it. Paradoxically, it is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle. Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance. 201

You could say his research has isolated the cognitive conceit that lies at the heart of Plato’s cave: absent information regarding the low-dimensionality of the information they have available, shadows become everything. Like the parking lot, the cave, the chains, the fire, even the possibility of looking from side to side simply do not exist for the captives.

As the WYSIATI rule implies, neither the quantity nor the quality of the evidence counts for much in subjective confidence. The confidence that individuals have in their beliefs depends mostly on the quality of the story they can tell about what they see, even if they see little. We often fail to allow for the possibility that evidence that should be critical to our judgment is missing—what we see is all there is. Furthermore, our associative system tends to settle on a coherent pattern of activation and suppresses doubt and ambiguity. 87-88

Could the whole of intentional philosophy amount to varieties of story-telling, ‘theory-narratives’ that are compelling to their authors precisely to the degree they are underdetermined? The problem as Kahneman outlines it is twofold. For one, “[t]he human mind does not deal well with nonevents” (200) simply because unavailable information is information that makes no difference. This is why deception, or any instance of controlling information availability, allows us to manipulate our fellow drunks so easily. For another, “[c]onfidence is a feeling, which reflects the coherence of the information and the cognitive ease of processing it,” and “not a reasoned evaluation of the probability that this judgment is correct” (212). So all that time I was reading Heidegger nodding, certain that I was getting close to finding the key, I was simply confirming parochial assumptions. Once I had bought in, coherence was automatic, and the inferences came easy. Heidegger had to be right—the key had to be beneath his lamppost—simply because it all made so much remembered sense ‘upon reflection.’
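One way to operationalize the suspicion, in a model that is entirely mine and not Kahneman's: score confidence as the fraction of retrieved evidence-pairs that cohere, and watch certainty peak precisely where evidence is scarcest.

```python
# Confidence-as-coherence toy: with one or two activated ideas,
# agreement is nearly automatic; richer evidence breeds dissonance.
from itertools import combinations

def confidence(evidence, tol=1.0):
    """Fraction of evidence pairs that 'cohere' (numeric estimates
    within `tol` of one another)."""
    pairs = list(combinations(evidence, 2))
    if not pairs:
        return 1.0  # a lone activated idea coheres with itself, trivially
    return sum(abs(a - b) <= tol for a, b in pairs) / len(pairs)

print(confidence([5.0]))                 # 1.00: knowing little feels certain
print(confidence([5.0, 5.5]))            # 1.00: a tidy two-piece story
print(confidence([5.0, 5.5, 9.0, 1.0]))  # ~0.17: more evidence, less coherence
```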

Could it really be as simple as this? Now given philosophers’ continued insistence on making claims despite their manifest institutional incapacity to decisively arbitrate any of them, neglect is certainly a plausible possibility. But the fact is this is precisely the kind of problem we should expect given that philosophical reflection is an exaptation of pre-existing cognitive capacities.

Why? Because what researchers term ‘error awareness,’ like every other human cognitive capacity, does not come cheap. To be sure, the evolutionary premium on error-detection is high to the extent that adaptive behaviour is impossible otherwise. It is part and parcel of cognition. But philosophical reflection is, once again, an exaptation of pre-existing metacognitive capacities, a form of problem-solving that has no evolutionary precedent. Research has shown that metacognitive error-awareness is often problematic even when applied to problems, such as assessing memory accuracy or behavioural competence in retrospect, that it has likely evolved to solve. [See Wessel, “Error awareness and the error-related negativity: evaluating the first decade of evidence,” Front Hum Neurosci. 2012; 6: 88. doi: 10.3389/fnhum.2012.00088, for a GNW-related review.] So if conscious error-awareness is hit or miss regarding adaptive activities, we should expect that, barring some cosmic stroke of evolutionary good fortune, it pretty much eludes philosophical reflection altogether. Is it really surprising that the only erroneous intuitions philosophers seem to detect with any regularity are those belonging to their peers?

We’re used to thinking of deficits in self-awareness in pathological terms, as something pertaining to brain trauma. But the picture emerging from cognitive science is positively filled with instances of non-pathological neglect, metacognitive deficits that exist by virtue of our constitution. The same way researchers can game the heuristic components of vision to generate any number of different visual illusions, experimentalists are learning how to game the heuristic components of cognition to isolate any number of cognitive illusions, ways in which our problem-solving goes awry without the least conscious awareness. In each of these cases, neglect plays a central role in explaining the behaviour of the subjects under scrutiny, the same way clinicians use neglect to explain the behaviour of their impaired patients.

Pathological neglect strikes us as so catastrophically consequential in clinical settings simply because of the behavioural aberrations of those suffering it. Not only does it make a profoundly visible difference, it makes a difference that we can only understand mechanistically. It quite literally knocks individuals from the problem-ecology belonging to socio-cognition into the problem-ecologies belonging to natural cognition. Socio-cognition, as radically heuristic, leans heavily on access to certain environmental information to function properly. Pathological neglect denies us that information.

Non-pathological neglect, on the other hand, completely eludes us because, insofar as we share the same neurophysiology, we share the same ‘neglect structure.’ The neglect suffered is both collective and adaptive. As a result, we only glimpse it here and there, and are more cued to resolve the problems it generates than ponder the deficits in self-awareness responsible. We require elaborate experimental contexts to draw it into sharp focus.

All Blind Brain Theory does is provide a general theoretical framework for these disparate findings, one that can be extended to a great number of traditional philosophical problems—including the holy grail, the naturalization of intentionality. As of yet, the possibility of such a framework remains at most an inkling to those at the forefront of the field (something that only speculative fiction authors dare consider!) but it is a growing one. Non-pathological neglect is not only a fact, it is ubiquitous. Conceptualized the proper way, it provides a very parsimonious means of dispatching a great number of ancient and new conundrums…

At some point, I think all these mad ramblings will seem painfully obvious, and the thought of going back to tackling issues of cognition neglecting neglect will seem all but unimaginable. But for the nonce, it remains very difficult to see—it is neglect we’re talking about, after all!—and the various researchers struggling with its implications lie so far apart in terms of expertise and idiom that none can see the larger landscape.

And what is this larger landscape? If you swivel human cognitive capacity across the continuum of human interrogation, you find a drastic plunge in the dimensionality, and a corresponding spike in the specialization, of the information we can access for the purposes of theorization as soon as brains are involved. Metacognitive neglect means that things like ‘person’ or ‘rule’ or what have you seem as real as anything else in the world when you ponder them, but in point of fact, we have only our intuitions to go on, the most meagre deliverances lacking provenance or criteria. And this is precisely what we should expect given the rank inability of the human brain to cognize itself or others in the high-dimensional manner it cognizes its environments.

This is the picture that traditional, intentional philosophy, if it is to maintain any shred of cognitive legitimacy moving forward, must somehow accommodate. Since I see traditional philosophy as largely an unwitting artifact of this landscape, I think such an accommodation will result in dissolution, the realization that philosophy has largely been a painting class for the blind. Some useful works have been produced here and there to be sure, but not for any reason the artists responsible suppose. So I would like to leave you with a suggestive parallel, a way to compare the philosopher with the sufferer of Anton’s Syndrome, the notorious form of anosognosia that leaves blind patients completely convinced they can see. So consider:

First, the patient is completely blind secondary to cortical damage in the occipital regions of the brain. Second, these lesions are bilateral. Third, the patient is not only unaware of her blindness; she rejects any objective evidence of her blindness. Fourth, the patient offers plausible, but at times confabulatory responses to explain away any possible evidence of her failure to see (e.g., “The room is dark,” or “I don’t have my glasses, therefore how can I see?”). Fifth, the patient has an apparent lack of concern (or anosodiaphoria) over her neurological condition. Prigatano and Wolf, “Anton’s Syndrome and Unawareness of Partial or Complete Blindness,” The Study of Anosognosia, 456.

And compare to:

First, the philosopher is metacognitively blind secondary to various developmental and structural constraints. Second, the philosopher is not aware of his metacognitive blindness, and is prone to reject objective evidence of it. Third, the philosopher offers plausible, but at times confabulatory responses to explain away evidence of his metacognitive incapacity. And fourth, the philosopher often exhibits an apparent lack of concern for his less than ideal neurological constitution.

The Introspective Peepshow: Consciousness and the ‘Dreaded Unknown Unknowns’

by rsbakker

On February 12th, 2002, Secretary of Defense Donald Rumsfeld was famously asked in a DoD press conference about the American government’s failure to provide evidence regarding Iraq’s alleged provision of weapons of mass destruction to terrorist groups. His reply, which was lampooned in the media at the time, has since become something of a linguistic icon:

[T]here are known knowns; there are things we know that we know. There are known unknowns; that is to say there are things that we know we don’t know. But there are also unknown unknowns; there are things we don’t know we don’t know.

In 2003, this comment earned Rumsfeld the ‘Foot in Mouth Award’ from the British-based Plain English Campaign. Despite the scorn and hilarity it occasioned in mainstream culture at the time, the concept of unknown unknowns, or ‘unk-unk’ as it is sometimes called, has enjoyed long-standing currency in military and engineering circles. Only recently has it found its way to business and economics (in large part due to the work of Daniel Kahneman), where it is often referred to as the ‘dreaded unknown unknown.’ For enterprises involving risk, the reason for this dread is quite clear. Even in daily life, we speak of being ‘blind-sided,’ of things happening ‘out of the blue’ or coming ‘out of left field.’ Our institutions, like our brains, have evolved to manage and exploit environmental regularities. Since knowing everything is impossible, we have at our disposal any number of rehearsed responses, precooked ways to deal with ‘known unknowns,’ or irregularities that are regular enough to be anticipated. Unknown unknowns refer to those events that find us entirely unprepared–often with catastrophic consequences.

Given that few human activities are quite so sedate or ‘risk free’ as consciousness research and the philosophy of mind, unk-unk might seem out of place in such contexts. But as I hope to show, such is not the case. The unknown unknown, I want to argue, has a profound role to play in developing our understanding of consciousness. Unfortunately, since the unknown unknown itself constitutes an unknown unknown within cognitive science, let alone consciousness research, the route required to make my case is necessarily circuitous. As John Dewey (1958) observed, “We cannot lay hold of the new, we cannot even keep it before our minds, much less understand it, save by the use of ideas and knowledge we already possess” (viii-ix).

Blind-siding readers rarely pays. With this in mind, I begin with a critical consideration of Peter Carruthers’ (forthcoming, 2011, 2009a, 2009b, 2008) ‘innate self-transparency thesis,’ the account of introspection entailed by his more encompassing ‘mindreading first thesis’ (or as he calls it in The Opacity of the Mind (2011), Interpretative Sensory-Access Theory (ISA)). I hope to accomplish two things with this reading: 1) illustrate the way explanations in the cognitive sciences so often turn on issues of informatic tracking; and 2) elaborate an alternative to Carruthers’ innate self-transparency thesis that makes, in a preliminary fashion at least, the positive role played by the unknown unknown clear.

Since what I propose subsequent to this first leg of the article can only sound preposterous short of this preliminary, I will commit the essayistic sin (and rhetorical virtue) of leaving my final conclusions unstated–as a known unknown, worth mere curiosity, perhaps, but certainly not dread.

.

Follow the Information

Explanations in cognitive science generally adhere to the explanatory paradigm found in the life sciences: various operations are ‘identified’ and a variety of mechanisms, understood as systems of components or ‘working parts,’ are posited to discharge them (Bechtel and Abrahamsen 2005, Bechtel 2008). In cognitive science in particular, the operations tend to be various cognitive capacities or conscious phenomena, and the components tend to be representations embedded in computational procedures that produce more representations. Theorists continually tear down and rebuild what are in effect virtual ‘explanatory machines,’ using research drawn from as many related fields as possible to warrant their formulations. Whether the operational outputs are behavioural, epistemic, or phenomenal, these virtual machines inevitably involve asking what information is available for what component system or process.

Let’s call this process of information tracking the ‘Follow the Information Game’ (FIG).

In a superficial sense, playing FIG is not all that different from playing detective. In the case of criminal investigations, evidence is assembled and assessed, possible motives are considered, various parties to the crime are identified, and an overarching narrative account of who did what to whom is devised and, ideally, tested. In the case of cognitive investigations, evidence is likewise assembled and assessed, possible evolutionary ‘motives’ are considered, a number of contributing component mechanisms are posited, and an overarching mechanistic account of what does what for what is devised for possible experimental testing. The ‘doing’ invariably involves discharging some computational function, processing and disseminating information for subsequent computation. The theorist quite literally ‘follows the information’ from mechanism to mechanism, using a complex stew of evolutionary rationales, experimental results, and neuropathological case studies to warrant the various specifics of the resulting theoretical account.
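To make the game concrete, here is a minimal sketch of what ‘following the information’ amounts to mechanically. Everything in it is my own illustrative invention (the component names, the Mechanism type, and the available_to helper are hypothetical, drawn from no one’s actual model): posit components, stipulate their informatic access, then trace what information can reach which mechanism.

    from dataclasses import dataclass, field

    @dataclass
    class Mechanism:
        name: str
        inputs: list = field(default_factory=list)  # information sources this component can access

    # A cartoon informatic map, loosely in the spirit of the mindreading
    # debate discussed below (components and wiring are hypothetical):
    components = {
        "perception":  Mechanism("perception"),
        "belief":      Mechanism("belief", inputs=["perception"]),       # outputs not broadcast
        "mindreading": Mechanism("mindreading", inputs=["perception"]),  # no line to 'belief'
    }

    def available_to(target):
        """Follow the information backward from a component."""
        seen, stack = set(), list(components[target].inputs)
        while stack:
            source = stack.pop()
            if source not in seen:
                seen.add(source)
                stack.extend(components[source].inputs)
        return seen

    print(available_to("mindreading"))  # {'perception'} -- belief outputs never arrive

The theoretical disputes, on this cartoon, are disputes about the edges: redraw an arrow and you have a different informatic map, which is to say, a different theory.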

We see this quite clearly in the mindreading versus metacognition debate, where the driving question is one of how we attribute propositional attitudes (PAs) to ourselves as opposed to others. Do we have direct ‘metacognitive’ access to our beliefs and desires? Is mindreading a function of metacognition? Is metacognition a function of mindreading? Or are they simply different channels of a singular mechanism? Any answer to these questions requires mapping the flow of information, which is to say, playing FIG. This is why, for example, Peter Carruthers’ “How we know our own minds” and the following Open Peer Commentary read like transcripts of the diplomatic feuding behind the Treaty of Versailles. It’s an issue of mapping, but instead of arguing over coal mines in Silesia and ports on the Baltic, the question is one of how the brain’s informatic spoils are divided.

Carruthers holds forth a ‘mindreading first’ account, arguing that our self-attributions of PAs rely on the same interpretative mechanisms we use to ‘mind read’ the PAs of others:

There is just a single metarepresentational faculty, which probably evolved in the first instance for purposes of mindreading… In order to do its work, it needs to have access to perceptions of the environment. For if it is to interpret the actions of others, it plainly requires access to perceptual representations of those actions. Indeed, I suggest that, like most other conceptual systems, the mindreading system can receive as input any sensory or quasi-sensory (e.g., imagistic or somatosensory) state that gets “globally broadcast” to all judgment-forming, memory-forming, desire-forming, and decision-making systems. (2009b, 3-4)

In this article, he provides a preliminary draft of the informatic map he subsequently fleshes out in The Opacity of the Mind. He takes Baars’ (1988) Global Workspace Theory of Consciousness as a primary assumption, which requires him to distinguish between information that is and is not ‘globally broadcast.’ Consistent with the massive modularity endorsed in The Architecture of the Mind (2006), he posits a variety of informatically ‘encapsulated’ mechanisms operating ‘subpersonally’ or outside conscious access. The ‘mindreading system,’ not surprisingly, is accorded the most attention. Other mechanisms, when not directly recruited from preexisting cognitive scientific sources, are posited to explain various folk-psychological categories, such as belief. The tenability of these mechanisms turns on what might be called the ‘Accomplishment Assumption,’ the notion that all aspects of mental life that can be (or as in the case of folk psychology, already are) individuated are the accomplishments of various discrete neural mechanisms.

Given these mechanisms, Carruthers makes a number of ‘access inferences,’ each turning on the kinds of information required for each mechanism to discharge its function. To interpret the actions of others, the mindreading system needs access to information regarding those actions, which means it needs access to those systems dedicated to gathering that information. Given the apparently radical difference between self and other interpretation, Carruthers needs to delineate the kind of access characteristic of each:

Although the mindreading system has access to perceptual states, the proposal is that it lacks any access to the outputs of the belief-forming and decision-making mechanisms that feed off those states. Hence, self-attributions of propositional attitude events like judging and deciding are always the result of a swift (and unconscious) process of self-interpretation. However, it isn’t just the subject’s overt behavior and physical circumstances that provide the basis for the interpretation. Data about perceptions, visual and auditory imagery (including sentences rehearsed in “inner speech”), patterns of attention, and emotional feelings can all be grist for the self-interpretative view. (2009b, 4)

So the brain does possess belief mechanisms and the like, but they are informatically segregated from the suite of mechanisms responsible for generating the self-attribution of PAs. The former, it seems, do not ‘globally broadcast,’ and so their machinations must be gleaned the same way our brains glean the machinations of other brains, via their interpretative mindreading systems. Since, however, the mindreading system has no access to any information globally broadcast by other brains, he has to concede that the mindreading system is privy to additional information in instances of self-attribution, just not any involving direct access to the mechanisms responsible for PAs. So he lists what he presumes is available.

The problem, of course, is that it just doesn’t feel that way. Assumptions of unmediated access or self-transparency, Carruthers writes, “seem to be almost universal across times and cultures” (2011, 15), not to mention “widespread in philosophy.” If we are forced to rely on our environmentally-oriented mindreading systems to interpret, as opposed to intuit, the function of our own brains, then why should we have any notion of introspective access to our PAs, let alone the presumption of unmediated access? Why presume an incorrigible introspective access that we simply do not have?

Carruthers offers what might be called a ‘less is more account.’ The mindreading system, he proposes, represents its self-application as direct rather than interpretative. Our sense of self-transparency is the product of a mechanism. Once we have a mechanism, however, we require some kind of evolutionary story warranting its development. Carruthers argues that the presumption of incorrigible introspective access spares the brain a complicated series of computations pertaining to reliability without any real gain in reliability. “The transparency of our minds to ourselves,” he explains in an interview, “is a simplifying but false heuristic…” Citing Gigerenzer and Todd (1999), he points out that heuristics, even deceptive ones, regularly out-perform more fine-grained computational processes simply because of the relation between complexity and error. So long as self-interpretation via the mindreading system is generally reliable, this ‘Cartesian assumption’ or ‘self-transparency thesis’ (Carruthers 2008) possesses the advantage of simplicity to the extent that it relieves the need for computational estimations of interpretative reliability. The functional adequacy of a direct access model, in other words, more than compensates for its epistemic inadequacy, once one considers the metabolic cost and ‘robustness,’ as they say in ecological rationality circles, of the former versus the latter.

This explanation provides us with a clear-cut example of what I called the Accomplishment Assumption above. Given that ‘direct introspective access’ seems to be a discrete feature of mental life, it seems plausible to suppose that some discrete neural mechanism must be responsible for producing it. But there is a simpler explanation, one that draws out some of the problematic consequences of the ‘Follow the Information Game’ as it is presently played in cognitive science. A clue to this explanation can be found when Eric Schwitzgebel (2011) considers the selfsame problem:

Why, then, do people tend to be so confident in their introspective judgments, especially when queried in a casual and trusting way? Here is my guess: Because no one ever scolds us for getting it wrong about our experience and we never see decisive evidence of our error, we become cavalier. This lack of corrective feedback encourages a hypertrophy of confidence. [emphasis added] (130)

Given his skepticism of ‘boxological’ mechanistic explanation (2011, 2012), Schwitzgebel can circumvent Carruthers’ dilemma (the mindreading system represents agent access either as direct or as interpretative) and simply pose the question in a far less structured way. Why do we possess unwarranted confidence in our introspective judgements? Well, no one tells us otherwise. But this simply begs the question of why. Why should we require ‘social scolding’ to ‘see decisive evidence of our error’? Why can’t we just see it on our own?

The easy answer is that, short of different perspectives, the requisite information is simply not available to us. The problem, in Schwitzgebel’s characterization, is that we have only a single perspective on our conscious experience, one lacking access to information regarding the limitations of introspection. In other words, the near universal presumption of self-transparency is an artifact of the near universal lack of any information otherwise. On this account, you could say the traditional, prescientific assumption of self-transparency is not so different from the traditional, prescientific assumption of geocentrism. We experience ‘vection,’ a sense of bodily displacement, whenever a large portion of our visual field moves. Short of that perceived motion (or other vestibular effects), a sense of motionlessness is the cognitive default. This was why the accumulation of so much (otherwise inaccessible) scientific knowledge was required to overturn geocentrism: not because we possessed an ‘innate representation’ of a motionless earth, but because of the interplay between our sensory limitations and our evolved capacity to detect motion.

The self-transparency assumption, on this account, is simply a kind of ‘noocentrism,’ the result of a certain limiting relationship between the information available and the cognitive systems utilized. The problem with geocentrism was that we were all earthbound, literally limited to what meagre extraterrestrial information our native senses could provide. That information, given our cognitive capacities, made geocentrism intuitively obvious. Thus the revolutionary significance of Galileo and his Dutch Spyglass. The problem with noocentrism, on the other hand, is that we are all brainbound, literally limited to what neural information our introspective ‘sense’ can provide. As it turns out, that information, given our cognitive capacities, makes noocentrism intuitively obvious. Why? Because short of any Neural Spyglass, we lack any information regarding the insufficiency of the information at our disposal. We assume self-transparency because there is literally no other assumption to make.

One need only follow the information. Adopting a dual process perspective (Stanovich, 1999; Stanovich and Toplak, 2011), the globally broadcast information accessed for System 2 deliberation contains no information regarding its interpretative (and thus limited) status. Given that global broadcasting or integration operates within fixed bounds, System 2 has no way of testing, let alone sourcing, the information it receives. Thus, one cannot know whether the information available for introspection is insufficient in this or that respect. And since the information accessed is never flagged for insufficiencies (and why should it be, when it is generally reliable?), sufficiency will always be the assumptive default.
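The structural point can be put in mechanical terms. The following toy sketch is entirely my own (the Broadcast type and deliberate function are hypothetical illustrations, not anyone’s model of global broadcasting): broadcast content simply carries no fields that could represent its own coverage or provenance, so nothing downstream can compute with them.

    from dataclasses import dataclass

    @dataclass
    class Broadcast:
        content: dict  # whatever information was integrated
        # Deliberately no `coverage`, `source`, or `reliability` fields:
        # the point is that such flags are absent, not merely ignored.

    def deliberate(broadcast, question):
        """Answer from available content only. Confidence tracks internal
        coherence, never coverage, because coverage is nowhere represented."""
        answer = broadcast.content.get(question, "seems transparent")
        confidence = 1.0 if question in broadcast.content else 0.9  # the story still coheres
        return answer, confidence  # no path by which missing information can lower this

    seen = Broadcast(content={"what am I feeling?": "calm"})
    print(deliberate(seen, "what am I feeling?"))   # ('calm', 1.0)
    print(deliberate(seen, "how do I know that?"))  # ('seems transparent', 0.9)

The consumer cannot distinguish ‘all there is’ from ‘all that was broadcast.’ Sufficiency is asserted nowhere in the system; it is simply what remains when insufficiency cannot be expressed.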

Given that Carruthers’ innate self-transparency account is one that he has developed with great care and ingenuity over the course of several years, a full rebuttal of the position would require an article in its own right. It’s worth noting, however, that many of the advantages that he attributes to his self-transparency mechanism also fall out of the default self-transparency account proposed here, with the added advantage of exacting no metabolic or computational cost whatsoever. You could say it’s a ‘more for even less’ account.

But despite its parsimony, there’s something decidedly strange about the notion of default self-transparency. Carruthers himself briefly entertains the possibility in The Opacity of the Mind, stating that “[a] universal or near-universal commitment to transparency may then result from nothing more than the basic principle or ‘law’ that when something appears to be the case one is disposed to form the belief that it is the case, in the absence of countervailing considerations or contrary evidence” (15). How might this ‘basic principle or law’ be characterized? Carruthers, I think, shies from pursuing this line of questioning simply because it presses FIG into hitherto unexplored territory.

Parsimony alone motivates a sustained consideration of what lies behind default self-transparency. Emily Pronin (2009), for instance, in her consideration of the ‘introspection illusion,’ draws an important connection between the assumption of self-transparency and the so-called ‘bias blind spot,’ the fact that biases we find obvious in others are almost entirely invisible to ourselves. She details a number of studies where subjects were even more prone to exhibit this ‘blindness’ when provided opportunities to introspect. Now why are these biases invisible to us? Should we assume, as Carruthers does in the case of self-transparency, that some mechanism or mechanisms are required to represent our intuitions as unbiased in each case? Or should we exercise thrift and suppose that something structural is implicit in each?

In what follows, I propose to pursue the latter possibility, to argue that what I called ‘default sufficiency’ above is an inevitable consequence of mechanistic explanation, or FIG, once we appreciate the systematic role informatic neglect plays in human cognition.

.

The Invisibility of Ignorance

Which brings us to Daniel Kahneman. In a New York Times (2011, October 19) piece entitled “Don’t Blink! The Hazards of Confidence,” he writes of his time in the Psychology Branch of the Israeli Army, where he was tasked with evaluating candidates for officer training by observing them in a variety of tests designed to isolate soldiers’ leadership skills. His evaluations, as it turned out, were almost entirely useless. But what surprised him was the way knowing this seemed to have little or no impact on the confidence with which he and his fellows submitted their subsequent evaluations, time and again. He was so struck by the phenomenon that he would go on to study it as the ‘illusion of validity,’ a specific instance of the general role the availability of information seems to play in human cognition–or as he later terms it, What-You-See-Is-All-There-Is, or WYSIATI.

The idea, quite simply, is that because you don’t know what you don’t know, you tend, in many contexts, to think you know all that you need to know. As he puts it in Thinking, Fast and Slow:

An essential design feature of the associative machine is that it represents only activated ideas. Information that is not retrieved (even unconsciously) from memory might as well not exist. [Our automatic cognitive system] excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have. (2011, 85)

As Kahneman shows, this leads to myriad errors in reasoning, including our peculiar tendency in certain contexts to be more certain about our interpretations the less information we have available. The idea is so simple as to be platitudinal: only the information available for cognition can be cognized. Other information, as Kahneman says, “might as well not exist” for the systems involved. Human cognition, it seems, abhors a vacuum.
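The perverse arithmetic here, that less information can yield more confidence, is easy to make concrete. The toy below is my own construction, not Kahneman’s (the coherence formula is an arbitrary stand-in): if confidence tracks only the agreement among whatever evidence happens to be activated, then sampling less of a conflicted record produces a tidier, more confident story.

    import statistics

    def confidence(activated_evidence):
        """Coherence-as-confidence: agreement among the available items only.
        Items not retrieved 'might as well not exist'; they enter nowhere."""
        if len(activated_evidence) < 2:
            return 1.0  # a single data point is perfectly self-consistent
        spread = statistics.pstdev(activated_evidence)
        return 1.0 / (1.0 + spread)  # higher agreement, higher confidence

    full_record = [0.9, 0.2, 0.7, 0.1, 0.8]  # the evidence that exists
    retrieved   = [0.9, 0.8]                 # the evidence that was activated

    print(round(confidence(full_record), 2))  # ~0.75: conflict visible, confidence drops
    print(round(confidence(retrieved), 2))    # ~0.95: a tidier story from less information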

The problem with platitudes, however, is that they are all too often overlooked, even when, as I shall argue in this case, their consequences are spectacularly profound. In the case of informatic availability, one need only look to clinical cases of anosognosia to see the impact of what might be called domain specific informatic neglect, the neuropathological loss of specific forms of information. Given a certain, complex pattern of neural damage, many patients suffering deficits as profound as lateralized paralysis, deafness, even complete blindness, appear to be entirely unaware of the deficit. Perhaps because of the informatic bandwidth of vision, visual anosognosia, or ‘Anton’s Syndrome,’ is generally regarded as the most dramatic instance of the malady. Prigatano (2010) enumerates the essential features of the syndrome as follows:

First, the patient is completely blind secondary to cortical damage in the occipital regions of the brain. Second, these lesions are bilateral. Third, the patient is not only unaware of her blindness; she rejects any objective evidence of her blindness. Fourth, the patient offers plausible, but at times confabulatory responses to explain away any possible evidence of her failure to see (e.g., “The room is dark,” or “I don’t have my glasses, therefore how can I see?”). Fifth, the patient has an apparent lack of concern (or anosodiaphoria) over her neurological condition. (456)

These symptoms are almost tailor-made for FIG. Obviously, the blindness stems from the occlusion of raw visual information. The second-order ‘blindness,’ the patient’s inability to ‘see’ that they cannot see, turns, one might suppose, on the unavailability of information regarding the unavailability of visual information. At some crucial juncture, the information required to process the lack of visual information has gone missing. As Kahneman might say, since System 1 is dedicated to the construction of ‘the best possible story’ given only the information it has, the patient confabulates, utterly convinced they can see even though they are quite blind.

Anton’s Syndrome, in other words, can be seen as a neuropathological instance of WYSIATI. And WYSIATI, conversely, can be seen as a non-neuropathological version of anosognosia. And both, I want to argue, are analogous to the default self-transparency thesis I offered in lieu of Carruthers’ innate self-transparency thesis above. Consider the following ‘translation’ of Prigatano’s symptoms, only applied to what might be called ‘Carruthers’ Syndrome’:

First, the philosopher is introspectively blind to his PAs secondary to various developmental and structural constraints. Second, the philosopher is not aware of his introspective blindness, and is prone to reject objective evidence of it. Third, the philosopher offers plausible, but at times confabulatory responses to explain away evidence of his inability to introspectively access his PAs. And fourth, the philosopher often exhibits an apparent lack of concern for his less than ideal neurological constitution.

Here we see how the default self-transparency thesis I offered above is capable of filling the explanatory shoes of Carruthers’ innate self-transparency thesis: it simply falls out of the structure of cognition. In FIG terms, what philosophers call ‘introspection’ possibly provides some combination of impoverished information, skewed information, or (what amounts to the same) information matched to cognitive systems other than those employed in deliberative cognition, without–and here’s the crucial twist–providing information to this effect. Our sense of self-transparency, in other words, is a kind of ‘unk-unk effect,’ what happens when we can’t see that we can’t see. In the absence of information to the contrary, what is globally broadcast (or integrated) for System 2 deliberative uptake, no matter how attenuated, seems to become everything there is to apprehend.

But what does it mean to say that default self-transparency ‘falls out of the structure of cognition’? Isn’t this, for instance, a version of ‘belief perseverance’? Prima facie, at least, something like Keith Stanovich’s (1999) ‘knowledge projection argument’ might seem to offer an explanation, the notion that “in a natural ecology where most of our prior beliefs are true, projecting our beliefs onto new data will lead to faster accumulation of knowledge” (Sa, 1999, 506). But as the analogy to Kahneman’s WYSIATI and Anton’s Syndrome should make clear, something considerably more profound than the ‘projection of prior beliefs’ seems to be at work here. The question is what.

Consider the following: On Carruthers’ innate self-transparency account, the assumption seems to be that short of the mindreading system telling us otherwise, we would know that something hinky is afoot. But how? To paraphrase Plato, how could we, having never seen otherwise, know that we were simply guessing at a parade of shadows? What kind of cognitive resources could we draw on? We couldn’t source the information back to the mindreading system. Neither could we compare it with some baseline–some introspective yardstick of informatic sufficiency. In fact, it’s actually difficult to imagine how we might come to doubt introspectively accessed information at all, short of regimented, deliberative inquiry.

So then why does Carruthers seem to make the opposite assumption? Why does he assume that we would know short of some representational device telling us otherwise?

To answer this question we first need to appreciate the ubiquity of ‘unk-unk effects’ in the natural world. The exploitation of cognitive scotomas or blind spots has shaped the evolution of entire species, including our own. Consider the apparently instinctive nature of human censoriousness, the implicit understanding that managing the behaviour of others requires managing the information they have available. Consider mimicry or camouflage. Or consider ‘obligate brood parasites’ such as the cuckoo, which lays its eggs in the nests of other birds to be raised to maturity by them. Looked at in purely biomechanical terms, these are all examples of certain organic systems exploiting (by operating outside) the detection/response thresholds of other organic systems. Certainly the details of these interactions remain a work in progress, but the principle is not at all mysterious. One might say the same of Anton’s syndrome or anosognosia more generally: disabling certain devices systematically impacts the capacities of the system in some dramatic ways, including deficit detection. The lack of information constrains computation, constrains cognition, period. It seems pretty straightforward, mechanically speaking.

So why, then, does Anton’s jar against our epistemic intuitions the way it does? Why do we want to assume that somehow, even if we experienced the precise pattern of neural damage, we would be the magical exception, that we would say, “Aha! I only think I see!”?

Because when we are blind to our blindnesses, we think we see, either actually or potentially, all that there is to be seen. Or as Kahneman would put it, because of WYSIATI. We think we would be the one Anton’s patient who would actually cognize their loss of sight, in other words, for the very same reason the Anton’s patient is convinced he can still see! The lack of information not only constrains cognition, it constrains cognition in ways that escape cognition. We possess, not a representational presumption of introspective omniscience, but a structural inability to cognize the limits of metacognition.

You might say introspection is a kind of anosognosiac.

So why does Carruthers assume the mindreading system needs an incorrigibility device? The Accomplishment Assumption forces his hand, certainly. He thinks he has an apparently discrete intuition–self-transparency–that has to be generated somehow. But in explaining away the intuition he is also paradoxically serving it, because even if we agree with Carruthers, we nonetheless assume we would know something is up if incorrigibility wasn’t somehow signalled. There’s a sense, in other words, in which Carruthers’ argument against self-transparency appeals to it!

Now this broaches the question of how informatic neglect bears on our epistemic intuitions more generally. My goal here, however, is simply to illustrate, through an account of the role it plays in introspection, that informatic neglect has a pivotal role to play in our understanding of cognition. Suffice to say the ‘basic principle or law’ that Carruthers considers in passing is actually more basic than the ‘disposition to believe in the absence of countervailing considerations.’ Our cognitive systems simply cannot allow, to use Kahneman’s terms, for information they do not have. This is a brute fact of natural information processing systems.

Sufficiency is the default because information, understood as systematic differences making systematic differences, is effective. This is why, for instance, unknowns must be known (represented as unknowns) to effect changes in behaviour. And this is what makes research on cognitive biases and the neuropathologies of neglect so unsettling: they clearly show the way we are mere mechanisms, cognitive systems causally bound to the information available. If the informatic and cognitive limits of introspection are not available for introspection (and how could they be?), then introspection will seem, curiously, limitless, no matter how severe the actual limits may be.
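A sketch of this brute fact, again my own hypothetical illustration rather than anything from the literature: in a causal system, information that never arrives produces exactly the same state transition as no information at all, so only an explicit representation of absence can make a missing signal make a difference.

    def update(state, message):
        """State changes only through differences that actually arrive."""
        if message is None:   # nothing received...
            return state      # ...nothing changes; silence is causally inert
        return {**state, **message}

    s = {"alarm": False}
    print(update(s, None))             # {'alarm': False}
    print(update(s, {"alarm": True}))  # {'alarm': True}

    # Only an explicit token of absence, e.g. {"sensor_down": True}, could
    # let a missing signal effect a change: unknowns must be known unknowns.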

The potential severity of those limits remains to be seen.

.

Introspection and the Bayesian Brain

Since unknown unknowns offer FIG nothing to follow, it should perhaps come as no surprise that the potential relevance of unk-unks has itself remained an unknown unknown in cognitive science. The idea proposed here is that ‘naive introspection’ be viewed as a kind of natural anosognosia, as a case where we think we see, even though we are largely blind. It stands, therefore, squarely in the ‘introspective unreliability’ camp most forcefully defended by Eric Schwitzgebel (2007, 2008, 2011a, 2011b, 2012). Jacob Hohwy (2011, 2012), however, has offered a novel defence of introspective reliability via a sustained consideration of Karl Friston’s (2006, 2012, for an overview) free energy elaboration of the Bayesian brain hypothesis, an approach which has recently been making inroads due to the apparent comprehensiveness of its explanatory power.

Hohwy (2011) argues that the introspective unreliability suggested by Schwitzgebel is in fact better explained by phenomenological variability. Introspection only appears as unreliable as it does on Schwitzgebel’s account because it assumes a relatively stable phenomenology. “The evidence,” Hohwy writes, “can be summarized like this: everyday or ‘naive’ introspection tells us that our phenomenology is stable and certain but, surprisingly, calm and attentive introspection tells us our phenomenology is not stable and certain, rather it is variable and uncertain” (265). In other words, either ‘attentive introspection’ is unreliable and phenomenology is stable, or ‘naive introspection’ is unreliable and phenomenology is in fact variable.

Hohwy identifies at least three sources of potential phenomenological variability on Friston’s free energy account: 1) attenuation of the ‘prediction error landscape’ through ‘inferences’ that cancel out predictive success and allow unpredicted input to ascend; 2) change through ‘agency’ and movement; and 3) increase in precision and gain via attention. Thus, he argues “[i]f the brain is this kind of inference-machine, then it is a fundamental expectation that there is variability in the phenomenology engendered by perceptual inferences, and to which introspection in turn has access” (270).

The problem with saving introspective reliability by arguing phenomenal variability, however, is that it becomes difficult to understand what in operational terms is exactly being saved. Is the target too quick? Or is the tracking too slow? Hohwy can adduce evidence and arguments for the variability of conscious experience, and Schwitzgebel can adduce evidence and arguments for the unreliability of introspection, but there is a curious sense in which their conclusions are the same: in a number of respects conscious experience eludes introspective cognition.

Setting aside this argument, the real value in Hohwy’s account lies in his consideration of what might be called introspective applicability and introspective interference. Regarding the first, applicability, Hohwy is concerned with distinguishing those instances where the researcher’s request, ‘Please, introspect,’ is warranted and where it is ‘suboptimal.’ He discusses the so-called ‘default mode network,’ the systems of the brain engaged when the subject’s thoughts and imagery are detached from the world, as opposed to the systems engaged when the subject is directly involved with his or her environment. He then argues that the variance in introspective reliability one finds between experiments can be explained by whether the mental tasks involve the default mode as opposed to mental tasks involving the environmental mode. Tasks involving the default mode evince greater reliability when compared to tasks involving the environmental mode, he suggests, simply because the request to introspect is profoundly artificial in the latter.

His argument, in other words, is that introspection, as an adaptive, evolutionary artifact, is not a universally applicable form of cognition, and that the apparent unreliability of introspection is potentially a product of researchers asking subjects to apply introspection ‘out of bounds,’ in ways that it simply was not designed to be used. In ecological rationality terms (Todd and Gigerenzer, 2012), one might say introspection is a specialized cognitive tool (or collection of tools), a heuristic like any other, and as such will only function properly to the degree to which it is matched to its ‘ecology.’ This possibility raises a host of questions. If introspection, far from being the monolithic, information-maximizing faculty assumed by the tradition, is actually a kind of cognitive tool box, a collection of heuristics adapted to discharge specific functions, then we seem to be faced with the onerous task of identifying the tools and matching them to the appropriate tasks.

Regarding introspective interference, the question, to paraphrase Hohwy, is whether introspection changes or leaves phenomenal states as they are (262). In the course of discussing the likelihood that introspection involves a plurality of processes pertaining to different domains, he provides the following footnote:

Another tier can potentially be added to this account, directed specifically at the cognitive mechanisms underpinning introspection itself. If introspection is itself a type of internal predictive inference taking phenomenal states as input, then introspective inference would be subject to the similar types of prediction error dynamics as perceptual inference itself. In this way introspective inference about phenomenality would add variability to the already variable phenomenality. This sketch of an approach to introspection is attractive because it treats introspection as also a type of unconscious inference; however, it remains to be seen if it can be worked out in satisfactory detail and I do not here want to defend introspection by subscribing to a particular theory about it. (270)

By subscribing to Friston’s free energy account, Hohwy is committed to an account that conceives the brain as a mechanism that extracts information regarding the causal structure of its environment via the sensory effects of that environment. As Hohwy (2012) puts it, a ‘problem of representation’ follows from this, since the brain is stranded with sensory effects and so has no direct access to causes. As a result it needs to establish causal relations de novo, as he puts it. Sensory input contains patterns as well as noise, the repetition of which allows the formation of predictions, which can be ‘tested’ against further repetitions. Prediction error minimization (PEM) allows the system to automatically adapt to real causal patterns in the environment, which can then be said to ‘supervise’ the system. The idea is that the brain contains a hierarchy of ascending PEM levels, beginning with basic sensory and causal regularities, and with the ‘harder to predict’ signals being passed upward, ultimately producing representations of the world possessing ‘causal depth.’ All these levels exhibit ‘lateral connectivity,’ allowing the refinement of prediction via ‘contextual information.’
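A drastically simplified sketch of the updating scheme just described, offered as my own toy illustration rather than Friston’s formalism (the update rule, learning rate, and three-level setup are all arbitrary stand-ins): each level predicts the level beneath it, errors drive updates, and the residue ascends the hierarchy.

    def pem_step(estimates, signal, lr=0.5):
        """One pass of error-driven updating up a small hierarchy. Level 0
        predicts the raw signal; each higher level predicts the level beneath.
        Only the residual errors are passed upward."""
        target = signal
        errors = []
        for i, estimate in enumerate(estimates):
            error = target - estimate             # prediction error at this level
            estimates[i] = estimate + lr * error  # minimize it
            errors.append(error)
            target = estimates[i]                 # the harder-to-predict residue ascends
        return estimates, errors

    # A repeating environmental regularity is quickly 'explained away':
    levels = [0.0, 0.0, 0.0]
    for _ in range(20):
        levels, errors = pem_step(levels, signal=1.0)
    print([round(x, 3) for x in levels])  # all near 1.0; residual errors near zero

Note what the sketch never represents: its own update machinery. The ‘medial’ causal structure doing the predicting appears nowhere among the estimates, which is precisely the structural point pursued below.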

Although the free energy account is not an account of consciousness, it does seem to explain what Floridi (2011) calls the ‘one dimensionality of experience,’ the way, as he writes, “experience is experience, only experience, and nothing but experience” (296). If the brain is a certain kind of Bayesian causal inference engine, then one might expect the generative models it produces to be utterly lacking any explicit neurofunctional information, given the dedication of neural structure and function to minimizing environmental surprise. One might expect, in other words, that the causal structure of the brain will be utterly invisible to the brain, that it will remain, out of structural necessity, a dreaded unknown unknown–or unk-unk.

The brain, on this kind of prediction error minimization account, simply has to be ‘blind’ to itself. And this is where, far from being ‘attractive’ as Hohwy suggests, the mere notion of ‘introspection’ modelled on prediction error minimization becomes exceedingly difficult to understand. Does introspection (or the plurality of processes we label as such) proceed via hierarchical prediction error minimization from sensory effects to build generative models of the causal structure of the human brain? Almost certainly not. Why? Because as a free energy minimizing mechanism (or suite of mechanisms), introspection would seem to be thoroughly hobbled for at least four different reasons:

• 1) Functional dependence: On the free energy account, the human brain distills the causal structure of its environments from the sensory effects of that causal structure. One might, on this model, isolate two distinct vectors of causality, one, which might be called the ‘lateral,’ pertaining to the causal structure of the environment, and another, which might be called the ‘medial,’ pertaining to the causal structure of sensory inputs and the brain. As mentioned above, the brain can only model the lateral vector of environmental causal structure by neglecting the medial vector of its own causal structure. This neglect requires that the brain enjoy a certain degree of functional independence from the causal structure of its environment, simply because ‘medial interference’ will necessarily generate ‘lateral noise,’ thus rendering the causal structure of the environment more difficult, if not impossible, to model. The sheer interconnectivity of the brain, however, would likely render substantial medial interference difficult for any introspective device (or suite of devices) to avoid.
• 2) Structural immobility: Proximity complicates cognition. To get an idea of the kind of modelling constraints any neurally embedded introspective device would suffer, think of the difference between two anthropologists trying to understand a preliterate tribesman from the Amazon, the one ranging freely with her subject in the field, gathering information from a plurality of sources, the other locked with him in a coffin. Since it is functionally implicated–or brainbound–relative to its target, the ability of any introspective device (or suite of devices) to engage in ‘active inference’ would be severely restricted. On Friston’s free energy account the passive reception of sensory input is complemented by behavioural outputs geared to maximizing information from a variety of positions within the organism’s environment, thus minimizing the likelihood of ‘perspectival’ or angular illusions, false inferences due to the inability to test predictions from alternate angles and positions. Geocentrism is perhaps the most notorious example of such an illusion. Given structural immobility, one might suppose, any introspective device (or suite of devices) would suffer ‘phenomenal’ analogues to this and other illusions pertaining to limits placed on exploratory information-gathering.
• 3) Cognitive resources: If we assume that human introspective capacity is a relatively recent evolutionary adaptation, we might expect any introspective device (or suite of devices) to exploit preexisting cognitive resources, which is to say, cognitive systems primarily adapted to environmental prediction error minimization. For instance, one might argue that both (1) and (2) fairly necessitate the truth of something like Carruthers’ mindreading account, particularly if (as seems to be the case) mindreading antedates introspection. Functional dependence and structural immobility suggest that we are actually in a better position mechanically to accurately predict the behaviour of others than ourselves, as indeed a growing body of evidence indicates (Carruthers (2009) provides an excellent overview). Otherwise, given our apparent ability to attend to the whole of experience, does it make sense, short of severe evolutionary pressure, to presume the evolution of entirely novel cognitive systems adapted to the accurate modelling of second-order, medial information? It seems far more likely that access to this information was incremental across generations, and that it was initially selected for the degree to which it proved advantageous given our preexisting suite of environmentally oriented cognitive abilities.
• 4) Target complexity: Any introspective device (or suite of devices) modelled on the PEM (or, for that matter, any other mechanistic) account must also cope with the sheer functional complexity of the human brain. It is difficult to imagine, particularly given (1), (2), and (3) above, how the tracking that results could avoid suffering out-and-out astronomical ‘resolution deficits’ and distortions of various kinds (a toy illustration of such a deficit follows this list).
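The following sketch of a ‘resolution deficit’ is, once more, my own hypothetical illustration (the dimensions and the aggregates reported are arbitrary): a low-dimensional metacognitive summary of a high-dimensional state cannot represent what its own compression discards.

    import random

    DIM = 10_000  # a toy stand-in; the real disproportion is vastly larger

    state = [random.gauss(0, 1) for _ in range(DIM)]

    def metacognize(state):
        """Report a few coarse aggregates; everything else is neglected."""
        mean = sum(state) / len(state)
        return (mean, max(state), min(state))  # three numbers standing in for ten thousand

    summary = metacognize(state)
    print(summary)
    # Nothing in the summary encodes that 9,997 degrees of freedom were dropped,
    # nor which ones: the deficit is invisible from inside the summary.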

The picture these complicating factors paint is sobering. Any introspective device (or suite of devices) modelled on free energy Bayesian principles would be almost fantastically crippled: neurofunctionally embedded (which is to say, functionally entangled and structurally imprisoned) in the most complicated machinery known, accessing information for environmentally biased cognitive systems. Far from what Hohwy supposes, the problems of applicability and interference, when pursued through a free energy lens, at least, would seem to preclude introspection as a possibility.

But there is another option, one that would be unthinkable were it not for the pervasiveness and profundity of the unk-unk effect: that this is simply what introspection is, a kind of near blindness that we confuse for brilliant vision, simply because it’s the only vision we know.

The problem facing any mechanistic account of introspection can be generalized as the question of information rendered and cognitive system applied: to what extent is the information rendered insufficient, and to what extent is the cognitive system activated misapplied? This, I would argue, is the great fork in the FIG road. On the ‘information rendered’ side of the issue, informatic neglect means the assumption of sufficiency. We have no idea, as a rule, whether we have the information we need for effective deliberation or not. One need only consider the staggering complexity of the brain–complex enough to stymie a science that has puzzled through the origins of the universe in the meantime–to realize the astronomical amounts of information occluded by metacognition. On the ‘cognitive system applied’ side, informatic neglect means the assumption of universality. We have no idea, as a rule, whether we’re misapplying ‘introspection’ or not. One need only consider the heuristic nature of human cognition, the fact that heuristics are adaptive and so matched to specific sets of problems, to realize that introspective misapplications, such as those argued by Hohwy, are likely an inevitability.

This is the turn where unknown unknowns earn their reputation for dread. Given the informatic straits of introspection, what are the chances that we, blind as we are, have anything approaching the kind of information we require to make accurate introspective judgments regarding the ‘nature’ of mind and consciousness? Given the heuristic limitations of introspection, what are the chances that we, blind as we are, somehow manage to avoid colouring far outside the cognitive lines? Is it fair to assume that the answer is, ‘Not good’?

Before continuing to consider this question in more detail, it’s worth noting how this issue of informatic availability and cognitive applicability becomes out-and-out unavoidable once you acknowledge the problem of the ‘dreaded unknown unknowns.’ If the primary symptom of patients suffering neuropathological neglect is the inability to cognize their cognitive deficits, then how do we know that we don’t suffer from any number of ‘natural’ forms of metacognitive neglect? The obvious answer is, We don’t. Could what we call ‘philosophical introspection’ simply be a kind of mitigated version of Anton’s Syndrome? Could this be the reason why we find consciousness so stupendously difficult to understand? Given millennia of assuming the best of introspection and finding only perplexity, perhaps, finally, the time has come to assume the worst, and to reconceptualize the problematic of consciousness in terms of privation, distortion, and neglect.

.

Conclusion: Introspection, Tangled and Blind

Cognitive science and philosophy of mind suffer from a profound scotoma, a blindness to the structural role blindness plays in our intuitive assumptions. As we saw in passing, FIG actually plays into this blindness, encouraging theorists and researchers to conceive the relationship between information and experience exclusively in what I called Accomplishment terms. If self-transparency is the ubiquitous assumption, then it seems to follow that some mechanism possessing some ‘self-transparency representation’ must be responsible. Informatic neglect, however, allows us to see it in more parsimonious, structural terms, as a positive feature of human cognition possessing no discrete neurofunctional correlate. And this, I would argue, counts as a game-changer as far as FIG is concerned. The possibility that various discrete features of cognition and consciousness could be structural expressions of various kinds of informatic neglect not only rewrites the rules of FIG, it drastically changes the field of play.

That FIG needs to be sensitive to informatic neglect I take as uncontroversial. Informatic neglect seems to be one of those peculiar issues that everyone acknowledges but never quite sees, one that goes without saying because it goes unseen. Schwitzgebel (2012), for instance, provides a number of examples of the complications and ambiguities attending ‘acts of introspection’ to call attention to the artificial division of introspective and non-introspective processes, and in particular, to what might be called the ‘transparency problem,’ the way judgments about experience effortlessly slip into judgments about the objects/contents of experience. Given this welter of obscurities, complicating factors, not to mention the “massive interconnection of the brain,” he advocates what might be called a ‘tangled’ account of introspective cognitive processes:

What we have, or seem to have, is a cognitive confluence of crazy spaghetti, with aspects of self-detection, self-shaping, self-fulfilment, spontaneous expression, priming and association, categorical assumptions, outward perception, memory, inference, hypothesis testing, bodily activity, and who only knows what else, all feeding into our judgments about current states of mind. To attempt to isolate a piece of this confluence as the introspective process – the one true introspective process, though influenced by, interfered with, supported by, launched or halted by, all the others – is, I suggest, like trying to find the one way in which a person makes her parenting decisions… (19)

If you accept his conclusion as a mere possibility (or, as I would argue, a distinct probability), you implicitly accept much of what I’m saying here regarding informatic neglect. You accept that introspection could be massively plural while appearing to be unitary. You accept that introspection could be skewed and distorted while appearing to be the very rule. How could this be, short of informatic neglect? Recall Pronin’s (2009) ‘bias blind spots,’ or Hohwy’s (2011) mismatched ‘plurality of processes.’ How could it be that we swap between cognitive systems oblivious, with nothing, no intuition, no feel, to demarcate any transitions, let alone their applicability? As I hope should be clear, this question is simply a version of Carruthers’ question from above: How could it be we once unanimously thought that introspection was incorrigible? Both questions ask the same thing of introspection, namely, To what extent are the various limits of introspection available to introspection?

The answer, quite simply, is that they are not. Introspection is out-and-out blind to its internal structure, its cognitive applicability, and its informatic insufficiencies–let alone to its neurofunctionality. To the extent that we fail to recognize these blindnesses, we are effectively introspective anosognosiacs, simply hoping that things are ‘just so.’ And this is just to say that informatic neglect, once acknowledged, constitutes a genuine theoretical crisis, for philosophy of mind as well as for cognitive science, insofar as their operational assumptions turn on interpretations of information gleaned, by hook or by crook, from ‘introspection.’

Of course, the ‘problem of introspection’ is nothing new (in certain circles, at least). The literature abounds with attempts to ‘sanitize’ introspective data for scientific consumption. Given this, one might wonder what distinguishes informatic neglect from the growing army of experimental confounds already identified. Perhaps the appropriate methodological precautions will allow us to quarantine the problem. Schooler and Schreiber (2004), for instance, offer one such attempt to ‘massage’ FIG in such a way as to preserve the empirical utility of introspection. After considering a variety of ‘introspective failures,’ they pin the bulk of the blame on what they call ‘translation dissociations’ between consciousness and meta-consciousness, the idea being that the researcher’s demand, ‘Please, introspect,’ forces the subject to translate information available for introspection into action. They categorize three kinds of translation dissociations: 1) detection, where the ‘signal’ to be introspected is too weak or ambiguous; 2) transformation, where tasks “require intervening operations for which the system is ill-equipped” (32); and 3) substitution, where the information rendered has no connection to the information experimentally targeted. Once these ‘myopias’ are identified, the assumption is, methodologies can be designed to act as corrective lenses.

The problem that informatic neglect poses for FIG, however, is far and away more profound. To see this, one need only consider the dichotomy of ‘consciousness versus metaconsciousness,’ and the assumption that there is some fact of the matter pertaining to the former that is in principle accessible to the latter. The point isn’t that no principled distinction can be made between the two, but rather that even if it can, the putative target, consciousness, is every bit as susceptible to informatic neglect as any metaconscious attempt to cognize it. The assumption is simply this: Information that finds itself globally broadcast or integrated will not, as a rule, include information regarding its ‘limits.’ Insofar as we can assume this, we can assume that informatic neglect isn’t so much a ‘problem of introspection’ as it is a problem afflicting consciousness as a whole.

Our sketch of Friston’s Bayesian brain above demonstrated why this must be the case. Simply ask: What would the brain require to accurately model itself from within itself? On the PEM account, the brain is a dedicated causal inference engine, as it must be, given the difficulties of isolating the causal structure of its environment from sensory effects. This means that the brain has no means of modelling its own causal structure, short of either 1) analogizing from brains found in its environment, or 2) developing some kind of onboard ‘secondary inference’ system, one which, as was argued above, we should expect would face a number of dramatic informatic and cognitive obstacles. Functionally entangled with, structurally immured in, and heuristically mismatched to the most complicated machinery known, such a secondary inference system, one might expect, would suffer any number of deficits, all the while assuming itself incorrigible simply because it lacks any direct means of detecting otherwise.
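A one-line gloss of the formalism may help to fix ideas; what follows is a standard simplification of the variational scheme presented in Friston, Kilner, and Harrison (2006), not a quotation of it. Writing s for sensory states, \vartheta for their hidden environmental causes, and q(\vartheta) for the brain’s internal ‘recognition density,’ free energy can be expressed as

F(s, q) = -\ln p(s) + D_{KL}\left[ q(\vartheta) \,\|\, p(\vartheta \mid s) \right] \geq -\ln p(s),

an upper bound on the ‘surprise’ of sensory states, since the Kullback-Leibler divergence is never negative. Minimizing F pushes q(\vartheta) toward the posterior p(\vartheta \mid s): the brain comes to infer the hidden causes of its senses without ever contacting those causes directly. Note that \vartheta ranges over environmental causes; nothing in the scheme hands the brain a comparably high-dimensional model of the inferring machinery itself.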

Consciousness could very well be a cuckoo, an imposter with ends or functions all its own, and we would never be able to intuit otherwise. As we have seen, from the mechanistic standpoint this has to be a possibility. And given this possibility, informatic neglect plainly threatens all our assumptions. Once again: What would the brain require to model itself from within itself? What evolutionary demands were answered how? Bracket, as best you can, your introspective assumptions, and ask yourself how many ways these questions can be cogently answered. Far more than is friendly to our intuitive assumptions–these little blind men who wander out of the darkness telling fantastic and incomprehensible tales.

Even apparent boilerplate intuitions like efficacy become moot. The argument that the brain is generally efficacious is trivial. Given that the targets of introspective tracking are systematically related to the function of the brain, informatic neglect (and the illusion of sufficiency in particular) suggests that what we introspect or intuit will evince practical efficacy no matter how drastically its actual neural functions differ from or even contradict our manifest assumptions. Neurofunctional dissociations, as unknown unknowns, simply do not exist for metacognition. “[T]he absence of representation,” as Dennett (1991) famously writes, “is not the same as the representation of absence” (359). Since the ‘unk-unk effect’ has no effect, cognition is stranded with assumptive sufficiency on the one hand, and the efficacy of our practices on the other. Informatic neglect, in other words, means that our manifest intuitions (not to mention our traditional assumptions) of efficacy are all but worthless. The question of the efficacy of what philosophers think they intuit or introspect is what it has always been: a question that only a mature neuroscience can resolve. And given that nothing biases intuition or introspection toward ‘friendly’ outcomes over unfriendly ones, we need to grapple with the fact that any future neuroscience is far more likely to be antagonistic to our intuitive, introspective assumptions than otherwise. There are far more ways for neurofunctionality to contradict our manifest and traditional assumptions than to rescue them. And perhaps this is precisely what we should expect, given the dismal history of traditional discourses once science colonizes their domain.

It is worth noting that a priori arguments simply beg the question, since it is entirely possible (and, given the free energy account, probable) that evolution stranded us with suboptimal metacognitive capacities. One might simply ask, for instance: where do our intuitions regarding the a priori come from?

Evolutionary arguments, on the other hand, cut both ways. Everyone agrees that our general metacognitive capacities are adaptations of some kind, but adaptations for what? The accurate second-order appraisals of cognitive structure or ‘mind’ more generally? Seems unlikely. As far as we know, our introspective capacities could be the result of very specific evolutionary demands that required only gross distortions to be discharged. What need did our ancestors have for ‘theoretical descriptions of the mental’? Given informatic neglect (and the spectre of ‘Carruthers’ Syndrome’), evolutionary appeals would actually seem to count against the introspectionist, insofar as any story told would count as ‘just so,’ and thus serve to underscore the improbability of that story.

Again, the two questions to be asked are: What would the brain require to model itself from within itself? What evolutionary demands were answered how? Informatic neglect, the dreaded unknown unknown, allows us to see how many ways these questions can be answered. By doing so, it makes plain the dramatic extent of our anosognosia in thinking that we had won the magical introspection lottery.

Short of default self-transparency, why would anyone trust in any intuitions incompatible with those that underwrite the life sciences? If it is the case that evolution stranded us with just enough second-order information and cognitive resources to discharge a relatively limited repertoire of processes, then perhaps the last two millennia of second-order philosophical perplexity should not surprise us. Maybe we should expect that science, when it finally provides a detailed picture of informatic availability and cognitive applicability, will be able to diagnose most traditional philosophical problematics as the result of various, unavoidable cognitive illusions pertaining to informatic depletion, distortion and neglect. Then, perhaps, we will at last be able to see the terrain of perennial philosophical problems as a kind of ‘free energy landscape’ sustained by the misapplication of various, parochial cognitive systems to insufficient information. Perhaps noocentrism, like biocentrism and geocentrism before it, will become the purview of historians, a third and final ‘narcissistic wound.’

.

References

Armor, D. and Taylor, S. (1998). Situated optimism: specific outcome expectancies and self-regulation. In M. P. Zanna (ed.), Advances in Experimental Social Psychology. 30. 309-379. New York, NY: Academic Press.

Baars, B. (1988). A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press.

Bakker, S. (2012). The last magic show: a blind brain theory of the appearance of consciousness. Retrieved from http://www.academia.edu/1502945/The_Last_Magic_Show_A_Blind_Brain_Theory_of_the_Appearance_of_Consciousness

Bechtel, W. and Abrahamsen, A. (2005). Explanation: a mechanist alternative. Studies in History and Philosophy of Biological and Biomedical Sciences. 36. 421-441.

Bechtel, W. (2008). Mental Mechanisms: Philosophical Perspectives on Cognitive Neuroscience. New York, NY: Psychology Press.

Carruthers, P. (forthcoming). On knowing your own beliefs: a representationalist account. Retrieved from http://www.philosophy.umd.edu/Faculty/pcarruthers/On%20knowing%20your%20own%20beliefs.pdf [In Nottelman (ed.). New Essays on Belief: Structure, Constitution and Content. Palgrave Macmillan]

Carruthers, P. (2011). The Opacity of Mind: An Integrative Theory of Self-Knowledge. Oxford: Oxford University Press.

Carruthers, P. (2009a). Introspection: divided and partly eliminated. Philosophy and Phenomenological Research. 80(1). 76-111.

Carruthers, P. (2009b). How we know our own minds: the relationship between mindreading and metacognition. Behavioral and Brain Sciences. 1-65. doi:10.1017/S0140525X09000545

Carruthers, P. (2008). Cartesian epistemology: is the theory of the self-transparent mind innate? Journal of Consciousness Studies. 15(4). 28-53.

Carruthers, P. (2006). The Architecture of the Mind: Massive Modularity and the Flexibility of Thought. Oxford: Clarendon Press.

Dennett, D. C. (2002). How could I be wrong? How wrong could I be? Journal of Consciousness Studies. 9. 1-4.

Dennett, D. C. (1991). Consciousness Explained. Boston, MA: Little Brown.

Dewey, J. (1958). Experience and Nature. New York, NY: Dover Publications.

Ehrlinger, J., Gilovich, T., and Ross, L. (2005). Peering into the bias blind spot: people’s assessments of bias in themselves and others. Personality and Social Psychology Bulletin, 31. 680-692.

Floridi, L. (2011). The Philosophy of Information. Oxford: Oxford University Press.

Friston, K. (2012). A free energy principle for biological systems. Entropy, 14. doi: 10.3390/e14112100.

Friston, K., Kilner, J., and Harrison, L. (2006). A free energy principle for the brain. Journal of Physiology – Paris, 100(1-3). 70-87.

Gigerenzer, G., Todd, P. and the ABC Research Group. (1999). Simple Heuristics that Make Us Smart. Oxford: Oxford University Press.

Heilman, K. and Harciarek, M. (2010). Anosognosia and anosodiaphoria of weakness. In G. P. Prigatano (ed.), The Study of Anosognosia. 89-112. Oxford: Oxford University Press.

Helweg-Larsen, M. and Shepperd, J. (2001). Do moderators of the optimistic bias affect personal or target risk estimates? A review of the literature. Personality and Social Psychology Review, 5. 74-95.

Hohwy, J. (2012). Attention and conscious perception in the hypothesis testing brain. Frontiers in Psychology, 3(96). 1-14. doi: 10.3389/fpsyg.2012.00096.

Hohwy, J. (2011). Phenomenal variability and introspective reliability. Mind & Language, 26(3). 261-286.

Huang, G. T. (2008). Is this a unified theory of the brain? The New Scientist. (2658). 30-33.

Hurlburt, R. T. and Schwitzgebel, E. (2007). Describing Inner Experience? Proponent Meets Skeptic. Cambridge, MA: MIT Press.

Irvine, E. (2012). Consciousness as a Scientific Concept: A Philosophy of Science Perspective. New York, NY: Springer.

Kahneman, D. (2011, October 19). Don’t blink! The hazards of confidence. The New York Times. Retrieved from http://www.nytimes.com/2011/10/23/magazine/dont-blink-the-hazards-of-confidence.html?pagewanted=all&_r=0

Kahneman, D. (2011). Thinking, Fast and Slow. Toronto, ON: Doubleday Canada.

Lopez, J. K., and Fuxjager, M. J. (2012). Self-deception’s adaptive value: effects of positive thinking and the winner effect. Consciousness and Cognition. 21. 315-324.

Prigatano, G. and Wolf, T. (2010). Anton’s Syndrome and unawareness of partial or complete blindness. In G. P. Prigatano (ed.), The Study of Anosognosia. 455-467. Oxford: Oxford University Press.

Pronin, E. (2009). The introspection illusion. In M. P. Zanna (ed.), Advances in Experimental Social Psychology, 41. 1-68. Burlington: Academic Press.

Sa, W. C., West, R. F. and Stanovich, K. E. (1999). The domain specificity and generality of belief bias. Journal of Educational Psychology, 91(3). 497-510.

Schooler, J. W., and Schreiber, C. A. (2004). Experience, meta-consciousness, and the paradox of introspection. Journal of Consciousness Studies. 11. 17-39.

Schwitzgebel, E. (2012). Introspection, what? In D. Smithies and D. Stoljar (eds.), Introspection and Consciousness. Oxford: Oxford University Press.

Schwitzgebel, E. (2011a). Perplexities of Consciousness. Cambridge, MA: MIT Press.

Schwitzgebel, E. (2011b). Self-Ignorance. In J. Liu and J. Perry (eds.), Consciousness and the Self. Cambridge, MA: Cambridge University Press.

Schwitzgebel, E. (2008). The unreliability of naive introspection. Philosophical Review, 117(2). 245-273.

Sklar, A. Y., Levy, N., Goldstein, A., Mandel, R., Maril, A., and Hassin, R. R. (2012). Reading and doing arithmetic nonconsciously. Proceedings of the National Academy of Sciences. 1-6. doi: 10.1073/pnas.1211645109.

Stanovich, K. E. (1999). Who is Rational? Studies of Individual Differences in Reasoning. Mahwah, NJ: Lawrence Erlbaum Associates.

Stanovich, K. E. and Toplak, M. E. (2012). Defining features versus incidental correlates of Type 1 and Type 2 processing. Mind and Society. 11(1). 3-13.

Taylor, S. and Brown, J. (1988). Illusion and well-being: a social psychological perspective on mental health. Psychological Bulletin, 103. 193-210.

There are known knowns. (2012, November 7). In Wikipedia. Retrieved from http://en.wikipedia.org/wiki/There_are_known_knowns

Todd, P., Gigerenzer, G., and the ABC Research Group. (2012). What is ecological rationality? In Ecological Rationality: Intelligence in the World. 3-30. Oxford: Oxford University Press.

von Hippel, W. and Trivers, R. (2011). The evolution and psychology of self-deception. Behavioral and Brain Sciences, 34. 1-56.

Weinstein, E. A. and Kahn, R. L. (1955). Denial of Illness: Symbolic and Physiological Aspects. Springfield, IL: Charles C. Thomas.

Weinstein, N. (1980). Unrealistic optimism about future life events. Journal of Personality and Social Psychology, 39. 806-820.

Wigner, E. (1960). The unreasonable effectiveness of mathematics in the natural sciences. Richard Courant lecture in mathematical sciences delivered at New York University, May 11, 1959. Communications on Pure and Applied Mathematics. 13. 1-14. doi: 10.1002

Lamps Instead of Ladies: The Hard Problem Explained

by rsbakker

This is another repost, this one from 2012/07/04. I like it I think because of the way it makes the informatic stakes of the hard problem so vivid. I do have some new posts in the works, but Golgotterath has been gobbling up more and more of my creative energy of late. For those of you sending off-topic comments asking about a publication date for The Unholy Consult, all I can do is repeat what I’ve been saying for quite some time now: You’ll know when I know! The book is pretty much writing itself through me at this point, and from the standpoint of making good on the promise of this series, I think this is far and away the best way to proceed. It will be done when it tells me it’s done. I would rather frustrate you all with an extended wait than betray the series. If you want me to write faster, cut me cheques, shame illegal downloaders, or simply thump the tub as loud as you can online and in print. So long as The Second Apocalypse remains a cult enterprise, I simply have to continue working on completing my PhD.

.

The so-called “hard problem” is generally understood as the problem consciousness researchers face in closing Joseph Levine’s “explanatory gap,” the question of how mere physical systems can generate conscious experience. The problem is that, as Descartes noted centuries ago, consciousness is so damned peculiar when compared to the natural world that it reveals. On the one hand you have qualia, or the raw feel, the ‘what-it-is-like’ of conscious experiences. How could meat generate such bizarre things? On the other hand you have intentionality, the aboutness of consciousness, as well as the related structural staples of the mental, the normative and the purposive.

In one sense, my position is a mainstream one: consciousness is another natural phenomenon that will be explained naturalistically. But it is not just another natural phenomenon: it is the natural phenomenon that is attempting to explain itself naturalistically. And this is where the problem becomes an epistemological nightmare – or very, very hard.

This is why I espouse what might be called a “Dual Explanation Account of Consciousness.” Any one of the myriad theories of consciousness out there could be entirely correct, but we will never know this because we disagree about just what must be explained for an explanation of consciousness to count as ‘adequate.’ The Blind Brain Theory explains the hardness of the hard problem in terms of the information we should expect the conscious systems of the brain to lack. The consciousness we think we cognize, I want to argue, is the product of a variety of ‘natural anosognosias.’ The reason everyone seems to be barking up the wrong explanatory tree is simply that we don’t have the consciousness we think we do.

Personally, I’m convinced this has to be the case to some degree. Let’s call the cognitive system involved in natural explanation the ‘NE system.’ The NE system, we might suppose, originally evolved to cognize external environments: this is what it does best. (We can think of scientific explanation as a ‘training up’ of this system, pressing it to its peak performance). At some point, the human brain found it more and more reproductively efficacious to cognize onboard information – data from itself – as well. In addition to continually sampling and updating environmental information, it began doing the same with its own neural information.

Now if this marks the genesis of human self-consciousness, the confusions we collectively call the ‘hard problem’ become the very thing we should expect. We have an NE system exquisitely adapted over hundreds of millions of years to cognize environmental information suddenly forced to cognize 1) the most complicated machinery we know of in the universe (itself); 2) from a fixed (hardwired) ‘perspective’; and 3) with scarcely more than a million years of evolutionary tuning.

Given this (and it seems fairly airtight to me), we should expect that the NE system would have enormous difficulty cognizing consciously available information. (1) suggests that the information gleaned will be drastically fractional. (2) suggests that the information accessed will be thoroughly parochial, but also, entirely ‘sufficient,’ given the NE’s rank inability to ‘take another perspective’ relative to the gut brain the way it can relative to its external environments. (3) suggests the information provided will be haphazard and distorted, the product of kluge-type mutations. [See “Reengineering Dennett” for a more recent consideration of this in terms of ‘dimensionality.’]

In other words, (1) implies ‘depletion,’ (2) implies ‘truncation’ (since we can’t access the causal provenance of what we access), and (3) implies a motley of distortions. Your NE is quite literally restricted to informatic scraps.

This is the point I keep hammering in my discussions with consciousness researchers: our attempts to cognize experience utilize the same machinery that we use to cognize our environments – evolution is too fond of ‘twofers’ to assume otherwise, too cheap. Given this, the “hard problem” not only begins to seem inevitable, but also something that probably every other biologically conscious species in the universe suffers. The million dollar question is this: If information privation generates confusion and illusion regarding phenomena within consciousness, why should it not generate confusion and illusion regarding consciousness itself?

Think of the myriad mistakes the brain makes: just recently, while partying with my brother-in-law on the front porch, we became convinced that my neighbour from across the street was standing at her window glaring at us – I mean, convinced. It wasn’t until I walked up to her house to ask whether we were being too noisy (or noisome!) that I realized it was her lamp glaring at us (it never liked us anyway), that it was a kooky effect of light and curtains. What I’m saying is that peering at consciousness is no different than peering at my neighbour’s window, except that we are wired to the porch, and so have no way of seeing lamps instead of ladies. Whether we are deliberating over consciousness or deliberating over neighbours, we are limited to the same cognitive systems. As such, it simply follows that the kinds of distortions information privation causes in the one also pertain to the other. It only seems otherwise with consciousness because we are hardwired to the neural porch and have no way of taking a different informatic perspective. And so, for us, it just is the neighbour lady glaring at us through the window, even though it’s not.

Before we can begin explaining consciousness, we have to understand the severity of our informatic straits. We’re stranded: both with the patchy, parochial neural information provided, and with our ancient, environmentally oriented cognitive systems. The result is what we call ‘consciousness.’

The argument in sum is pretty damn strong: Consciousness (as it is) evolved on the back of existing, environmentally oriented cognitive systems. Therefore, we should assume that the kinds of information privation effects pertaining to environmental cognition also apply to our attempts to cognize consciousness. (1), (2), and (3) give us good reason to assume that consciousness suffers radical information privation. Therefore, odds are we’re mistaking a good number of lamps for ladies – that consciousness is literally not what we think it is.

Given the breathtaking explanatory successes of the natural sciences, then, it stands to reason that our gut antipathy to naturalistic explanations of consciousness is primarily an artifact of our ‘brain blindness.’

What we are trying to explain, in effect, is information that has to be depleted, truncated, and distorted – a lady that quite literally does not exist. And so when science rattles on about ‘lamps,’ we wave our hands and cry, “No-no-no! It’s the lady I’m talking about.”

Now I think this is a pretty novel, robust, and nifty dissection of the Hard Problem. Has anyone encountered anything similar anywhere? Does anyone see any obvious assumptive or inferential flaws?

Technocracy, Buddhism, and Technoscientific Enlightenment (by Benjamin Cain)

by rsbakker

In “Homelessness and the Transhuman” I used some analogies to imagine what life without the naive and illusory self-image would be like. The problem of imagining that enlightenment should be divided into two parts. One is the relatively uninteresting issue of which labels we want to use to describe something. Would an impersonal, amoral, meaningless, and purposeless posthuman, with no consciousness or values as we usually conceive of them, “think” at all? Would she be “alive”? Would she have a “mind”? Even if there are objective answers to such questions, the answers don’t really matter since however far our use of labels can be stretched, we can always create a new label. So if the posthuman doesn’t think, maybe she “shminks,” where shminking is only in some ways similar to thinking. This gets at the second, conceptual issue here, though. The interesting question is whether we can conceive of the contents of posthuman life. For example, just what would be the similarities and differences between thinking and shminking? What could we mean by “thought” if we put aside the naive, folk psychological notions of intentionality, truth, and value? We can use ideas of information and function to start to answer that sort of question, but the problem is that this taxes our imagination because we’re typically committed to the naive, exoteric way of understanding ourselves, as R. Scott Bakker explains.

One way to get clearer about what the transformation from confused human to enlightened posthuman would entail is to consider an example that’s relatively easy to understand. So take the Netflix practice described by Andrew Leonard in “How Netflix is Turning Viewers into Puppets.” Apparently, more Americans now watch movies legally streamed over the internet than they do on DVD or Blu-Ray, and this allows the stream providers to accumulate all sorts of data that indicate our movie preferences. When we pause, fast forward or stop watching streamed content, we supply companies like Netflix with enormous quantities of information which their number crunchers explain with a theory about our viewing choices. For example, according to Leonard, Netflix recently spent $100 million to remake the BBC series House of Cards, based on that detailed knowledge of viewers’ habits. Moreover, Netflix learned that the same subscribers who liked that earlier TV show also tend to like Kevin Spacey, and so the company hired Kevin Spacey to star in the remake.

So the point isn’t just that entertainment providers can now amass huge quantities of information about us, but that they can use that information to tailor their products to maximize their profits. In other words, companies can now come much closer to giving us exactly what we objectively want, as indicated by scientific explanations of our behaviour. As Leonard says, “The interesting and potentially troubling question is how a reliance on Big Data [all the data that’s now available about our viewing habits] might funnel craftsmanship in particular directions. What happens when directors approach the editing room armed with the knowledge that a certain subset of subscribers are opposed to jump cuts or get off on gruesome torture scenes or just want to see blow jobs. Is that all we’ll be offered? We’ve seen what happens when news publications specialize in just delivering online content that maximizes page views. It isn’t always the most edifying spectacle.”

So here we have an example not just of how technocrats depersonalize consumers, but of the emerging social effects of that technocratic perspective. There are numerous other fields in which the fig leaf of our crude self-conception is stripped away and people are regarded as machines. In the military, there are units, targets, assets, and so forth, not free, conscious, precious souls. Likewise, in politics and public relations, there are demographics, constituents, and special interests, and such categories are typically defined in highly cynical ways. Again, in business there are consumers and functionaries in bureaucracies, not to mention whatever exotic categories come to the fore in Wall Street’s mathematics of financing. Again, though, it’s one thing to depersonalize people in your thoughts, but it’s another to apply that sophisticated conception to some professional task of engineering. In other words, we need to distinguish between fantasy- and reality-driven depersonalization. Military, political, and business professionals, for example, may resort to fashionable vocabularies to flatter themselves as insiders or to rationalize the vices they must master to succeed in their jobs. Then again, perhaps those vocabularies aren’t entirely subjective; maybe soldiers can’t psych themselves up to kill their opponents unless they’re trained to depersonalize and even to demonize them. And perhaps public relations, marketing, and advertising are even now becoming more scientific.

.

The Double Standard of Technocracy

Be that as it may, I’d like to begin with just the one, pretty straightforward example of creating art to appeal to the consumer, based on inferences about patterns in mountains of data acquired from observations of the consumer’s behaviour. As Leonard says, we don’t have to merely speculate on what will likely happen to art once it’s left in the hands of bean counters. For decades, producers of content have researched what people want so that they could fulfill that demand. It turns out that the majority of people in most societies have bad taste owing to their pedestrian level of intelligence. Thus, when an artist is interested in selling to the largest possible audience to make a short-term profit, that is, when the artist thinks purely in such utilitarian terms, she must give those people what they want, which is drivel. And if all artists come to think that way, the standard of art (of movies, music, paintings, novels, sports, and so on) is lowered. Leonard points out that this happens in online news as well. The stories that make it to the front page are stories about sex or violence, because that’s what most people currently want to see.

So entertainment companies that will use this technoscience (the technology that accumulates data about viewing habits plus the scientific way of drawing inferences to explain patterns in those data) have some assumptions I’d like to highlight. First, these content producers are interested in short-term profits. If they were interested in long-term ones and were faced with depressing evidence of the majority’s infantile preferences, the producers could conceivably raise the bar by selling not to the current state of consumers but to what consumers could become if exposed to a more constructive, challenging environment. In other words, the producers could educate or otherwise improve the majority, suffering the consumers’ hostility in the short-term but helping to shape viewers’ preferences for the better and betting on that long-term approval. Presumably, this altruistic strategy would tend to fail because free-riders would come along and lower the bar again, tempting consumers with cheap thrills. In any case, this engineering of entertainment is capitalistic, meaning that the producers are motivated to earn short-term profit.

Second, the producers are interested in exploiting consumers’ weaknesses. That is, the producers themselves behave as parasites or predators. Again, we can conclude that this is so because of what the producers choose to observe. Granted, the technology offers only so many windows into the consumer’s preferences; at best, the data show only what consumers currently like to watch, not the potential of what they could learn to prefer if given the chance. Thus, these producers don’t think in a paternalistic way about their relationship with consumers. A good parent offers her child broccoli, pickles, and spinach rather than just cookies and macaroni and cheese, to introduce the child to a variety of foods. A good parent wants the child to grow into an adult with a mature taste. By contrast, an exploitative parent would feed her daughter, say, only what she prefers at the moment, in her current low point of development, ensuring that the youngster will suffer from obesity-related health problems when she grows up. Likewise, content producers are uninterested in polling to discern people’s potential for greatness, by asking about their wishes, dreams, or ideals. No, the technology in question scrutinizes what people do when they vegetate in front of the TV after a long, hard day on the job. The content producers thus learn what we like when we’re effectively infantilized by television, when the TV literally affects our brain waves, making us more relaxed and open to suggestion, and the producers mean to exploit that limited sample of information, as large as it may be. Thus, the producers mean to cash in by exploiting us when we’re at our weakest, to profit by creating an environment that tempts us to remain in a childlike state and that caters to our basest impulses, to our penchant for fallacies and biases, and so on. So not only are the content producers thinking as capitalists, they’re predators/parasites to boot.

Finally, this engineering of content depends on the technoscience in question. Acquiring huge stores of data is useless without a way of interpreting the data. The companies must look for patterns and then infer the consumer’s mindset in a way that’s testable. That is, the inferences must follow logically from a hypothesis that’s eventually explained by a scientific theory. That theory then supports technological applications. If the theory is wrong, the technology won’t work; for example, the streamed movies won’t sell.
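To fix the genre of inference being described, here is a deliberately toy sketch in Python, with invented numbers and invented names throughout; it illustrates similarity-based pattern-finding of the general sort at issue, and is in no way a reconstruction of Netflix’s actual system.

# Toy 'taste inference' from viewing data. The data and the method
# (similarity-weighted averaging) are purely illustrative.
import numpy as np

# Rows are subscribers, columns are shows; entries are the fraction
# of each show actually watched.
watch = np.array([
    [1.0, 0.9, 0.1, 0.8],   # subscriber A
    [0.9, 1.0, 0.0, 0.7],   # subscriber B (tastes much like A)
    [0.1, 0.0, 1.0, 0.2],   # subscriber C (a different cluster)
])

def cosine(u, v):
    """Similarity of two viewing histories."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def predicted_watch(target, show):
    """Similarity-weighted average of what the others watched of `show`."""
    others = [i for i in range(len(watch)) if i != target]
    weights = np.array([cosine(watch[target], watch[i]) for i in others])
    amounts = np.array([watch[i, show] for i in others])
    return np.dot(weights, amounts) / weights.sum()

# The 'hypothesis' is that similar histories predict similar behaviour;
# the 'test' is whether the prediction matches what subscribers then do.
print(round(predicted_watch(target=0, show=3), 2))   # ~0.61; A in fact watched 0.8

The shape of the procedure is the point: observed behaviour goes in, predicted behaviour comes out, and the consumer’s ‘mindset’ figures only as a pattern in the data.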

The upshot is that this scientific engineering of entertainment is based on only a partial depersonalization: the producers depersonalize the consumers while leaving their own personal self-image intact. That is, the content producers ignore how the consumers naively think of themselves, reducing them to robots that can be configured or contained by technology, but the producers don’t similarly give up their image of themselves as people in the naive sense. Implicitly, the consumers lose their moral, if not their legal, rights when they’re reduced to robots, to passive streamers of content that’s been carefully designed to appeal to the weakest part of them, whereas the producers will be the first to trumpet their moral and not just their legal right to private property. The consumers consent to purchase the entertainment, but the producers don’t respect them as dignified beings; otherwise, again, the producers would think more about lifting these consumers up instead of just exploiting their weaknesses for immediate returns. Still, the producers think of themselves, surely, as normatively superior. Even if the producers style themselves as Nietzschean insiders who reject altruistic morality and prefer a supposedly more naturalistic, Ayn Randian value system, they still likely glorify themselves at the expense of their victims. And even if some of those who profit from the technocracy are literally sociopathic, that means only that they don’t feel the value of those they exploit; nevertheless, a sociopath acts as an egotist, which means she presupposes a double standard, one for herself and one for everyone else.

.

From Capitalistic Predator to Buddhist Monk

What interests me about this inchoate technocracy, this business of using technoscience to design and manage society, is that it functions as a bridge to imagining a possible posthuman state. To cross over in our minds to the truly alien, we need stepping stones. Netflix is analogous to enlightened posthumanity in that Netflix is part of the way toward that destination. So when we consider Netflix we stand closer to the precipice and we can ask ourselves what giving up the rest of the personal self-image would be like. So suppose a content provider depersonalizes everyone, viewing herself, too, as just a manipulable robot. On this supposition, the provider becomes something like a Buddhist who can observe her impulses and preferences without being attached to them. She can see the old self-image still operating in her mind, sustained as it is by certain neural circuits, but she’s trained not to be mesmerized by that image. She’s learned to see the reality behind the illusion, the code that renders the matrix. So she may still be inclined in certain directions, but she won’t reflexively go anywhere. She has the capacity to exploit the weak and to enrich herself, and she may even be inclined to do so, but because she doesn’t identify with the crudely-depicted self, she may not actually proceed down that expected path. In fact, the mystery remains as to why any enlightened person does whatever she does.

This calls for a comparison between the posthuman’s science-centered enlightenment and the Buddhist kind. The sort of posthuman self I’m trying to imagine transcends the traditional categories of the self, on the assumption that these categories rest on ignorance owing to the brain’s native limitations in learning about itself. The folk categories are replaced with scientific ones and we’re left wondering what we’d become were we to see ourselves strictly in those scientific terms. What would we do with ourselves and with each other? The emerging technocratic entertainment industry gives us some indication, but I’ve tried to show that that example provides us with only one stepping stone. We need another, so let’s try that of the Buddhist.

Now, Buddhist enlightenment is supposed to consist of a peaceful state of mind that doesn’t turn into any sort of suffering, because the Buddhist has learned to stop desiring any outcome. You only suffer when you don’t get what you want, and if you stop wanting anything, or more precisely if you stop identifying with your desires, you can’t be made to suffer. The lack of any craving for an outcome entails a discarding of the egoistic pretense of your personal independence, since it’s only when you identify narrowly with some set of goals that you create an illusion that’s bound to make you suffer, because the illusion is out of alignment with reality. In reality, everything is interconnected and so you’re not merely your body or your mind. When you assume you are, the world punishes you in a thousand degrees and dimensions, and so you suffer because your deluded expectations are dashed.

Here are a couple of analogies to clarify how this Buddhist frame of mind works, according to my understanding of it. Once you’ve learned to drive a car, driving becomes second nature to you, meaning that you come to identify with the car as your extended body. Prior to that identification, when you’re just starting to drive, the car feels awkward and new because you experience it as a foreign body. When you’ve familiarized yourself with the car’s functions, with the rules of the road, and with the experience of driving, sitting in the driver’s seat feels like slipping on an old pair of shoes. Every once in a while, though, you may snap out of that familiarity. When you’re in the middle of an intersection, in a left turn lane, you may find yourself looking at cars anew and being amazed and even a little scared about your current situation on the road: you’re in a powerful vehicle, surrounded by many more such vehicles, following all of these signs to avoid being slammed by those tons of steel. In a similar way, a native speaker of a language becomes very familiar with the shapes of the symbols in that language, but every now and again, when you’re distracted perhaps, you can slip out of that familiarity and stare in wonder at a word you’ve used a thousand times, like a child who’s never seen it before.

What I’m trying to get at here is the difference between having a mental state and identifying with it, which difference I take to be central to Buddhism. Being in a car is one thing, identifying with it is literally something else, meaning that there’s a real change that happens when driving becomes second nature to you. Likewise, having the desire for fame or fortune is one thing, identifying with either desire is something else. A Buddhist watches her thoughts come and go in her mind, detaching from them so that the world can’t upset her. But this raises a puzzle for me. Once enlightened, why should a Buddhist prefer a peaceful state of mind to one of suffering? The Buddhist may still have the desire to avoid pain and to seek peace, but she’ll no longer identify with either of those or with any other desire. So assuming she acts to lessen suffering in the world, how are those actions caused? If an enlightened Buddhist is just a passive observer, how can she be made to do anything at all? How can she lean in one direction or another, or favour one course of action rather than another? Why peace rather than suffering?

Now, there’s a difference between a bodhisattva and a Buddha: the former harbours a selfless preference to help others achieve enlightenment, whereas the latter gives up on the rest of the world and lives in a state of nirvana, which is passive, metaphysical selflessness. So a bodhisattva still has an interest in social engagement and merely learns not to identify so strongly with that interest, to avoid suffering if the interest doesn’t work out and the world slams the door in her face, whereas a Buddha may extinguish all of her mental states, effectively lobotomizing herself. Either way, though, it’s hard to see how the Buddhist could act intelligently, which is to say exhibit some pattern in her activities that reflects a pattern in her mind and acts at least as the last step in the chain of interconnected causes of her actions. A bodhisattva has desires but doesn’t identify with them and so can’t favor any of them. How, then, could this Buddhist put any morality into practice? Indeed, how could she prefer Buddhism to some other religion or worldview? And a Buddha may no longer have any distinguishable mental states in the first place, so she would have no interests to tempt her with the potential for mental attachments. Thus, we might expect full enlightenment in the Buddhist sense to be a form of suicide, in which the Buddhist neglects all aspects of her body because she’s literally lost her mind and thus her ability to care or to choose to control herself or even to manage her vital functions. (In Hinduism, an elderly Brahmin may choose this form of suicide for the sake of moksha, which is supposed to be liberation from nature, and Buddhism may explain how this suicide becomes possible for the enlightened person.)

The best explanation I have of how a Buddhist could act at all is the Taoist one that the world acts through her. The paradox of how the Buddhist’s mind could control her body even when the Buddhist dispenses with that mind is resolved if we accept the monist ontology in which everything is interconnected and so unified. Even if an enlightened Buddha loses personal self-control, this doesn’t mean that nothing happens to her, since the Buddhist’s body is part of the cosmic whole, and so the world flows in through her senses and out through her actions. The Buddhist doesn’t egoistically decide what to do with herself, but the world causes her to act in one way or another. Her behaviour, then, shouldn’t reflect any private mental pattern, such as a personal character or ego, since she’s learned to see through that illusion, but her actions will reflect the whole world’s character, as it were.

.

From Buddhist Monk to Avatar of Nature

Returning to the posthuman, the question raised by the Buddhist stepping stone is whether we can learn what it would be like to experience the death of the manifest image, the absence of the naive, dualistic and otherwise self-glorifying conception of the self, by imagining what it would be like to be the sun, the moon, the ocean, or just a robot. That’s how a scientifically enlightened posthuman would conceive of “herself”: she’d understand that she has no independent self but is part of some natural process, and if she’d identify with anything it would be with that larger process. Which process? Any selection would betray a preference and thus at least a partial resurrection of the ghostly, illusory self. The Buddhist gets around this with metaphysical monism: if everything is interconnected, the universe is one and there’s no need to choose what you are, since you’re metaphysically everything at once. So if all natural processes feed into each other, nature is a cosmic whole, and the posthuman sees very far and wide, sampling enough of nature to understand the universe’s character so that she’d presumably understand her actions to flow from that broader character.

And just here we reach a difference between Eastern (if not specifically Buddhist) and technoscientific enlightenment. Strictly speaking, Buddhism is atheistic, I think, but some forms of Buddhism are pantheistic, meaning that some Buddhists personify the interconnected whole. If we suppose that technoscience will remain staunchly atheistic, we must assume only that there are patterns in nature and not any character or ghostly Force or anything like that. Thus, if a posthuman can’t identify with the traditional myth of the self, with the conscious, rational, self-controlling soul, and yet the posthuman is to remain some distinct entity, I’m led to imagine this posthuman entity as an avatar of lifeless nature. What does nature do with its forces? It evolves molecules, galaxies, solar systems, and living species. The posthuman would be a new force of nature that would serve those processes of complexification and evolution, creating new orders of being. The posthuman would have no illusion of personal identity, because she’d understand too well the natural forces at work in her body to identify so narrowly and desperately with any mere subset of their handiwork. Certainly, the posthuman wouldn’t cling to any byproduct of the brain, but would more likely identify with the underlying, microphysical patterns and processes.

So would this kind of posthumanity be a force for good or evil? Surely, the posthuman would be beyond good or evil, like any natural force. Moral rules are conventions to manage deluded robots like us who are hypnotized by our brain’s daydream of our identity. Values derive from preferences of some things as better than others, which in turn depend on some understanding of The Good. In the technoscientific picture of nature, though, goodness and badness are illusions, but this doesn’t imply anything like the Satanist’s exhortation to do whatever you want. The posthuman would have as many wants as the rain when the rain falls from the sky. She’d have no ego to flatter, no will to express. Nevertheless, the posthuman would be caused to act, to further what the universe has already been doing for billions of years. I have only a worm’s understanding of that cosmic pattern. I speak of evolution and complexification, but those are just placeholders, like an empty five-line staff in modern musical notation. If we’re imagining a super-intelligent species that succeeds us, I take it we’re thinking of a species that can read the music of the spheres and that’s compelled to sing along.

Metaphilosophical Reflections III: The Skeptical Dialectic

by reichorn

“Human reason is a two-edged and dangerous sword.”

– Montaigne, “Of Presumption”

—————————————————–

This is the third in a series of guest-blogger posts by me, Roger Eichorn.  The first two posts can be found here and here.

I’m also a would-be fantasy author.  The first three chapters of my novel, The House of Yesteryear, can be found here.  I’ve also recently uploaded the first of what will be two ‘Bonus Scenes’ from later in the book.  You can find it here, if you’re into that sort of thing.

—————————————————–

In my previous post, I argued that skepticism and philosophy are inextricably entwined.  Following Hegel, Michael Forster has made a similar argument, and I’ve benefited a great deal (and cribbed) from his discussion.  But whereas Forster stops with the claim that an engagement (direct or indirect) with skepticism is a defining feature of philosophy, I’ve gone farther and tried to develop a conceptual framework for understanding why this is the case.  My explanation turns on the notion of presuppositions.  The view, in short, is this:

  1. Intellectual inquiry can make determinate progress only against a background of unquestioned fundamental premises, propositions, or assumptions (what I call ‘presuppositions’).
  2. These fundamental presuppositions provide contexts for inquiry; they are like boundary-markers or the rules of a game, in that overstepping or questioning them entails ceasing to play the ‘discursive game’ they enclose or constitute.
  3. Calling into question context-constitutive presuppositions involves a kind of skepticism.
  4. Stepping outside of a presupposition context entails ‘going meta,’ i.e., it entails transitioning into a more abstract domain of inquiry.
  5. Given (3) and (4), it is skepticism that pushes us to ever-greater levels of discursive–epistemological abstraction.
  6. In ‘going meta,’ we end up—either immediately or after some intermediary steps—within the domain of philosophy.
  7. Given (5) and (6), it is skepticism that leads us to philosophy, i.e., philosophy begins in skepticism.
  8. There is no uncontroversial rationale that is both global and principled for forestalling the possibility of ‘going meta,’ i.e., of calling into question any presupposition.  (Principled rationales are always context-specific or ‘local.’  The claim I’m making here, then, is that there are no principled meta-contextual, i.e., global, rationales for forestalling the questioning of a presupposition or set of presuppositions.)
  9. Given (8), according to which any presupposition can be called into question, and (6), according to which philosophy is the domain of inquiry one occupies (sooner or later) in calling presuppositions into question, it follows that philosophy as such possesses no definitive presupposition-set of its own.
  10. Given (1) and (9), philosophy can make no determinate progress.
  11. Given (10), philosophy ends in skepticism.

This argument can, of course, be challenged on any number of fronts.  I have not, for instance, made a sufficient case for (1).  I touched on it in my previous post (where I mentioned Stalnaker and Wittgenstein), but I did not attempt to defend the view in any detail.  Nor, in the interests of space, am I going to do so here.  It should be enough for now to note (1)’s extreme plausibility.  If we visualize intellectual progress as involving forward movement, and the act of questioning presuppositions as involving backward movement, then it’s easy to see that we can make progress only if we’re not calling presuppositions into question: we have to stop moving backward before we can move forward.  Given (8)—which is itself a plausible view, though with its own complications—these presuppositions-of-inquiry must remain unquestioned, either in the sense of (a) never having been thematized or (b) being set aside, “apart from the route travelled by enquiry” (Wittgenstein, On Certainty, §88), whether (i) they are recognized as questionable though necessarily unquestioned (just as the rules of a game are questionable, but cannot be questioned from within the game itself) or (ii) they are (mis)taken as lying beyond all question (as in the form of indubitable first principles, the supposedly self-evident, etc.).

In this post, I want to elaborate—and with any luck buttress—my case for (3), (4), and (6).  I want, in other words, to get clearer on the dialectical relations among presuppositions, skepticism, and philosophy.

—————————————————–

In earlier posts, I introduced the idea of ‘common life,’ which I’m conceptualizing here as the general, usually invisible presupposition context that frames our everyday sayings and doings.  Common life is our twofold inheritance as beings who are both embodied in nature and embedded in a society; it is our natural medium, the subcognitive water for us cognitive fishes.  When we are, as Hubert Dreyfus or Richard Rorty (influenced by Heidegger and pragmatism) would put it, smoothly and effortlessly ‘coping with the world,’ the fact of common life’s inherent questionability—its possible contingency—never presents itself.  At such times, common life is (to borrow some Heideggerian terminology) ‘inconspicuous’ (see: Being and Time, §§15–6).  Common life becomes ‘conspicuous’ only as a result of disruptions in the orderly flow of our everyday lives.  Such disruptions can be relatively minor (what Heidegger called the mode of ‘obtrusiveness’).  But they can also be more significant (what Heidegger called the mode of ‘obstinacy’).  The deeper the disruption, the more the presuppositional structure of common life comes into view.  The more the presuppositional structure of common life comes into view, the higher its ‘index of questionability’ climbs (cf., Luciano Floridi, Scepticism and the Foundation of Epistemology, Ch. 4).

Initially, then, we occupy the standpoint of common life as what I call ‘everyday dogmatists.’  This means that we acquiesce, usually unconsciously, in everyday dogmatisms: we (mis)take (again, usually only implicitly) the presuppositions of common life for known truths.

[Slide 1]

Michel de Montaigne wrote that “[p]resumption is our natural and original malady” (Apology for Raymond Sebond).  Everyday dogmatism is, in his terms, ‘everyday presumption.’  In her book on Montaigne, Ann Hartle characterizes everyday presumption as “the unreflective milieu of prephilosophical certitude, the sea of opinion in which we are immersed” (Montaigne: Accidental Philosopher, p. 106).  Human beings are, as I like to put it, natural-born dogmatists.

Common life provides us not only with first-order beliefs, but also with more or less established means of adjudicating many, even most, sorts of dispute.  For instance, authoritative scriptures belong to the presupposition-framework of the common life into which many people are born.  For such people, appeal to scripture is capable of settling certain kinds of dispute: in these cases, common life itself provides the resources that allow for the resolution of conflicts that arise within common life.

An initial challenge to an everyday dogmatism is issued.  Here we encounter the most rudimentary form of skepticism.  The skeptical challenge gives rise to a state of dissatisfaction: there is a felt need to resolve the conflict, to ‘refute’ the skeptic and restore our earlier confidence in the dogmatisms of common life.  In many cases of such skeptical challenges, the dissatisfaction in question can be resolved simply by drawing more water from the well of everyday dogmatisms.  In more extreme cases, the skeptical challenges can be resolved only by appealing to the context-constitutive presuppositions of common life.  Either way, what we have is a kind of circular dialectic of skepticism and dogmatism.

[Slide 2]

In time, though, the skeptical challenges grow more sophisticated.  They reach their apogee when they call into question not just intracontextual everyday dogmatisms, nor just one or another context-constitutive presupposition of common life, but rather common life as a whole.  When that happens, it becomes clear that no appeal to everyday dogmatisms can satisfactorily answer the skeptical challenge, for the skeptical challenge now calls into question the entire domain of everyday dogmatisms.

Consider a simple case of perceptual skepticism.  You see a tree.  You think you know it’s a tree, precisely because you can see it (and you know what trees are, what they look like, etc.).  This is an entirely acceptable everyday judgment, accompanied by an entirely acceptable everyday justification.  Then a skeptic comes along and asks you how you know that what you think you see is actually a tree.  At this point, no dissatisfaction arises, since you have to hand your everyday justification.  But the skeptic presses the point: “How do you know it’s not an extraordinarily lifelike papier-mâché tree?”  This might be enough to give rise to dissatisfaction; if not, then imagine that the skeptic has some further story to tell about how the city in which you both live has funded an art project that involves the creation of amazingly lifelike papier-mâché trees.  Now you’re prepared to call into question your belief that it’s a tree (along with the sufficiency of your everyday justification).  What do you do now?  Obviously, you walk up to the tree and inspect it.  The skeptic has hardly deprived you of all your everyday means of settling disputes.  You poke the tree, peel back its bark, pluck off a leaf, and conclude that, clearly, this is not a papier-mâché tree.  But what do you do when the skeptic smiles and asks, “Fair enough.  But how do you know you’re not dreaming?”

Now, most of us would, most of the time, simply dismiss this question as nonsense.  We’d say, “‘O, rubbish!’ to someone who wanted to make objections to the propositions that are beyond doubt.  That is, not reply to him but admonish him” (Wittgenstein, On Certainty, §495).  But the problem of justification remains.  Most of us are going to believe that we’re justified in claiming to know that we’re not dreaming (even more so that we’re not dreaming all the time) and that we therefore know all sorts of things about the world as a result of our present and past experiences.  Nothing is easier, in the course of our everyday lives, than to dismiss this sort of worry.  But if it nags at us—if it persists as a source of dissatisfaction—then we’re going to want to find an answer to the skeptic.  But, ex hypothesi, we’ve accepted the fact that we cannot answer the skeptical challenge by appealing to our experience (in the broader case: to common life or its presuppositions), since the skeptical challenge has called into question the veridicality of our experience in toto (in the broader case: the veridicality of common life and its presuppositions in toto).  What do we do?

Bearing in mind that this whole process is animated by a commitment to truth and rationality (by what Nietzsche called our ‘intellectual conscience’), without which our capacity for epistemico-existential crises would be severely limited, there seems only one path open to us: that is, to repudiate the inherent authority of common life in favor of what I call autonomous reason.

[Slide 3]

I borrow the phrase ‘autonomous reason’ from Donald Livingston’s book on Hume (Hume’s Philosophy of Common Life).  Livingston claims that, for Hume, philosophy is committed to autonomous reason, according to which “it is philosophically irrational to accept any standard, principle, custom, or tradition of common life unless it has withstood the fires of critical philosophical reflection” (23).  We can quibble about whether or not this applies to every philosopher or even every philosophical tradition; but that’s beside the point if the claim is correct in the main—and I think it is.  Moreover, I think it’s not just superficially correct (‘in the main’), but that it illuminates a deep and important feature of philosophy that goes back to its very earliest manifestations.

Philosophy is, at least initially, predicated on skepticism regarding common life.  Thus, it seeks autonomy.  The philosophy–common life distinction can be understood in terms of the familiar dichotomy between reason and tradition.  Reason’s autonomy from tradition is often taken to be a necessary feature of any properly critical enterprise.  As Kenneth Westphal has noted in referring to a “dichotomy, pervasive since the Enlightenment, that reason and tradition are distinct and independent resources”: “because tradition is a social phenomenon, reason must be an independent, individualistic phenomenon.  Otherwise it could not assess or critique tradition, because criticizing tradition requires an independent, ‘external’ standpoint and standards” (Hegel’s Epistemology, p. 77).  Westphal rejects this view, but it is common enough.  Nicholas Wolterstorff, for example, gives voice to it when he writes, “Traditions are still a source of benightedness, chicanery, hostility, and oppression…  In this situation, examining our traditions remains for many of us a deep obligation, and for all of us together, a desperate need” (John Locke and the Ethics of Belief, p. 246).  Enlightened reason, in other words, must be able to rise above the soup of prejudices that is common life; otherwise, it will be unable to establish the distance needed to criticize those traditions.

These metatheoretical concerns are usually articulated without any reference to skepticism.  Even when it is separated from the Kantian project, however, critique is best understood as a response to skepticism, an attempt to forge a middle way between skepticism and dogmatism.  The repudiation of the inherent authority of common life and the subsequent commitment to autonomous reason is predicated on a kind of skepticism.  And this kind of skepticism is not, as is commonly claimed or implied, unique to the modern period, whether as a whole or merely in character.  Rather, it was a precondition of the emergence of philosophical thought itself, 2,500 years ago.  The motto for this transition is vom Mythos zum Logos—from myth to reason.

—————————————————–

In his fascinating book The Discovery of the Mind—a study of conceptions of the self in archaic and ancient Greece—Bruno Snell refers to the emergence of a “social scepticism” that opened up a space within which individuals could call into question the epistemic and practical authority of the traditions into which they’d been born.  Given this sort of social skepticism, according to Snell, “[r]eality is no longer something that is simply given.  The meaningful no longer impresses itself as an incontrovertible fact, and appearances have ceased to reveal their significance directly to man.  All this really means that myth has come to an end” (p. 24).  The repudiation of myth was, on my picture, a repudiation by philosophers of common life, of the world of their fathers.  Malcolm Schofield has written that “[t]he transition from myths to philosophy… entails, and is the product of, a change that is political, social and religious rather than sheerly intellectual, away from the closed traditional society… and toward an open society in which the values of the past become relatively unimportant and radically fresh opinions can be formed both of the community itself and of its expanding environment…  It is this kind of change that took place in Greece between the ninth and sixth centuries B.C.” (The Presocratic Philosophers, pp. 73–4).

Going beyond the Eurocentrism of Snell and Schofield, Karl Jaspers developed the idea of what he calls ‘the Axial Age,’ a period of sudden social, political, and philosophical enlightenment that, he claimed, occurred nearly simultaneously and yet independently in Greece (with the Presocratics), India (with the Buddha), and China (with Confucianism and Daoism).  In this period, Jaspers writes, “hitherto unconsciously accepted ideas, customs and conditions were subjected to examination, questioned and liquidated.  Everything was swept into the vortex.  In so far as the traditional substance still possessed vitality and reality, its manifestations were clarified and thereby transmuted” (The Origin and Goal of History, p. 2).  As though to confirm Jaspers’s theory—though he was writing decades earlier—S. Radhakrishnan tells us that

[t]he age of the Buddha represents the great springtide of philosophical spirit in India.  The progress of philosophy is generally due to a powerful attack on a historical tradition when men feel themselves compelled to go back on their steps and raise once more the fundamental questions which their fathers had disposed of by the older schemes.  The revolt of Buddhism and Jainism… finally exploded the method of dogmatism and helped to bring about a critical point of view…  Buddhism served as a cathartic in clearing the mind of the cramping effects of ancient obstructions.  Scepticism, when it is honest, helps to reorganise belief.  (Indian Philosophy, Vol. 2, p. 18)

The notion of a clear-cut transition ‘from myth to reason’ is deeply entrenched in our cultural narrative, yet it is clearly problematic if understood in an overly simplistic way.  Just as Aristotle was not the first person to use logic, so the presocratic philosophers were not the first Greeks to use reason or to think reasonably.  Still, I think it is clear that something important occurred during the Axial Age.  It may not have been unprecedented, as some commentators want to claim, but its effects were, for it seems to me that we are still feeling those effects today.  The fundamental transition, I want to argue, is best understood not as a transition from myth to reason, but as a transition from common life to autonomous reason.

The ability of reasoning to call into question—to radically disrupt—common life was recognized very early.  Plato worries about it in the Republic:

We all have strongly held beliefs, I take it, going back to our childhood [i.e., our pretheoretical certainties], about things which are just and things which are fine and beautiful…  When someone… encounters the question ‘What is the beautiful?’, and gives the answer he used to hear from the lawgiver [i.e., from tradition], and argument shows it to be incorrect, what happens to him?  He may have many of his answers refuted, in many different ways, and be reduced to thinking that the beautiful is no more beautiful or fine than it is ugly or shameful.  The same with ‘just’, ‘good’, and the things he used to have more respect for.  At the end of this, what do you think his attitude to these strongly held beliefs will be, when it comes to respect for them and obedience to their authority?…  I imagine he’ll be thought to have changed from a law-abiding citizen into a criminal. (538c–539a)

We find the same recognition of the cultural–existential (as opposed to merely epistemological) threat of skepticism in Hegel.

The need to understand logic in a deeper sense than that of the science of mere formal thinking is prompted by the interest we take in religion, the state, the law and ethical life.  In earlier times, people had no misgivings about thought…  But while engaging in thinking… it turned out that the highest relationships of life are thereby compromised.  Through thinking, the positive state of affairs was deprived of its power…  Thus, for example, the Greek philosophers opposed the old religion and destroyed representations of it…  In this way, thinking made its mark on actuality and had the most awe-inspiring effect.  People thus became aware of the power of thinking and started to examine more closely its pretensions.  They professed to finding out that it claimed too much and could not achieve what it undertook.  Instead of coming to understand the essence of God, nature and spirit and in general the truth, thinking had overthrown the state and religion.  (Encyclopedia Logic, §19)

The transition to autonomous reason, then, is in many respects a desperate gamble, an attempt to salvage by way of reason what reason itself has taken away from us, namely, the certainty and stability of common life.

—————————————————–

Thus, the move to autonomous reason gives rise to a new kind of dogmatism: not the simple, inchoate, prereflective dogmatisms of common life, but sophisticated philosophical dogmatisms.  The hope of most architects of philosophical dogmatisms is to refute the skeptical challenges that led to the repudiation of common life, and thereby to restore common life on a more solid foundation.  Unfortunately for philosophical dogmatists, skepticism does not obediently remain at the level of common life, waiting to be overthrown; rather, it follows them up to the level of autonomous reason, continuing to attack them where they live.

[Slide 4]

As at the level of common life, the initial response to skeptical challenges to philosophical dogmas will involve a circular return to those same philosophical dogmas, in the hope of marshaling more resources with which to overthrow the skeptic.  But, again as at the level of common life, the skeptical challenges will eventually become sophisticated enough to call into question the entire epistemological project.  The result is metaepistemological skepticism.  Its most conceptually powerful, and historically influential, expression is found in the Agrippan Trilemma, which I briefly discussed in the previous post.  The fundamental challenge of the Trilemma at the epistemological level is this: How do you justify that which makes justification possible?  Just as the skeptical challenges at the level of common life ended up calling into question the presupposition context of common life as a whole, so skeptical challenges at the level of autonomous reason end up calling into question the presupposition context of autonomous reason as a whole.  The question, of course, is where this leaves us.

[Slide 5]

I’ll take up that question, among others, in my next post.