Meta-problem vs. Scandal of Self-Understanding
by rsbakker
Let’s go back to Square One.
Try to recall what it was like before what it was like became an issue for you. Remember, if you can, a time when you had yet to reflect on the bald fact, let alone the confounding features, of experience. Square One refers to the state of metacognitive naivete, what it was like when experience was an exclusively practical concern, and not at all a theoretical one.
David Chalmers has a new paper examining the ‘meta-problem’ of consciousness, the question of why we find consciousness so difficult to fathom. As in his watershed “Consciousness and Its Place in Nature,” he sets out to exhaustively map the dialectical and evidential terrain before adducing arguments. After cataloguing the kinds of intuitions underwriting the meta-problem, he pays particularly close attention to various positions within illusionism, insofar as these theories see the hard problem as an artifact of the meta-problem. He ends by attempting to collapse all illusionisms into strong illusionism—the thesis that consciousness doesn’t exist—which he thinks is an obvious reductio.
As Peter Hankins points out in his canny Conscious Entities post on the article, the relation between problem reports and consciousness is so vexed as to drag meta-problem approaches back into the traditional speculative mire. But there’s a bigger problem with Chalmers’s account of the meta-problem: it’s far too small. The meta-problem, I hope to show, is part and parcel of the scandal of self-knowledge, the fact that every discursive cork in Square Two (our theoretical, as opposed to practical, self-understanding), no matter how socially or individually indispensable, bobs upon the foam of philosophical disputation. The real question, the one our species takes for granted but alien anthropologists would find fascinating, is why humans find themselves so dumbfounding. Why does normativity mystify us? Why does meaning stupefy? And, of course, why is phenomenality so inscrutable?
Chalmers, however, wants you to believe the problem is restricted to phenomenality:
I have occasionally heard the suggestion that internal self-models will inevitably produce problem intuitions, but this seem[s] clearly false. We represent our own beliefs (such as my belief that Canberra is in Australia), but these representations do not typically go along with problem intuitions or anything like them. While there are interesting philosophical issues about explaining beliefs, they do not seem to raise the same acute problem intuitions as do experiences.
And yet, in the course of cataloguing various aspects of the meta-problem, Chalmers regularly finds himself referring to similarities between beliefs and consciousness:
Likewise, when I introspect my beliefs, they certainly do not seem physical, but they also do not seem nonphysical in the way that consciousness does. Something special is going on in the consciousness case: insofar as consciousness seems nonphysical, this seeming itself needs to be explained.
Both cognition and consciousness seem nonphysical, but not in the same way. Consciousness, Chalmers claims, is especially nonphysical. But if we don’t understand the ‘plain’ nonphysicality of beliefs, then why tackle the special nonphysicality of conscious experience?
Here the familiar problem strikes again:
Everything I have said about the case of perception also applies to the case of belief. When a system introspects its own beliefs, it will typically do so directly, without access to further reasons for thinking it has those beliefs. Nevertheless, our beliefs do not generate nearly as strong problem intuitions as our phenomenal experiences do. So more is needed to diagnose what is special about the phenomenal case.
If more is needed, then what sense does it make to begin looking for this ‘more’ in advance, without understanding what knowledge and experience have in common?
Interrogating the problems of intentionality and consciousness in tandem becomes even more imperative when we consider the degree to which Chalmers’s categorizations and evaluations turn on intentional vocabularies. The hard problem of consciousness may trigger more dramatic ‘problem intuitions,’ but it shares with the hard problem of cognition a profound inability to formulate its explananda. There’s no more consensus on the nature of belief than there is on the nature of consciousness. We remain every bit as stumped, if not quite as agog.
Not only do intentional vocabularies remain every bit as controversial as phenomenal ones in theoretical explanatory contexts, they also share the same apparent incompatibilities with natural explanation. Is it a coincidence that both vocabularies seem irreducible? Is it a coincidence they both seem nonphysical? Is it a coincidence that both seem incompatible with causal explanation? Is it a coincidence that each implicates the other?
Of course not. They implicate each other because they’re adapted to function in concert. Since they function in concert, there’s a good chance their shared antipathy to causal explanation turns on shared mechanisms. The same can be said regarding their apparent irreducible nonphysicality.
And the same can be said of the problem they pose.
Square Two, then, our theoretical self-understanding, is mired in theoretical disputation. Every philosopher (the present one included) will be inclined to think their understanding the exception, but this does nothing to change the fact of disputation. If we characterize the space of theoretical self-understanding—Square Two—as a general controversy space, we see that Chalmers, as an intentionalist, has taken a position in intentional controversy space to explicate phenomenal controversy space.
Consider his preferred account of the meta-problem:
To sum up what I see as the most promising approach: we have introspective models deploying introspective concepts of our internal states that are largely independent of our physical concepts. These concepts are introspectively opaque, not revealing any of the underlying physical or computational mechanisms. We simply find ourselves in certain internal states without having any more basic evidence for this. Our perceptual models perceptually attribute primitive perceptual qualities to the world, and our introspective models attribute primitive mental relations to those qualities. These models produce the sense of acquaintance both with those qualities and with our awareness of those qualities.
While the gist of this picture points in the right direction, the posits used—representations, concepts, beliefs, attributions, acquaintances, awarenesses—doom it to dwell in perpetual underdetermination, which is to say, discursive ground friendly to realists like Chalmers. It structures the meta-problem according to a parochial rationalization of terms no one can decisively formulate, let alone explain. It is bound, in other words, to drag the meta-problem into the greater scandal of self-knowledge.
To understand why Square Two has proven so problematic in general, one needs to take a step back, to relinquish their countless Square Two prejudices, and reconsider things from the standpoint of biology. Why, biologically speaking, should an organism find cognizing itself so difficult? Not only is this the most general form of the question that Chalmers takes himself to be asking, it is posed from a position outside the difficulty it interrogates.
The obvious answer is that biology, and cognitive biology especially, is so fiendishly complicated. The complexity of biology all but assures that cognition will neglect biology and fasten on correlations between ‘surface irritations’ and biological behaviours. Why, for instance, should a frog cognize fly biology when it need only strike at black dots?
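To see how cheap the frog’s solution is, consider a minimal sketch (purely illustrative Python; the Stimulus fields and every threshold are invented for illustration, not anyone’s model of actual anuran vision). The detector keys on surface correlates of food and contains nothing answering to fly biology:

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    size: float      # apparent size, degrees of visual angle
    darkness: float  # 0.0 (light) to 1.0 (dark)
    speed: float     # apparent motion, degrees per second

def strike(s: Stimulus) -> bool:
    """Heuristic 'fly detector': fires on surface cues alone.

    Nothing here models what a fly is; the rule exploits a
    correlation between these cues and food that held in the
    ancestral ecology. All thresholds are invented.
    """
    return s.size < 2.0 and s.darkness > 0.7 and s.speed > 5.0

fly = Stimulus(size=0.5, darkness=0.9, speed=20.0)
bb_pellet = Stimulus(size=0.5, darkness=0.9, speed=15.0)

print(strike(fly))        # True: cheap success where the correlation holds
print(strike(bb_pellet))  # True: a systematic misfire where it doesn't
```

The point of the cartoon is the asymmetry of cost: the cue-reader is a few comparisons, whereas anything ‘source sensitive’ would have to track the biology generating the cues.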
The same goes for metacognitive capacities: Why metacognize brain biology when we need only hold our tongue at dinner, figure out what went wrong with the ambush, explain what happened to the elders, and so on? On any plausible empirical story, metacognition consists in an opportunistic array of heuristic systems possessing the access and capacity to solve problems in various specialized domains. The complexity of the brain all but assures as much. Given the intractability of the processes monitored, metacognitive consumers remain ‘source insensitive’—they solve absent any sensitivity to the underlying systems. As need-to-know consumers adapted to solving practical problems in ancestral contexts, we should expect that retasking those capacities to the general problem of ourselves would prove problematic. As indeed it has. Our metacognitive insensitivity, after all, extends to itself: we are all but oblivious to the source-insensitive, heuristic nature of metacognition.
And this provides biological grounds to predict the kinds of problems such retasking might generate; it provides an elegant, scientifically tractable way to understand a great number of the problems plaguing human self-knowledge.
We should expect metacognitive (and sociocognitive) application problems. Given that metacognition neglects the heuristic limits of metacognition, all novel applications of metacognitive capacities to new problem ecologies (such as those devised by the ancient Greeks) run the risk of misapplication. Imagine rebuilding an engine with invisible tools. Metacognitive neglect assures that trial-and-error provides our only means of sorting between felicitous and infelicitous applications.
We should expect incompatibility with source-sensitive modes of cognition. Source-insensitive cognitive systems are primed to solve via information ecologies that systematically neglect the actual systems responsible. We rely on robust correlations between the signal available and the future behaviour of the system requiring solution–‘clues’ some heuristic researchers call them. The ancestral integration of source-sensitive and source-insensitive cognitive modes (as in narrative, say, which combines intentional and causal cognition) assures at best specialized linkages. Beyond these points of contact, the modes will be incompatible given the specificity of the information consumed in source-insensitive systems.
We should expect to suffer illusions of sufficiency. Given the dependence of all cognitive systems on the sufficiency of upstream processing for downstream success, we should expect insensitivity to metacognitive insufficiency to result in presumptive sufficiency. Systems don’t need a second set of systems monitoring the sufficiency of every primary system in order to function: sufficiency is the default. Metacognitive capacities retasked to theoretical problems, we can presume, deploy as sufficient despite almost certainly being insufficient. This can be seen as a generalization of WYSIATI, or ‘what-you-see-is-all-there-is,’ the principle Daniel Kahneman uses to illustrate how certain heuristic mechanisms do not discriminate between sufficient and insufficient information. (A minimal code sketch of this sufficiency-default follows this series of expectations.)
We should expect to suffer illusions of simplicity (or identity effects). Given metacognitive insensitivity to its own insensitivity, metacognition remains blind to artifacts of that insensitivity as artifacts. The absence of distinction will be intuited as simplicity. Flicker-fusion as demonstrated in psychophysics almost certainly possesses cognitive and metacognitive analogues, instances where the lack of distinction reports as identity or simplicity. The history of science is replete with examples of mistaking artifacts of information poverty for properties of nature. The small was simple prior to the microscope and the discovery of endless subvisibilia. The heavens consisted of spheres.
We should expect to suffer illusions of free-floating efficacy. The ancestral integration of source-insensitive and source-sensitive cognition underwrites fetishism, the cognition of sources possessing no proximal sources. In his cognitive development research, Andrei Cimpian calls this the ‘inherence heuristic’: in ignorance of extrinsic factors, we impute an intrinsic efficacy to things in order to cognize/communicate local effects. We are hardwired to fetishize.
We should expect to suffer entrenched only-game-in-town effects. In countless contexts, ignorance of alternatives fools individuals into thinking their path necessary. This is why Kant, who had no inkling of the interpretive jungle to come, thought he had stumbled across a genuine synthetic a priori science. Given metacognitive insensitivity to its insensitivity, the biological parochialism of source-insensitive cognition is only manifest in applications. Once detected, neglect assures the distinctiveness of source-insensitive cognition will seem absolute, lending itself to reports of autonomy. So where Kant ran afoul of the only-game-in-town effect in declaring his discourse apodictic, he also ran afoul of a biologically entrenched version of the same effect in declaring cognition transcendental.
We should expect misfires will be systematic. Generally speaking, rules of thumb do not cease being rulish when misapplied. Heuristic breakdowns are generally systematic. Where the system isn’t crashed altogether, the consequences of mistakes will be structured and iterable. This predictability allows certain heuristic breakdowns to become valuable tools. The Pleistocene discovery that applying pigments to surfaces could cue the (cartoon) visual cognition of nearly anything exemplifies one particularly powerful instrumentalization of heuristic systematicity. Metacognition is no different than visual cognition in this regard: like visual heuristics, cognitive heuristics generate systematic ‘illusions’ admitting, in some cases, genuine instrumentalizations (things like ‘representations’ and functional analyses in empirical psychology), but typically generating only disputation otherwise.
We should expect to suffer performative interference-effects (breakdowns in ‘meta-irrelevance’). The intractability of the enabling axis of cognition, the inevitability of medial neglect, forces the system to presume its own cognitive sufficiency. As a result, cognition biomechanically depends on the ‘meta-irrelevance’ of its own systems; it requires that information pertaining to its functioning not be required to solve whatever problem is at hand. Nonhuman cognizers, for instance, are comparatively reliant on the sufficiency of their cognitive apparatus: they can’t, like us, raise a finger and say, ‘On second thought,’ or visit the doctor, or lay off the weed, or argue with their partner. Humans possess a plethora of hacks, heuristic ways to manage cognitive shortcomings. Nevertheless, the closer our metacognitive tools come to ongoing, enabling access—the this-very-moment-now of cognition—the more regularly they will crash, insofar as these too require meta-irrelevance.
We should expect chronic underdetermination. Metacognitive resources adapted to the solution of ancestral practical problems have no hope of solving for the nature of experience or cognition.
We should expect ontological confusion. As mentioned, cognition biomechanically depends on the ‘meta-irrelevance’ of its own systems; it requires that information pertaining to its functioning not be required to solve whatever problem is at hand. Metacognitive resources retasked to solve for these systems flounder, then begin systematically confusing artifacts of medial neglect for the dumbfounding explananda of cognition and experience. Missing dimensions are folded into neglect, and metacognition reports these insufficiencies as sufficient. Source insensitivity becomes source independence. Complexity becomes simplicity. Only a second ‘autonomous’ ontology will do.
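Returning to the sufficiency-default flagged above, here is the promised sketch (purely illustrative Python; every name and the toy ‘state’ are invented). A downstream consumer receives a fragment of the system’s actual state with no flag marking what was omitted, and so consumes the fragment as the whole, reporting absence of information as information of absence:

```python
def introspective_access(full_state: dict, accessible: set) -> dict:
    """Toy metacognitive channel: passes on only a fragment of the
    full state, with nothing marking what has been left out."""
    return {k: v for k, v in full_state.items() if k in accessible}

def downstream_report(report: dict) -> str:
    """WYSIATI consumer: no second system monitors sufficiency,
    so whatever arrives is treated as all there is."""
    if "cause" in report:
        return f"caused by {report['cause']}"
    # Neglect reported as a positive feature: absence of access
    # to causes is read as the absence of causes.
    return "uncaused, simple, spontaneous"

full_state = {"cause": "neural process X", "feeling": "pang of regret"}
report = introspective_access(full_state, accessible={"feeling"})

print(downstream_report(report))
# -> "uncaused, simple, spontaneous": insufficiency deploying as sufficient
```

The same two-function structure caricatures the identity effects above as well: wherever the channel lacks a distinction, the consumer reports simplicity.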
perhaps because I was largely spared from analytical philo or b/c I don’t have any spooky inclinations but I never felt the need to take too seriously what folks like Chalmers feel/intuit about such things, some folks are just stuck in their ruts and need to be left to it.
Agreed. I attended a talk given by Chalmers not too long ago, and he’s actually quite good when it comes to admitting the difficulties of his position. The problem is that the institution is geared to replicate these positions: you never know how many grad students you might deflect into more productive orbits.
good for him too little of that in the academy, some of us pushed back against his version of the extended-mind in conversations with Clark and he now mostly only pushes the stronger/Chalmers version in terms of ethics (as in it would be unethical to deprive a handicapped person of their enabling devices) perhaps these folks with ties to the experimental community are a bit more flexible/open-ended, Tony Chemero has come to a version of my own instrumentalist pragmatism that I think has real promise:
“Guide our action” for cognitivists is always going to have a semantic dimension: meaning it’s the intentional phenomena that require explanation. So unless he’s explaining intentionality, he’s simply changing the subject, and missing the whole point of symbolic thought/cognition–for the cognitivist.
His stuff is always great. I love the antirepresentationalism, the materialism of thought, but with dynamical systems approaches you’re going to resolve patterns systematically bound to what’s going on, but it’s always going to leave open the question of ‘what’s going on?’ It’s powerful, but it’s heuristic.
i’ll take powerful and heuristic
Kinda like stopping halfway up the mountain because you like the view, isn’t it? 😉
That said, I’m convinced machine learning will generate more and more disincentives to reductive approaches to science. It’ll just be so cheap picking needles out of haystacks that no one will bother mapping the straw.
heh, more like taking care of the tasks at hand while others head off up the mountains seeking thinner air. No doubt that basic research is on the way out but I’m with folks like Gary that AI (well, the all too human thinking behind it) is faltering
Exceptionalist. Like Pinker, he’s a representationalist. His ‘brute force’ refers to the exploration, training phase, and what he’s complaining about is the domain constraints this places on exploitation. “You can do a lot of things superficially” but we lack deep representations. This is the humanist conceit, the notion that comprehension is something special as opposed to the consequence of aggregating superficial guesses. Sparse data learning needs no ‘deeper, abstract, representational’ level, it simply needs strategic sensitivities to environmental ‘tells,’ data structures reliably connected to future behaviours.
Nice to see him talking about the retail implications of AI.
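For what it’s worth, here’s a toy sketch of ‘tells’ over depth (purely illustrative Python; the data and features are invented, and this is nobody’s actual system): a bare perceptron keyed to shallow surface features reliably connected to the behaviour being predicted, with no deeper representational level anywhere in sight:

```python
# Toy 'tells' learner: a perceptron over shallow surface features.
# Data and features are invented for illustration only.

def predict(weights, bias, tells):
    """Fire if the weighted surface cues clear threshold."""
    return sum(w * t for w, t in zip(weights, tells)) + bias > 0

def train(samples, epochs=20, lr=0.1):
    """Standard perceptron rule: nudge weights on each mistake."""
    weights, bias = [0.0] * len(samples[0][0]), 0.0
    for _ in range(epochs):
        for tells, label in samples:
            error = label - predict(weights, bias, tells)
            weights = [w + lr * error * t for w, t in zip(weights, tells)]
            bias += lr * error
    return weights, bias

# (surface cues) -> will-the-system-do-X, with no model of why
samples = [
    ([1.0, 0.0, 1.0], 1),
    ([0.0, 1.0, 0.0], 0),
    ([1.0, 1.0, 1.0], 1),
    ([0.0, 0.0, 0.0], 0),
]
weights, bias = train(samples)
print(predict(weights, bias, [1.0, 0.0, 0.0]))  # generalizes from cues alone
```

Nothing ‘comprehends’ anything here, yet aggregated superficial sensitivities suffice to predict, which is the whole point at issue.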
sure as I noted above I’m with the enactivists against the representationalists but not sure there is any real tension at the level of engineering between “deep” and “strategic sensitivities to environmental ‘tells,’” they are pretty jammed up at the moment.
Well, insofar as representations tilt the GOFAI way, and tells tilt toward machine learning, there are huge decisions to be made vis-à-vis research resources. All the data intensity Marcus is talking about as a liability is simply the way the process works. We have 3.8 billion years of information pounded into us, and it shows. Marcus thinks he can bootstrap that 3.8 billion years into systems psychologically, but the actual engineers have been using syntax and neural nets in concert all along. What’s he going to be able to add save more syntax? And what will it have to do with how we actually function (as opposed to how we break down into a certain functional analysis in certain (often artifactual) contexts)?
With Google’s AI designing AI, I think we’re going to begin to see how creepily far superficiality can take us. They’ll stack it and integrate it, until at a certain point the ancient conceit of ‘human depth’ will be impossible to maintain.
“…to relinquish their countless Square Two prejudices, and reconsider things from the standpoint of biology.”
If you can get them to reconsider things from the standpoint of biology the battle’s won. As I say all the time, cognition, respiration, digestion… Once you accept cognition as merely biological all the Chalmers/Floridi sorts of arguments go ‘poof.’
They really do. The problem is that short some alternative scheme for explaining intentional phenomena, popping those bubbles simply strands the theorist with nothing to say about their subject matter. This is why I think I’d have a much easier time bending academic ears if I weren’t an outsider. Anyone who’s tackled a different philosophical gestalt, like Kant or Wittgenstein, say, knows the amount of elbow grease and suspension of disbelief that’s required to understand a view on its own terms. The primary motivation for most readers is institutional.
I disagree, and I’m a biologist. Biology definitely gives one great sympathy for a lot of what Scott is saying- our cognition is heuristic and ecologically tailored. Asymptotic limits of information processing definitely exist.
But at best, that gets me to “the problem is unsolvable” and/or “we are utterly incompetent and have no fucking clue what’s going on”, but (crucially) not to “there is no problem, consciousness is an illusion”.
Changing my perspective to the third person, looking at myself stubbing my toe and watching all the little biological gears spring into action, does not eliminate that final, harrowing truth: pain *hurts*, and no amount of science/reductionism you throw at that fact makes it go away. Maybe we should be glad. If you and your pain don’t exist, then who cares if I waterboard you 83 times, and kick your head until your eye falls out?
I consider myself to be a ‘weak illusionist’ on Chalmers’s scheme. I believe consciousness exists (I have my EMF hunches, as you know), just not the way it advertises itself to metacognition. Consciousness as it appears is an illusion. The ‘hard problem of consciousness’ is so hard because it demands we explain this latter–that we demonstrate how the two lines in the Müller-Lyer illusion actually do differ in length. Pain is very real – within the heuristic economy of reportable states/situations. Pain is a very real form of engaging reality via reality avoidance.
As Bakker quotes Chalmers:
“insofar as consciousness seems nonphysical, this seeming itself needs to be explained.”
Does your consciousness seem nonphysical to you? If so, does that seeming convince you that your consciousness actually is nonphysical? I think the answer one chooses to the question of whether consciousness is physical or nonphysical determines where one looks for explanations. I think (just based on how much success physical explanations have had in other areas) consciousness is physical (to the extent consciousness exists), so that’s where I’d look if I were trying to explain consciousness.
You probably know more about this sort of thing:
https://en.wikipedia.org/wiki/Congenital_insensitivity_to_pain
than I do, but it seems to imply that pain as a phenomenal experience depends on pain as a biological event, so to speak. I’ve never experienced my own consciousness as nonphysical. Since I have never experienced the seeming I’ve never felt that need for explanation. Sometimes I wonder if the innate ability or disposition to perceive consciousness as nonphysical, or even as miraculous, is what separates the philosophers from us ordinary folk.
Michael, you could invert the argument and say if we did not have a sense of the physical inside of us we would not be mystified by the seemingly non-physical.
Does this just break down to faith in the end? There is no internal way to confirm any of this. Sure, there’s lots of evidence that suggests stuff about human cognition and the notion of consciousness. But I mean I dunno, maybe I’m an emanation of some kind of supernatural thing and I just don’t know it, but am meanwhile pitching a set of physical explanations? Reminds me of Deckard saying “Suspect? How can it not know what it is?”, when he is supposed to be a replicant himself.
Is it fair to exclusively argue facts on something that can only be taken on faith, really? It’s like some game of headbands where everyone can see what’s written on the card on your head while you can’t and you have to guess, but in the end everyone looks at the card after the game. No one just takes it on faith that what everyone else implied was on the card was actually on it. But here nobody can look at the card. There’s no internal way to confirm this stuff.
Awesome poster!
Will your car start in the morning? When you turn it on, how do you know you’re not powering slave collars on Alpha Centauri? Are you simply driving away on faith? Egad.
In other words, radical skeptical claims are cheap. And any turn in any theory is vulnerable to death by a thousand qualifications. The power of a theory lies in the explanatory whole.
It’s radical skepticism to say there is no internal way to confirm any of this? Doesn’t BBT say there is no great internal access?
If we were talking about a boardgame instead, it’s possible to look at the whole boardgame in order to argue any claims about it. The whole boardgame is accessible to the perception/processing of the person you’re putting the claim to. Bishops move on diagonals – they can test this claim. Here with claims about consciousness, half the boardgame (or more) is outside the access of any viewer. They can’t confirm or deny claims about the hidden half. They’d have to run off of faith about any claims as to that half. Sure, the visible half of the board can be drawn upon to suggest what is on the invisible half, but that’s suggestion.
Then again I’m pitching a faith argument there in saying there’s a lack of internal access – it’s just I thought BBT said the same thing. It’s not hard for me to be wrong, so maybe I’m wrong on that.
Okay, I wrote up a long version – it has to be taken as a cartoon and a faith argument though. I.e., it has to lack explanatory power in order to have any explanatory power. At worst it’s an outline for a setting where all the beings have hollow minds as a fact of this setting, but can only consider they have a hollow mind if the idea is pitched as a fantasy to them. If it’s pitched as a fact to them, they are stuck in a loop, instantly rejecting it as non-factual, in part because it’s a fact of their setting. Here’s the long version.
brilliant as usual. I’m just starting to quibble around the “ancient environments” thing, probably inherited from evo-psych. reading “The 10,000 Year Explosion”, it seems our brains have been evolutionarily keeping up at least marginally with the changing of environs, so that maybe we’re not just ‘retooling’ meta-cognitive resources, but actually using them on problems they are at least partially adapted to. not sure where this line of reasoning would take us, but possibly works as a reason why we can at least “think biologically” now (as opposed to antiquity)
Probably a main issue is how much economic gain is there to be had by advertising to people their cognitive resources are suitable for problems in new technological situations, when they really aren’t at all?
I agree, so far as you look at civilization-specific adaptations in kludgy, granular terms. But short culling philosophical reproducers according to the theoretical accuracy of their intuitions, I think we’re safe when it comes to philosophy!
Isn’t it all Wittgenstein’s fly trapped in the bottle? The world of facts, beliefs, cognition are all there because the fly HAS to trap itself in the bottle? All that is left is the theorizing about how we do it, but if our nature is all biology, how nature does it is still a very open question, which Chalmers keeps open. BTW the paper is the basis for Chalmers’s TSC2018 talk next month in Arizona.
The panpsychism talk below is interesting. To apply a Bakkerism, “They reach the basement of the reductive house and somehow pass through the concrete floor”.
http://meaningoflife.tv/videos/39927
Well, I think heuristic neglect provides a very parsimonious way to understand the bottle!
https://meanjin.com.au/essays/the-last-days-of-reality/