It Is What It Is (Until Notified Otherwise)
by rsbakker
The thing to always remember when one finds oneself in the middle of some historically intractable philosophical debate is that path-dependency is somehow to blame. This is simply to say that the problem is historical in that squabbles regarding theoretical natures always arise from some background of relatively problem-free practical application. At some point, some turn is taken and things that seem trivially obvious suddenly seem stupendously mysterious. St. Augustine, in addition to giving us one of the most famous quotes in philosophy, gives us a wonderful example of this in The Confessions when he writes:
“What, then, is time? If no one asks of me, I know; if I wish to explain to him who asks, I know not.” XI, XIV, 17
But the rather sobering fact is that this is the case with a great number of the second order questions we can pose. What is mathematics? What’s a rule? What’s meaning? What’s cause? And of course, what is phenomenal consciousness?
So what is it with second order interrogations? Why is ‘time talk’ so easily and effortlessly used even though we find ourselves gobsmacked each and every time someone asks what time qua time is? It seems pretty clear that either we lack the information required or the capacity required or some nefarious combination of both. If framing the problem like this sounds like a no-brainer, that’s because it is a no-brainer. The remarkable thing lies in the way it recasts the issue at stake, because as it turns out, the question of the information and capacity we have available is a biological one, and this provides a cognitive ecological means of tackling the problem. Since practical solving for time (‘timing’) is obviously central to survival, it makes sense that we would possess the information access and cognitive capacity required to solve a wide variety of timing issues. Given that theoretical solving for time (qua-time) isn’t central to survival (no species does it and only our species attempts it), it makes sense that we wouldn’t possess the information access and cognitive capacity required, that we would suffer time-qua-time blindness.
From a cognitive ecological perspective, in other words, St. Augustine’s perplexity should come as no surprise at all. Of course solving time-qua-time is mystifying: we evolved the access and capacity required for solving the practical problems of timing, and not the theoretical problem of time. Now I admit if the cognitive ecological approach ground to a halt here it wouldn’t be terribly illuminating, but there’s quite a bit more to be said: it turns out cognitive ecology is highly suggestive of the different ways we might expect our attempts to solve things like time-qua-time to break down.
What would it be like to reach the problem-solving limits of some practically oriented problem-solving mode? Well, we should expect our assumptions/intuitions to stop delivering answers. My daughter is presently going through a ‘cootie-catcher’ phase and is continually instructing me to ask questions, then upbraiding me when my queries don’t fit the matrix of possible ‘answers’ provided by the cootie-catcher (yes, no, and versions of maybe). Sometimes she catches these ill-posed questions immediately, and sometimes she doesn’t catch them until the cootie-catcher generates a nonsensical response.
Now imagine your child never revealed their cootie-catcher to you: you asked questions, then picked colours or numbers or animals, and it turned out some were intelligibly answered, and some were not. Very quickly you would suss out the kinds of questions that could be asked, and the kinds that could not. Now imagine unbeknownst to you that your child replaced their cootie-catcher with a computer running two separately tasked, distributed AlphaGo type programs, the first trained to provide well-formed (if not necessarily true) answers to basic questions regarding causality and nothing else, the second trained to provide well-formed (if not necessarily true) answers to basic questions regarding goals and intent. What kind of conclusions would you draw, or more importantly, assume? Over time you would come to suss out the questions generating ill-formed answers versus questions generating well-formed ones. But you would have no way of knowing that two functionally distinct systems were responsible for the well-formed answers: causal and purposive modes would seem the product of one cognitive system. In the absence of distinctions you would presume unity.
Think of the difference between Plato likening memory to an aviary in the Theaetetus and the fractionate, generative memory we now know to be the case. The fact that Plato assumed as much, unity and retrieval, shouts something incredibly important once placed in a cognitive ecological context. What it suggests is that purely deliberative attempts to solve second-order problems, to ask questions like what is memory-qua-memory, will almost certainly run afoul the problem of default identity, the identification that comes about for the want of distinctions. To return to our cootie-catcher example, it’s not simply that we would report unity regarding our child’s two AlphaGo type programs the way Plato did with memory, it’s that information involving its dual structure would play no role in our cognitive economy whatsoever. Unity, you could say, is the assumption built into the system. (And this applies as much to AI as it does to human beings. The first ‘driverless fatality’ occurred because the driver’s Tesla Model S failed to distinguish a truck trailer from the sky.)
Default identity, I think, can play havoc with even the most careful philosophical interrogations—such as the one Eric Schwitzgebel gives in the course of rebutting Keith Frankish, both on his blog and in his response in The Journal of Consciousness Studies, “Phenomenal Consciousness, Defined and Defended as Innocently as I Can Manage.”
According to Eric, “Illusionism as a Theory of Consciousness” presents the phenomenal realist with a dilemma: either they commit to puzzling ontological features such as simple, ineffable, intrinsic, or so on, or they commit to explaining those features away, which is to say, some variety of Illusionism. Since Eric both believes that phenomenal consciousness is real, and that the extraordinary properties attributed to it are likely not real, he proposes a third way, a formulation of phenomenal experience that neither inflates it into something untenable, nor deflates into something that is plainly not phenomenal experience. “The best way to meet Frankish’s challenge,” he writes, “is to provide something that the field of consciousness studies in any case needs: a clear definition of phenomenal consciousness, a definition that targets a phenomenon that is both substantively interesting in the way that phenomenal consciousness is widely thought to be interesting but also innocent of problematic metaphysical and epistemological assumptions” (2).
It’s worth noting the upshot of what Eric is saying here: the scientific study of phenomenal consciousness cannot, as yet, even formulate its primary explanandum. The trick, as he sees it, is to find some conceptual way to avoid the baggage, while holding onto some semblance of a wardrobe. And his solution, you might say, is to wear as many outfits as he possibly can. He proposes that definition by example is uniquely suited to anchor an ontologically and epistemologically innocent concept of phenomenal consciousness.
He has but one caveat: any adequate formulation of phenomenal consciousness has to account or allow for what Eric terms its ‘wonderfulness’:
If the reduction of phenomenal consciousness to something physical or functional or “easy” is possible, it should take some work. It should not be obviously so, just on the surface of the definition. We should be able to wonder how consciousness could possibly arise from functional mechanisms and matter in motion. Call this the wonderfulness condition. 3
He concedes the traditional properties ascribed to phenomenal experience outrun naturalistic credulity, but the feature of beggaring belief remains to be explained. This is the part of Eric’s position to keep an eye on because it means his key defense against eliminativism is abductive. Whatever phenomenal consciousness is, it seems safe to say it is not something easily solved. Any account purporting to solve phenomenal consciousness that leaves the wonderfulness condition unsatisfied is likely missing phenomenal consciousness altogether.
And so Eric provides a list of positive examples including sensory and somatic experiences, conscious imagery, emotional experience, thinking and desiring, dreams, and even other people, insofar as we continually attribute these very same kinds of experiences to them. By way of negative examples, he mentions a variety of intimate, yet obviously not phenomenally conscious processes, such as fingernail growth, intestinal lipid absorption, and so on.
He writes:
Phenomenal consciousness is the most folk psychologically obvious thing or feature that the positive examples possess and that the negative examples lack. I do think that there is one very obvious feature that ties together sensory experiences, imagery experiences, emotional experiences, dream experiences, and conscious thoughts and desires. They’re all conscious experiences. None of the other stuff is experienced (lipid absorption, the tactile smoothness of your desk, etc.). I hope it feels to you like I have belabored an obvious point. Indeed, my argumentative strategy relies upon this obviousness. 8
Intuition, the apparent obviousness of his examples, is what he stresses here. The beauty of definition by example is that offering instances of the phenomenon at issue allows you to remain agnostic regarding the properties possessed by that phenomenon. It actually seems to deliver the very metaphysical and epistemological innocence Eric needs to stave off the charge of inflation. It really does allow him to ditch the baggage and travel wearing all his clothes, or so it seems.
Meanwhile the wonderfulness condition, though determining the phenomenon, does so indirectly, via the obvious impact it has on human attempts to cognize experience-qua-experience. Whatever phenomenal consciousness is, contemplating it provokes wonder.
And so the argument is laid out, as spare and elegant as all of Eric’s arguments. It’s pretty clear these are examples of whatever it is we call phenomenal consciousness. Of course, there’s something about them that we find downright stupefying. Surely, he asks, we can be phenomenal realists in this austere respect?
For all its intuitive appeal, the problem with this approach is that it almost certainly presumes a simplicity that human cognition does not possess. Conceptually, we can bring this out with a single question: Is phenomenal consciousness the most folk psychologically obvious thing or feature the examples share, or is it obvious in some other respect? Eric’s claim amounts to saying the recognition of phenomenal consciousness as such belongs to everyday cognition. But is this the case? Typically, recognition of experience-qua-experience is thought to be an intellectual achievement of some kind, a first step toward the ‘philosophical’ or ‘reflective’ or ‘contemplative’ attitude. Shouldn’t we say, rather, that phenomenal consciousness is the most obvious thing or feature these examples share upon reflection, which is to say, philosophically?
This alternative need only be raised to drag Eric’s formulation back into the mire of conceptual definition, I think. But on a cognitive ecological picture, we can actually reframe this conceptual problematization in path-dependent terms, and so more forcefully insist on a distinction of modes and therefore a distinction in problem-solving ecologies. Recall Augustine, how we understand time without difficulty until we ask the question of time qua time. Our cognitive systems have no serious difficulty with timing, but then abruptly break down when we ask the question of time as such. Even though we had the information and capacity required to solve any number of practical issues involving time, as soon as we pose the question of time-qua-time that fluency evaporates and we find ourselves out-and-out mystified.
Eric’s definition by example, as an explicitly conceptual exercise, clearly involves something more than everyday applications of experience talk. The answer intuitively feels as natural as can be—there must be some property X these instances share or exclude, certainly!—but the question strikes most everyone as exceptional, at least until they grow accustomed to it. Raising the question, as Augustine shows us, is precisely where the problem begins, and as my daughter would be quick to remind Eric, cootie-catchers only work if we ask the right question. Human cognition is fractionate and heuristic, after all.
All organisms are immersed in potential information, difference-making differences that could spell the difference between life and death. Given the difficulties involved in the isolation of causes, they often settle for correlations, cues reliably linked to the systems requiring solution. In fact, correlations are the only source of information organisms have, evolved and learned sensitivities to effects systematically correlated to those environmental systems relevant to reproduction. Human beings, like all other living organisms, are shallow information consumers adapted to deep information environments, sensory cherry pickers, bent on deriving as much behaviour from as little information as possible.
We only have access to so much, and we only have so much capacity to derive behaviour from that access (behaviour which in turn leverages capacity). Since the kinds of problems we face outrun access, and since those problems and the resources required to solve them are wildly disparate, not all access is equal.
Information access, I think, divides cognition into two distinct forms, two different families of ‘AlphaGo type’ programs. On the one hand we have what might be called source sensitive cognition, where physical (high-dimensional) constraints can be identified, and on the other we have source insensitive cognition, where they cannot.
Since every cause is an effect, and every effect is a cause, explaining natural phenomena as effects always raises the question of further causes. Source sensitive cognition turns on access to the causal world, and to this extent, remains perpetually open to that world, and thus, to the prospect of more information. This is why it possesses such wide environmental applicability: there are always more sources to be investigated. These may not be immediately obvious to us—think of visible versus invisible light—but they exist nonetheless, which is why once the application of source sensitivity became scientifically institutionalized, hunting sources became a matter of overcoming our ancestral sensory bottlenecks.
Since every natural phenomenon has natural constraints, explaining natural phenomena in terms of something other than natural constraints entails neglect of natural constraints. Source insensitive cognition is always a form of heuristic cognition, a system adapted to the solution of systems absent access to what actually makes them tick. Source insensitive cognition exploits cues, accessible information invisibly yet sufficiently correlated to the systems requiring solution to reliably solve those systems. As the distillation of specific, high-impact ancestral problems, source insensitive cognition is domain-specific, a way to cope with systems that cannot be effectively cognized any other way.
(AI approaches turning on recurrent neural networks provide an excellent ex situ example of the necessity, the efficacy, and the limitations of source insensitive (cue correlative) cognition. Andrei Cimpian’s lab and the work of Klaus Fiedler (as well as that of the Adaptive Behaviour and Cognition Research Group more generally) are providing, I think, an evolving empirical picture of source insensitive cognition in humans, albeit, absent the global theoretical framework provided here.)
So what are we to make of Eric’s attempt to innocently (folk psychologically) pose the question of experience-qua-experience in light of this rudimentary distinction?
If one takes the brain’s ability to cognize its own cognitive functions as a condition of ‘experience talk,’ it becomes very clear very quickly that experience talk belongs to a source insensitive cognitive regime, a system adapted to exploit correlations between the information consumed (cues) and the vastly complicated systems (oneself and others) requiring solution. This suggests that Eric’s definition by example is anything but theoretically innocent, assuming, as it does, that our source insensitive, experience-talk systems pick out something in the domain of source sensitive cognition… something ‘real.’ Defining by example cues our experience-talk system, which produces indubitable instances of recognition. Phenomenal consciousness becomes, apparently, an indubitable something. Given our inability to distinguish between our own cognitive systems (given ‘cognition-qua-cognition blindness’), default identity prevails; suddenly it seems obvious that phenomenal experience somehow, minimally, belongs to the order of the real. And once again, we find ourselves attempting to square ‘posits’ belonging to sourceless modes of cognition with a world where everything has a source.
We can now see how the wonderfulness condition, which Eric sees working in concert with his definition by example, actually cuts against it. Experience-qua-experience provokes wonder precisely because it delivers us to crash space, the point where heuristic misapplication leads our intuitions astray. Simply by asking this question, we have taken a component from a source insensitive cognitive system relying (qua heuristic) on strategic correlations to the systems requiring solution to solve, and asked a completely different, source sensitive system to make sense of it. Philosophical reflection is a ‘cultural achievement’ precisely because it involves using our brains in new ways, applying ancient tools to novel questions. Doing so, however, inevitably leaves us stumbling around in a darkness we cannot see, running afoul confounds we have no way of intuiting, simply because they impacted our ancestors not at all. Small wonder ‘phenomenal consciousness’ provokes wonder. How could the most obvious thing possess so few degrees of cognitive freedom? How could light itself deliver us to darkness?
I appreciate the counterintuitive nature of the view I’m presenting here, the way it requires seeing conceptual moves in terms of physical path-dependencies, as belonging to a heuristic gearbox where our numbness to the grinding perpetually convinces us that this time, at long last, we have slipped from neutral into drive. But recall the case of memory, the way blindness to its neurocognitive intricacies led Plato to assume it simple. Only now can we run our (exceedingly dim) metacognitive impressions of memory through the gamut of what we know, see it as a garden of forking paths. The suggestion here is that posing the question of experience-qua-experience poses a crucial fork in the consciousness studies road, the point where a component of source-insensitive cognition, ‘experience,’ finds itself dragged into the court of source sensitivity, and productive inquiry grinds to a general halt.
When I employ experience talk in a practical, first-order way, I have a great deal of confidence in that talk. But when I employ experience talk in a theoretical, second-order way, I have next to no confidence in that talk. Why would I? Why would anyone, given the near-certainty of chronic underdetermination? Even more, I can see no way (short of magic) for our brain to have anything other than radically opportunistic and heuristic contact with its own functions. Either specialized, simple heuristics comprise deliberative metacognition or deliberative metacognition does not exist. In other words, I see no way of avoiding experience-qua-experience blindness.
This flat out means that on a high dimensional view (one open to as much relevant physical information as possible), there is just no such thing as ‘phenomenal consciousness.’ I am forced to rely on experience related talk in theoretical contexts all the time, as do scientists in countless lines of research. There is no doubt whatsoever that experience-talk draws water from far more than just ‘folk psychological’ wells. But this just means that various forms of heuristic cognition can be adapted to various experimentally regimented cognitive ecologies—experience-talk can be operationalized. It would be strange if this weren’t the case, and it does nothing to alleviate the fact that solving for experience-qua-experience delivers us, time and again, to crash space.
One does not have to believe in the reality of phenomenal consciousness to believe in the reality of the systems employing experience-talk. As we are beginning to discover, the puzzle has never been one of figuring out what phenomenal experiences could possibly be, but rather figuring out the biological systems that employ them. The greater our understanding of this, the greater our understanding of the confounds characterizing that perennial crash space we call philosophy.
my family members have uttered the word experience (though almost never the word ‘consciousness’) in ways quite far removed from eric’s idea of experience. i think metzinger hits on this when he says that phenomenal experience is just transparent to most people. the idea that experience is a medium or interface likely doesnt occur to people in general unless they are steeped in philosophy, even if people might have some implicit idioms that deal with the intentionality of experience (I experiencED so and so)
But I would argue Thomas runs afoul his representationalism here: ‘transparency,’ which seems miraculous, is an inevitable artifact of medial neglect. To say ‘experience is transparent’ is to already run afoul crash space, to misunderstand experience as a thing like a window… representational.
All this provides a very sharp, painfully parsimonious way to naturalize the ‘metaphysics of presence’ in the continental tradition as well, I think. In a very strange sense, Heidegger got the problem right in perhaps the most deceptive way imaginable.
I don’t think Metzinger is falling into a representational trap here. I think he’s making an observation that most people do not immediately apprehend the Hard Problem. Not because they are p-zombies, or unintelligent, or anything like that, but because absent the correct meta-cognitive *conditioning* the Hard Problem just doesn’t exist. They are so embedded in default intentional thinking that intentionality/phenomenology being “odd” doesn’t even begin to occur to them; it’s transparent.
The probability of someone falling into our shared predicament is likely due to repeated exposure and pervasive engagement with the scientific (disenchanted) world view.
But then I think this description runs afoul representational thinking here, at least toward the end. It’s not that their experience is ‘transparent’ for the philosophically naïve, but that they neglect the enabling dimension of cognition altogether. Experience would have to ‘be’ something to be ‘transparent,’ and experience, as reflection has it, is not anything at all on a high-dimensional view. As soon as you turn it into a property bearing entity (in this case transparency) you’ve tripped into crash space.
“It’s not that their experience is ‘transparent’ for the philosophically naïve, but that they neglect the enabling dimension of cognition altogether. Experience would have to ‘be’ something to be ‘transparent,’ and experience, as reflection has it, is not anything at all on a high-dimensional view. ”
yep, that’s good nutshelling of these matters, as for Eric’s comments on what is supposedly common I think Wittgenstein on family resemblances is a helpful antidote.
https://www.youtube.com/user/FQXi/videos
Yeah, I think we’re saying roughly the same and I’m just treating Metzinger’s view a bit more charitably. I mean, I’m still not sure if I buy the “Hard Problem is just crash space” upshot of BBT or if I’ve completely understood how to avoid representationalist problems in my own views, but perhaps we can agree on something:
people without the right academic conditioning do not tend to trip into that particular crash space (if it is crash space) regardless of intellect or neurotypicality.
Schwitzgebel in this post reminds me of Dennett from three posts ago. I think that, as you’ve discussed elsewhere, we have all observed ourselves. We think we have/are a self because we believe we perceive this self directly. If a theorist has an observation, he will try to construct a theory that explains his observation, so both men are trying to construct a theory that explains the self-perceived self. In these cases the problem arises when we try to make the theory we’re constructing consistent with the observed self and also consistent with other theories which have successfully explained other observations. Both men have rejected supernatural theories of the self because supernatural theories are inconsistent with other theories (evolution, thermodynamics etc.) which have successfully explained other observations.
The problem they run in to, as it seems to me, is that once they remove the supernatural content from their theories of personhood the theories seem not to explain anything. They seem to aim more at ‘truthiness’ in the sense of appealing to pre-existing intuitions than at making a coherent argument. It seems to work much the same way that essentialist arguments about why girls are bad at math appeal to pre-existing gender stereotypes. I think the other option they don’t consider is the one you’ve recommended elsewhere as well: recheck your observational apparatus.
Arno Penzias and Robert Wilson received the Nobel Prize for discovering the Cosmic Microwave Background. In fairness, they were just trying to troubleshoot noise in a microwave communication system, but before they could claim to have provided the crucial observation that made the big bang the accepted version of the early history of the universe, they had to painstakingly eliminate every possible source of terrestrial or near Earth interference. I think of Blind Brain Theory as in part an attempt to check the observational apparatus by which Schwitzgebel and Dennett (and others from Jesus to Freud) perceived the self that they had to incorporate into their theories. Given that the observed self is so hard to reconcile with other theories that have far better predictive track records (evolution, thermodynamics, the big bang etc.) and the theories that attempt to incorporate that observation (Freudian Psychoanalysis, Christianity etc.) are so hard to reconcile with each other, it seems perfectly reasonable that someone would want to recheck our antenna. I think a Penzias and Wilson type recheck of the apparatus that we used to make the observations that intentional philosophy is trying to incorporate into a theory is due, if not overdue.
I agree with you on the ‘check the antenna’ maxim, but if I am right on the picture I’m sketching, then we’re talking about something between outright anosognosia and good old fashioned oversight. Default identity falls out of the machinery, so the miracle is that we’re able to catch glimpses of the path-dependencies underwriting conceptualization at all. It was easy for Penzias and Wilson to check the antenna for pigeon poop because it was an existing problem for them. If they got these results via ‘intuition,’ if they didn’t even know what an antenna was, they could very well stamp their feet and declare their findings a priori. Both Eric and Dennett represent, in different ways, two of the finest antenna checking minds in philosophy today. It’s also worth noting that if I am right about all this, then I am almost certainly failing to check certain antennas as well.
Interesting discussion, Scott! I like your analogy to memory. Let’s run with it a bit. “Memory” is such a multi-faceted concept even folk-psychologically, that I’d like to narrow it a bit for clarity: memory for facts when asked quiz-type questions.
So if someone asks, “What was your address in Berkeley?” and “Who was U.S. President in 1970?” and “When was the last time you ate chocolate?” I think it’s clear that a diversity of processes comes together. It’s not like going to look for a bird in the aviary or checking an impression left on a sheet of wax. We’re “blind”, as you say, to the underlying processes that come together in some folk-psychologically incomprehensible way to produce the right answers to those questions.
And yet, there’s also something that all those processes have in common, I think. We can use the folk psychological label “memory” to capture what they have in common — or to be clearer we might jargonize it a bit. There’s something that all those events of successfully answering questions have in common, which we can tag with that label, even if we don’t know how they arise or have some goofy theory about how they arise. So why not say the same about conscious experiences? And if we can say the same about conscious experiences, then there’s no need to be eliminativist, is there?
Shifting example, the problem is with the *theories* of time, not with the reality of time.
To be clear, what the “memories” have in common might be a relational thing or a certain role in social interaction or in constructing a life — it needn’t be a single isolatable brain process.
I thought you might like the memory analogy! With time you have physics, which leads many to suspect that subjective time, the very time Augustine goes on to theorize, is illusory. With memory, mapping memory-talk to sources just turns out to be empirically feasible. It’s an easy problem. Instances of timing have time in common, and instances of memory have memory in common simply because these things can be sourced. There are facts of the matter, which is precisely why these discourses eventually escaped crash space to the extent they have. It’s also the reason neither of these phenomena require definition by example to bootstrap workable formulations. This just happens to not be the case with phenomenal experience, and for the very reasons I give in the post. The crash space we trip into when we pluck it out of its practical contexts is perpetual. Thus the abiding nature of the apparent wonderfulness, the reason it seems criterial of experience, as opposed to say, memory.
Think of money. Say we wanted to find some epistemically and ontologically innocent way of defining the value of paper money. You can list things, all the actions involving monetary exchange, and claim that ‘monetary value’ is the shared property each of these examples evinces, and for those entirely ignorant of economics, this would seem to be obviously the case: these notes possess some property impelling exchange. The problem here is that ‘value’ is not a real property paper money possesses, it’s just a really easy way to think about something horrifically complicated.
Now we *can* do definition by example for “money” of course. Maybe we don’t need to — or maybe we still do, because it turns out that a rigorous analytic definition is elusive (cf. Wittgenstein on games). But I think you’re right that “wonderfulness” is a difference. There would be nothing dissatisfying about an analytic definition of money if it seemed to capture all the intuitive cases, whereas in our current state of knowledge there would necessarily be something dissatisfying or at least controversial in any analytic (functional, physiological, whatever) definition of “phenomenal consciousness” that is strictly physicalistic. Such a definition would settle, as a matter of definition, whether ants are conscious (given that we know enough about their brains and behavior). It would settle, as a matter of definition, that there could be no consciousness after your body dies (which I take to be empirically unlikely but not definitionally impossible). Those questions don’t seem preposterous in the same way that it would be preposterous to think that there was invisible money that nobody knew about.
So… is the wonderfulness condition objectionable because of this? I tried to make it weak. It is explicitly *not* a claim that consciousness proves irreducible, mysterious, or nonphysical in the long run — just that it has the property of not being *obviously* reducible, unmysterious, and physical in the short run.
With money of course, the trick has to do with intrinsic value. My point is probably the same as Keith’s, that the connection between ‘phenomenal consciousness’ and your examples is underdetermined precisely because it is ontologically committed to something. I’m proposing a biological background to this, claiming that your intuitions are bound to lead you astray because experience cannot be anything high-dimensionally real because it has to be radically heuristic. You can think of it along the lines of Cimpian’s inherence heuristic, only belonging to a system of such devices collectively orienting us toward ourselves and others in typically advantageous ways. Seen this way, it just doesn’t make sense to say it’s ‘real.’ But you can see what makes its appearance wonderful: adopting the philosophical attitude (which amounts to applying idiosyncratically trained metacognitive resources in an ancestrally unprecedented way) immediately makes ‘experience’ a component of a far different cognitive economy without signalling as much in any way. Because we implicitly assume cognition comprises one big happy family, the resulting cognitive illusions strike us as astonishing properties, even so minimally determined as ‘phenomenality.’
I actually saw your argument working down lines parallel to Kriegel’s argument for intentionality in this regard: taking a minimalist approach (introspective smells), he blocks eliminativism abductively, the basic baby-with-the-bathwater tack. I think this argument is a genuine dialectical show-stopper (for brands of eliminativism other than my own!). So I saw the wonderfulness condition as your way of working the abductive challenge into your conceptualization-by-exemplification strategy, and I think I can meet that challenge, and raise you an explanation for why your argument remains trapped in conceptual crash space! Given the conceptual slipperiness of these matters, virtues like explanatory power and parsimony are our only guide, I think.
I’ve been thinking about this topic a lot lately: have you checked out “Real Systems” on Dennett’s “Real Patterns” from a few weeks back? There I provide a more thorough argument against ‘mild realism’ regarding beliefs.
“…in our current state of knowledge there would necessarily be something dissatisfying or at least controversial in any analytic (functional, physiological, whatever) definition of “phenomenal consciousness” that is strictly physicalistic.”
Whenever I read something like this I rack my brains trying to think of a way some aspect of ‘phenomenal consciousness’ could be anything other than ‘physicalistic’ without being supernatural. My own sense of the matter is that mentation is something brains do in the same straightforwardly physical way that digestion is something stomachs do and respiration is something lungs do. As I meant to suggest in my comment about the cosmic background, if you perceive something about phenomenal consciousness that seems to require non-physicalistic explanation, it’s possible there is something amiss in your perceptual apparatus. Of course you prefaced your remark with the ‘in our current state of knowledge’ caveat, and I guess we don’t know enough about how brains function to definitively rule out non-physicalistic explanation. Still, the high level of correlation between damage to brains (Alzheimer’s, strokes, brain tumors, blunt force trauma, etc.) and damage to phenomenal consciousness suggests that physicalistic explanation is the place to invest your research dollars.
Back in town after an exhausting trip. I love “Real Patterns”! It will be interesting to see how you push against the idea!
The beginning of AI versus AI conflict:
http://www.forbes.com/sites/thomasbrewster/2016/12/20/methbot-biggest-ad-fraud-busted/#424b997a4ca8
Which makes me wonder what happens when the fraudware no longer needs humans, but adapts on its own… Excellent article.
Just because he mentioned you by name:
http://www.consciousentities.com/2016/12/beyond-words/
More on the clever machine front:
http://www.theverge.com/2016/12/20/14022958/ai-image-manipulation-creation-fakes-audio-video
on my to-do list:
https://www.academia.edu/30540358/Pragmatism_Without_Representationalism
“If the reduction of phenomenal consciousness to something physical or functional or “easy” is possible, it should take some work. It should not be obviously so, just on the surface of the definition. We should be able to wonder how consciousness could possibly arise from functional mechanisms and matter in motion. Call this the wonderfulness condition.”
Instead of ‘wonderfulness’ you could call this the ‘anti-duh’ condition. If we ever come up with an adequate physical explanation for phenomenal consciousness that explanation should not make us slap our foreheads and say ‘duh.’ It should not make us feel foolish for not having seen it before. I think the problem with either wonderfulness or anti-duh is that any adequate purely physical explanation of consciousness is going to make anybody who bet their chips on supernatural or transcendental explanations feel a fool or deny the legitimacy of the physical explanation.
what’s the deal with these folks?
https://www.theguardian.com/technology/audio/2016/dec/23/constructed-consciousness-are-we-living-in-computer-simulation-tech-podcast
Off-topic, but Overlook have revealed the cover art for THE UNHOLY CONSULT:
http://thewertzone.blogspot.co.uk/2016/12/the-unholy-consult-cover-art-revealed.html
They also have a publication date on Amazon and in their 2017 catalogue: 4 July 2017.
There are accepted scientific studies where anxiety, addiction and depression are shown to be epigenetically produced, inheritable phenomena. Epigenetics involves emergent phenomena like thoughts making changes to gene expression in the chromosomes, but not in the DNA sequence itself. How far can thoughts, for instance, change physical brain structures? Think of the effect anxiety, addiction and depression can have on the brain. The hard constraint for all possible changes is the DNA, but how the genes in the chromosomes are expressed is affected by epigenetic emergent phenomena like thoughts.

Thoughts are a phenomenon of something being greater than the sum of its parts: place music alongside a video and this can make a dramatic new phenomenon. It goes even more basic than that: the dimensions, the colours, etc. are all placed alongside each other, and what emerges out of these is ‘greater’. What they all are is DNA dependent. What breaks this constraint is us being Beings-within-time. Time is Entropy, the highly original constructive and/or destructive state the universe has always gone through and will always go through, emergent phenomenon after emergent phenomenon.

There comes a point where reflective phenomena emerge, beings who are aware of the changeable emergent nature of the universe; the ramifications of this include a high chance of said reflective beings going extinct, though not before they make a great many things around them cease to exist. The originality of the Entropy within the universe is unbound. We got lucky and the Butterfly effect created our universe, where it could easily have gone down many other routes. And perhaps we aren’t that lucky: are there even luckier universes out there, with even more beneficial physics, chemistry and biology? Perhaps some science we aren’t aware of? The entropic principle opens the universe up to being changeable, so in the long term all universes are equal.