Error Consciousness (Part One): The Smell of Experience
by rsbakker
Aphorism of the Day: Are you giving me a ‘just so’ story here? Saying that introspection, despite all the structural and developmental constraints it faces, gets exactly the information it needs to cognize consciousness as it is? Even without the growing mountain of contrary empirical data, this strikes me as implausible. Or are you giving me a ‘just enough’ story? Saying that introspection gets enough of the information it needs to cognize what consciousness is, more or less? I have a not enough story, and an extreme one. I think we are all but blind, that introspection is nothing but a keyhole glimpse that only seems as wide as the sky because it lacks any information regarding the lock and door. I’m saying that we attribute subjectivity to ourselves as well as to others, not because we actually have subjectivity, but because it’s the best we can manage given the fragmentary information we’ve got.
.
Perplexities of Consciousness is unlike any philosophical text on consciousness you are apt to read, probably because Eric Schwitzgebel is unlike any philosopher of mind you are apt to encounter. In addition to teaching philosophy at UC Riverside, he’s both an avid SF fan and a long-time gamer. He also runs The Splintered Mind, a blog devoted to issues in consciousness studies, cognitive psychology, and experimental ethics.
Did I mention he was also a skeptic?
Perplexities of Consciousness is pretty much unique in its stubborn refusal to provide any positive account of consciousness. Schwitzgebel’s goal, rather, is to turn an entire philosophical tradition on its head: the notion that our conscious experience is the one thing we simply can’t be wrong about. He advances what might be called an Introspective Incompetence Thesis, the claim that, contrary to appearances, introspection is anything but the model of cognitive reliability it so often seems:
“Why did the scientific study of the mind begin with the study of conscious experience? And why, despite that early start, have we made so little progress? The two questions can be answered together if we are victims of an epistemic illusion–if, though the stream of experience seems readily available, though it seems like low-hanging fruit for first science, in fact we are much better equipped to learn about the outside world.” (159)
What Schwitzgebel essentially shows is that when it comes to reports of inner experience, consistent consensus is really, really hard to find. Consider the dated assumption that we dream in black and white: Schwitzgebel shows–quite convincingly, I think–that this particular conceit (once held by specialists and nonspecialists alike) lasted only as long as the cultural predominance of black and white movies. As preposterous as it sounds, there’s a good chance that questions even as rudimentary as this lie beyond our ability to decisively answer.
In lieu of reviewing Perplexities in any traditional sense, I would like to propose a positive account of Schwitzgebel’s negative thesis, an explanation of why consciousness “seems readily available,” at least in its details, even as it remains, in many ways, anything but available. Understanding this pseudo-availability provides a genuinely novel way of understanding the cognitive difficulties consciousness poses more generally. And once we have these difficulties in view, we can finally get down to the business of circumventing them. The fact is I actually think Schwitzgebel is telling a much larger story than he realizes, one that would likely strain even his estimable powers of incredulity.
Perplexities is anything but grandiose. The banality of the examples Schwitzgebel uses–whether we dream in colour, what we sense (aurally or visually) with our eyes closed, how we intuit flatness, whether we generally feel our feet in our shoes–belies, I think, the care he invested in selecting them. These are all questions that most lay readers would think easy to answer, perhaps eminently so. This presumption of ‘ready availability’ has the rhetorical effect of dramatically accentuating his conclusions. You would think we would know whether we dream in colour, immediately and effortlessly.
It turns out we only think we know.
The problem is anything but a new one. Schwitzgebel spends quite some time discussing attempts by various 19th Century introspective psychologists to train their subjects, particularly those of Edward B. Titchener, who wrote a 1600 page laboratory manual on introspective experimentation. Perhaps inner experience does require trained observers to become scientifically tractable–perhaps its truth needs a trained eye to be discerned. Or perhaps, as seems far more likely, psychologists like Titchener, faced with a fundamentally recalcitrant set of phenomena, required consistency for the sake of institutional credibility.
Coming out of the Continental philosophical tradition and its general insistence on the priority of lived experience, I quite literally saw philosophy in small in this narrative. I have suffered, or enjoyed, a number of profound conversions over the course of my philosophical life– from Dennett to Heidegger to Derrida to Wittgenstein–and in each case I have been mightily impressed by how well each of these outlooks ‘captured’ this or that manifold of experience. In fact, it was the degree to which I had identified with each of these perspectives, the fact that I could be so convinced at each and every turn, that led me to my present skeptical naturalism. In each case I was being trained, not simply to think in a certain way, but to perceive. Heidegger, in particular, revolutionized the way I ‘lived life.’ For a span of years, I was a hard-drinking, head-banging Dasein, prone to get all ontological with the ladies.
In a very real sense, Schwitzgebel’s historical account of early introspective psychology offers a kind of microcosm of philosophical speculation on the soul, mind–or whatever term we happen to find fashionable. Short of some kind of training or indoctrination, everyone seems to see something different. Our ‘observations’ are not simply ‘theory-laden,’ in many cases they seem to be out-and-out theory driven–and the question of how to sort the introspection from the conceptualization seems all but impossible to answer. I’ll return to this point later. For the moment I simply want to offer it as more evidence of the problem that Schwitzgebel notes time and again:
Problem One (P1): Conscious experience seems to display a comparatively high degree of ‘observational plasticity.’
As the question of dreaming in colour dramatically illustrates, conscious experience, in some respects at least, has a tendency to ‘meet us halfway,’ to reliably fit our idiosyncratic preconceptions. Now you might object that this is simply the cost of doing theoretical business more generally, that even in the sciences theorization involves the gaming of ambiguities this way or that. Consider cosmology. Theories are foisted on existing data, and then sorted according to their adequacy to the new data that trickles in. The problem with theories of consciousness, however, is that so little–if anything at all–ever seems to get sorted.
What distinguishes science from philosophy is the way it first isolates, then integrates the information required to winnow down the number of available theories. Like any other scientific enterprise, this is precisely what early introspective psychology attempted to do: isolate the requisite information. Titchener’s training manual, you could say, is simply an attempt to retrieve pertinent experimental information from the noise that seemed to plague his results otherwise. And yet, here we are, more than a century afterward, stymied by the very questions he and others raised so long ago. Despite its 1600 pages, his manual simply did not work.
As Kriegel notes in his review of Perplexities (linked above), it could be the case that psychology simply gave up too soon. Maybe training and patience are required. Perhaps introspection, though far more informatically impoverished than vision, is more akin to olfaction, a low resolution modality demanding much, much more time to accumulate the information needed for reliable cognition. Perhaps introspective psychology needed to keep sniffing. Either way it serves to illustrate a second problem that regularly surfaces throughout Perplexities:
Problem Two (P2): Conscious experience seems to exhibit a comparatively high degree of ‘informatic closure.’
Introspection, you could say, confuses what is actually an ‘inner nose’ with an ‘inner eye,’ which is to say, an impoverished sensory modality with a rich one. ‘Intro-olfaction,’ as it should be called, does access information, only in a way that requires much more training and patience to see results. So even if conscious experience isn’t informatically closed in the long term, it remains so in the short term, particularly when it comes to the information required to successfully arbitrate incompatible claims.
Given these two problems, the dilemma becomes quite clear. A high degree of observational plasticity means a large number of ‘theories,’ naive or philosophical. If you have a theory of consciousness to sell (like I do), you quickly realize that the greatest obstacle you face is the fact that everybody and her uncle also has a theory to sell. A high degree of informatic closure, on the other hand, means that the information required to decisively arbitrate between these countless theories will be hard to come by.
You could say conscious experience is a kind of perspectival trap, one where our cognitive guesses become ‘perceptual realities’ that we quite simply cannot sniff our way around. This characterization has the effect of placing a premium on any information we can get our hands on. And this is precisely what Perplexities of Consciousness does: provide the reader with new historical and empirical facts regarding conscious experience. Though he adheres to the traditional semantic register, Schwitzgebel is furnishing information regarding the availability of information to conscious cognition. In fact, he probes the question of this availability from both sides, showing us how, as in the case of ‘human echolocation,’ we seem to possess more information than we think we do, and how, as in the case of recollecting dreams, we seem to have far less.
And this is what makes the book invaluable. Something smells fishy about our theoretical approaches to consciousness, and I think the primary virtue of Perplexities is the way it points our noses in the right direction: the question of what might be called introspective anosognosia. This, certainly, has to be the cornerstone of all the perplexities that Schwitzgebel considers: not the fact that our introspective reports are so woefully unreliable, but that we so reliably think otherwise. As he writes:
“Why, then, do people tend to be so confident in their introspective judgments, especially when queried in a casual and trusting way? Here is my guess: Because no one ever scolds us for getting it wrong about our experience and we never see decisive evidence of our error, we become cavalier. This lack of corrective feedback encourages a hypertrophy of confidence.” (130)
I don’t so much disagree with this diagnosis as I think it incomplete. One might ask, for instance, why we should require ‘social scolding’ to ‘see decisive evidence of our error.’ Why can’t we just see it on our own? The easy answer is that, short of different perspectives, the requisite information is simply not available to us. The answer, in other words, is that we have only a single perspective on our conscious experience.
The Invisibility of Ignorance–the cognitive phenomenon Daniel Kahneman (rather cumbersomely) calls What-You-See-Is-All-There-Is, or WYSIATI–is something I’ve spilled many pixels about over many years now. The idea, quite simply, is that because you don’t know what you don’t know, you tend to think you know all that you need to know:
“An essential design feature of the associative machine is that it represents only activated ideas. Information that is not retrieved (even unconsciously) from memory might as well not exist. [Our automatic cognitive system] excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have.” (Thinking Fast and Slow, 85)
As Kahneman shows, this leads to myriad errors in reasoning, including our peculiar tendency to be more certain about our interpretations the less information we have available. But where the instances of WYSIATI studied by Kahneman involve variable information deficits, environmental ignorances or mnemonic failures that we can address by simply seeking out more information (typically by exploring our environments), the information deficits pertaining to conscious experience, as we have seen, are more or less fixed.
Our unwarranted confidence in our introspective judgments, in other words, turns on P2, informatic closure. When it comes to environmental cognition, there is always ‘more than what meets the eye’–as the truism goes. Take a step sideways, consult others standing elsewhere, turn to instrumentation: we literally have countless ways of extracting more information from our natural and social environments. When it comes to introspective cognition, on the other hand, there is only what meets the eye, and precious little else.
This offers a straightforward way to theorize the apparently dismal phenomenological portrait that Schwitzgebel sketches: When it comes to introspective cognition, there is only what meets the eye, and it is insufficient for cognition. Not only do we lack the information required to cognize conscious experience, we lack the information required to cognize this lack, and so are readily fooled into thinking we have cognized conscious experience. We are the victims of a kind of natural introspective anosognosia.
This, for me, constitutes one of the more glaring oversights you find in contemporary philosophy of mind and consciousness research. Conscious experience, whatever it turns out to be, is the product of some subsystem of the greater brain. The question of introspective competence is the question of how effectively that subsystem, the ‘conscious brain,’ accesses and uses information gleaned from the greater brain. When it comes to reflection on conscious experience, what information does the brain make available for what cognitive systems?
What makes this question so important is what I consider the grand inferential upshot of Schwitzgebel’s Introspective Incompetence argument: the jarring but almost undeniable fact that in certain profound respects we simply do not possess the consciousness we think we do. This is another consequence of observational plasticity and informatic closure. If we assume that consciousness is a natural phenomenon that does not vary between humans, then the wild variety of interpretations of conscious experience, both local and global, means that most everyone has to be wrong about consciousness–at least in some respect.
Let’s coin a category for all these incompatible variants called ‘Error Consciousness.’ Error Consciousness, as defined here, is simply the consciousness we think we have as opposed to the consciousness we do have–and everyone, I think it’s safe to say, is in the grip of some version of it. The combination of informatic closure and observational plasticity, in fact, would seem to make it all but impossible to overcome. Our introspective inability to access the information required to distinguish what we discover from what we devise means that theorists are almost certainly trying to explain a consciousness that simply does not exist. Like blind gurus groping an elephant, we confuse the trunk for a serpent, the leg for a tree, and the tail for a foul-smelling rope. Each of us thinks their determinations are obvious, but none of us can explain them because they don’t exist.
This is just to say that Error Consciousness provides a compelling way to understand the difficulty of the so-called Hard Problem of consciousness. If we make Error Consciousness our primary explanandum, we will never find a satisfactory neuroscientific explanation, simply because there is no such thing.
And even more importantly, it allows us to ask what kinds of errors we might be prone to make.
Consider Schwitzgebel’s conclusion that “our judgments about the world tend to drive our judgments about our experience. Properly so, since the former are the more secure” (137). This certainly makes evolutionary sense. As a very recent evolutionary development, human consciousness would have inherited the brain’s existing cognitive resources, namely, its ancient and powerful environmentally oriented systems. For me, this raises a question that has the potential to transform consciousness research: What if the kinds of errors we make environmentally are, in some respects, the same errors, perceptual or cognitive, that we make introspectively?
Consider, for instance, the way we sense aggregates as individuals in the absence of information. Astronomers once thought quasars were singular objects, rather than a developmental phase of galaxies possessing supermassive black holes. The ‘heavens’ in general are a good example of how the accumulation of information led us to progressively differentiate the celestial sphere that Aristotle thought he observed. Short of information regarding distinct constituents, we have a pronounced tendency to perceive singular things, a fact that finds its barest psychophysical expression in the phenomenon of flicker fusion. For whatever reason, the perceptual and cognitive default is to clump things together for the want of distinctions.
Could something so perplexing as the ‘unity of consciousness’ simply be an introspective version of this? Could consciousness, in other words, be something like a cartoon, a low resolution artifact of constraints on interoceptive informatic availability?
A kind of flicker fusion writ large?
If so, it foregrounds what could be a pervasive and systematic fault in ongoing attempts to puzzle through the riddles of conscious experience. The orthodox approach to the question of conscious unity asks, What could unify consciousness? It conceptualizes conscious unity as a kind of accomplishment, one requiring neural devices to be explained. But if the intuition of conscious unity relies on the same cognitive systems that regularly confuse aggregates for individuals in the absence of information, and if the ‘introspective faculty’ responsible for that intuition is, as Schwitzgebel’s arguments imply, ‘low resolution,’ then why should we expect to intuit a more differentiated consciousness, let alone one approaching the boggling complexity of the brain that makes it possible? In other words, why not expect that we are simply getting consciousness wrong?
We seem to be using the wrong cognitive equipment after all.
Pressing Schwitzgebel’s findings in this direction, we can readily see the truly radical upshot of Perplexities of Consciousness: the way it systematically undermines the presumption that introspection is a form of ‘vision,’ and so the notion that consciousness is ‘something visible.’ The analogy Kriegel offers to smell in his review is quite instructive here. With olfaction, we are quite comfortable moving between the object of perception and the medium of perception. We smell odours as readily as odorous things. With vision, on the other hand, we typically see things, not the light they reflect. This is probably as much a function of resolution as anything: Since olfaction is so low resolution, we often find ourselves smelling just the smell. Analogizing introspection to olfaction allows us to see consciousness as a special kind of stink rather than a special kind of thing. The visual metaphor, you could say, delivers conscious experience to the ‘object machinery’ of our cognitive system, and has the consequence of rendering consciousness substantival, transforming it into something that we somehow see rather than something that we somehow are. The olfactory metaphor, on the other hand, allows us to sidestep this processing error, and to cognize conscious experience off the traditional inferential grid…
And so conceive consciousness in terms that make hay of the cardinal distinction between perception and cognition. We think the unity of consciousness is something to be explained because we think it is something that is achieved prior to our attentional awareness of it rather than a product of that attentional awareness. Perplexities shows that we have good reason to doubt this happy assumption: if introspection, like vision, simply reveals something independently existing, Schwitzgebel asks, then why the lack of consensus, the endemic confusion, the perpetual second-guessing? Reflection on consciousness is an attenuation of consciousness–as we might expect, given that it’s simply another moment within consciousness. Introspection is an informatic input, a way to deliver neural information to deliberative cognition. If that information is as skewed and as impoverished as Perplexities implies, then we should expect that our concepts will do the perceptual talking. And if our deliberative systems are primarily geared to environmental cognition, we should expect to make the same kinds of mistakes we make in the absence of environmental information.
The conscious unity we think we ‘perceive,’ on this account, is simply the way conscious experience ‘smells’ in attentional awareness. It is simply what happens when inadequate interoceptive neural information is channelled through cognitive systems adapted to managing environmental information. In a strange sense, it could be an illusion no more profound than thinking you see Mary, Mother of God, in a water stain. What makes it seem so profound is that you happen to be that water stain: its false unity becomes your fundamental unity. To make matters worse, you have no way of seeing it any other way–no way of accessing different interoceptive information–simply because you are, quite literally, hardwired to yourself.
Observational plasticity makes it as apparently real as could be. Informatic closure blocks the possibility of seeing around or seeing through the illusion. An aggregate becomes an individual, and you have no way of intuiting things otherwise. Enter the intuition of unity, a possible cornerstone of Error Consciousness.
Schwitzgebel would likely have many problems with the positive account I offer here (for a more complete, and far more baroque version, see here), if only because it changes the rules of engagement so drastically. Unlike me, Schwitzgebel is a careful thinker, which is one of the reasons I found Perplexities such an exciting read. It’s not often that one finds a book so meticulously dedicated to problematizing consciousness research supporting, at almost every point, your own theory of consciousness.
To reiterate the question: Why should interoceptive information privation not have cognitive consequences similar to those of environmental information privation? This question, when you ponder it, has myriad and far-reaching consequences for consciousness research–particularly in the wake of studies like Schwitzgebel’s. Why? Because once you pull the interoceptive rug out from underneath speculation on consciousness, once you understand that, as evolutionary thrift would suggest, we have no magical ‘inner faculty’ aside from our ancient environmental cognitive systems, then ‘error’ (understood in some exotic sense) has to become, to some extent at least, the very tissue of who we are.
And as bizarre as it sounds, it makes more than a little empirical sense. In natural terms, we have an information processing system–the human brain–that, after hundreds of millions of years of adapting to track the complexities of its natural and social environments, only recently began adapting to track its own complexities. Since our third-person tracking has such an enormous evolutionary pedigree, let’s take it as our cognitive baseline for what would count as ‘empirically accurate’ first-person tracking. In other words, let’s say that our first-person tracking is empirically accurate to the degree that its model is compatible with the brain revealed by third-person tracking. The whole problem, of course, is that this model seems to be thoroughly incompatible with what we know of the brain. Our first-person tracking, in other words, appears to be wildly inaccurate, at least compared to our third-person tracking.
And yet, isn’t this what we should expect? The evolutionary youth of this first-person tracking means that it will likely be an opportunistic assemblage of crude capacities–anything but refined. The sheer complexity of the brain means this first-person tracking system will be woefully overmatched, and so forced to make any number of informatic compromises. And perhaps most importantly, the identity of this first-person tracking system with the brain it tracks means it will be held captive to the information it receives, that it will, in other words, have no way of escaping the inevitable perspectival illusions it will suffer.
Given these developmental and structural constraints, the instances of Introspective Incompetence described in Perplexities are precisely the kinds of problems and peculiarities we should expect (what, in fact, I did expect before reading the book). This includes our introspective anosognosia, our tendency to think our introspective judgments are incorrigible: the insufficiency of the information tracked must itself be tracked to be addressed by our first-person tracking system. Evolution flies coach, unfortunately. Not only should we expect to suffer errors in many of our judgments regarding conscious experience, we should, I think, expect Error Consciousness, the systematic misapprehension of what we are.
Of course, one of the things that makes the notion of Error Consciousness so ‘crazy,’ as Schwitzgebel would literally call it, is the difficulty of making sense of what it means to be an illusion. But this particular berry belongs to a different goose.
EAMD.
?
Ever are men deceived.
“Of course, one of the things that makes the notion of Error Consciousness so ‘crazy,’ as Schwitzgebel would literally call it, is the difficulty of making sense of what it means to be an illusion.”
Can you actually be an illusion? You can believe you are something you are not but you can’t actually be something you are not. This raises two important questions:
Q1. How is it that you believe you are what you are not?
Q2. If you are not what you believe yourself to be, what are you really?
The answer to Q1 would appear to depend on correctly answering Q2. But there is an assumption in Q2. It assumes that you are an independent entity that has a belief about its own nature. But, what if this is the illusion? What if there is no independent entity to hold this belief? If this were the case, neither Q1 nor Q2 makes any sense. So if there is no you to hold the belief that you are what you are not (an independent self), how can this belief even exist? This raises the crucial issue of the very nature of belief. We have always assumed that beliefs are ideas held by selves. Obviously if there are, in fact, no independent self entities then beliefs must be something other than ideas held by selves. At the same time there is no denying that beliefs exist. So what are they?
Here’s my guess: Beliefs are patterns (webs of connections/associations) learned by and stored in a brain. The belief in a self exists because the brain has perceived a pattern having the attributes of an independent entity. It has perceived a pattern that does not actually exist. It has done this because of the way it perceives stimuli and attempts to make sense of this input. There is a name for this practice of seeing patterns where none exist. It is called apophenia, and the human brain would appear to be predisposed to engage in this kind of misguided pattern recognition.
Part two actually takes this problem as its focus. But like you I see this as a Mary-in-the-water-stain issue, with our cognitive systems knitting whole intentional sweaters out of informatic scraps.
It has done this because of the way it perceives stimuli and attempts to make sense of this input.
Probably easier to collapse ‘perceives’, ‘attempts’ and ‘make sense’ into simply ‘those that did not, died’.
My guess is that beliefs are provisional structures produced to integrate experience. We sense things, but this sensory data is useless unless related to others. Therefore we create beliefs – provisional relations which allow us to use this data if we need, to create the structures necessary for more complex thought.
This is however where my view of belief departs from that of conventional philosophy. I think that we unconsciously admit to ourselves the provisional and uncertain nature of this belief guestimation, which explains why we are so conservative in acting on beliefs. Most people believe in some form of good and bad. But most people seem quite willing to ignore the bad, and the good, unless they see some significant potential reward in it, because they accept that these beliefs are merely guesses and that there may be significant personal risk in acting on them. Which also explains the resilience of fallacious beliefs, which survive in spite of their spuriousness because we never really test them.
One interesting point is belief and language. Language is one sphere of reality where I’ve found the contrary to be true. People are more than willing to speak of belief as fact rather than as mere guesses. I think that this has to do with the nature of language – we use language as a safe ‘experimental laboratory’ for belief, testing our beliefs against the experiences of others, so that it is in our own benefit to be confident, even obnoxious about our own beliefs when talking about them, however cautious we may actually be in acting on those beliefs. It means that human beings are inherently obnoxious assholes, but it is justified in the name of self-interest.
A couple things that struck me while reading.
As Kriegel notes in his review of Perplexities (linked above), it could be the case that psychology simply gave up too soon. Maybe training and patience are required. Perhaps introspection, though far more informatically impoverished than vision, is more akin to olfaction, a low resolution modality demanding much, much more time to accumulate the information needed for reliable cognition. Perhaps introspective psychology needed to keep sniffing … he probes the question of this availability from both sides, showing us how, as in the case of ‘human echolocation,’ we seem to possess more information than we think we do.
I think that this is the case. The BBH cannot be biologically addressed until we introspectively hit the actual recursive limits of the BB’s growth, whatever they may be in actuality. Though most of us aren’t even BB but the Blind Blind Brain – countless individuals living their whole lives unaware that they are a head inside a head, to use the common metaphor around TPB.
the question of what might be called introspective anosognosia.
I’m not aware of how to post links as has been done in the post but the conception of umwelt comes to mind: http://en.wikipedia.org/wiki/Umwelt
A kind of flicker fusion writ large? … The conscious unity we think we ‘perceive,’ on this account, is simply the way conscious experience ‘smells’ in attentional awareness.
I think this notion is actually accepted academically – psychology, not philosophy – though your metaphor thankfully tapers the pride usually attached. For instance, there are all sorts of subjectively documented perceptual thresholds that provide a sense of the common BB. With the advent of brain imaging, it gives us a loose metric for defining the bracket of awareness between what we have, BB, and what is biologically perceived, GB. The question for me became what kind of practices and exercises could I adapt to bridge the gap. I’m not a big fan of nootropics and invasive procedures, though I think readers would be surprised to know how many academics currently take “stacks” of synthesized chemicals to augment their cognition – pioneering Neils, no doubt.
Of course, recursions abound.
“Short of information regarding distinct constituents, we have a pronounced tendency to perceive singular things, a fact that finds its barest psychophysical expression in the phenomena of flicker fusion. For whatever reason, the perceptual and cognitive default is to clump things together for the want of distinctions. Could something so perplexing as the ‘unity of consciousness’ simply be an introspective version of this?”
Yes.
I mean, the “division of consciousness” that occurs when the corpus callosum is severed is good experimental evidence of this. The weird part is that no matter how hard you ask your brain to recognize its constituent parts as individuals, it doesn’t let you. I mean, sometimes when I talk to myself I can almost believe I’m having a genuine DI-alogue, but that makes me uncomfortable because it is so patently crazy.
The weird part is that no matter how hard you ask your brain to recognize its constituent parts as individuals
Maybe it just doesn’t understand the scale of the request? Where does that end…individual synapses?
Regarding severing the corpus callosum, I just saw a repeat of an episode of the Sci channel show Dark Matters: Twisted But True that had a segment on Alien Hand Syndrome. One portion of the segment talked about Roger Sperry and his work with patients who had their corpus callosum surgically severed to control seizures, which resulted in some of them experiencing Alien Hand Syndrome.
This book sounds great. I’m definitely putting it on my (woefully long) reading list.
On face value it reminds me of another book I came across many years ago, one similarly humble at first appearance but vastly profound in its implications. It is Deborah Tannen’s Talking Voices, a book which sets out on the humble enterprise of investigating some of the foundational assumptions of theories of language and consciousness, and ends up producing results which deeply unsettle these basic assumptions and, by implication (though these are never pursued by the author), the entire architecture of our scientific understanding of language and consciousness. It is very easy reading, based not on complex theories and experiments but on basic observations about language and the thought processes behind it which any reader can verify from personal experience. Moreover, it is replete with other curious and fascinating observations that make it a worthwhile read even if you discredit her conclusions.
Immediate thoughts: At first glance I never would have given this book a second look, simply because there are too many scientific mountebanks out there who exploit the problem-of-consciousness argument merely to promote their own patently ridiculous special-exception clause. Indeed, this sadly seems to constitute the majority of what we call science, and it is one of the main reasons I read so little scientific literature any more.
This again brings me back to a question I’ve been asking myself for what seems like most of my life: why does humanity still insist on the infallibility model of consciousness, when mountains of evidence indicate such a project is impossible? Human beings in most instances seem quite adaptable, but this is one instance where most seem utterly unable to adapt. In its quest for infallible consciousness, humanity is an ape which would rather starve than leave the peanut in the jar. Why is humanity so utterly unable to embrace its natural state of ignorance?
Some people I have posed this question to respond that we do so out of necessity. Certainly there are many motives, but I fail to find any ‘necessity’ which can hold up against the standard of authenticity.
Not to be off-topic but what is your opinion of Buddhism? Also, I love the way your mind works and am a big fan of The Prince of Nothing books and Neuropath. Light, Time and Gravity was brilliant for the niche that will appreciate it.
What if I give you a ‘best that can be done’ story?
A little off-topic, but some moderately bright news from general biology:
http://phys.org/news/2012-01-role-quantum-effects-photosynthesis.html
The interesting thing about it is this: photosynthesis is a very old process. If something like that was already used by organisms in the Archean era, one can only wonder how similar effects could have been utilized in other, more recent biological systems – especially in information-managing systems and intelligence-demanding applications, since quantum effects have such great potential there.
It’s also remarkable how such a thing can work at all in a highly decoherent environment. It was thought to be impossible at ambient temperatures, without some shielding mechanism at least.
The fact that we thought we knew essentially everything about photosynthesis (it’s just plain grass, after all!) for many years teaches humility too.
I once had a dream which came in just two colours, white and pink. Pixels of pink – and it creeped me out (there was also something else there, half sound, half feeling, something entirely organic against the pixel landscape … and predatory, but I digress). It was one of the few dreams where I realised to some degree that it was a dream and, creeped out, got out of there.
But it was white and pink – so I judge it in contrasting difference to other dreams, which have a range of colours unlike that. In terms of evaluation via contrast, that’s what I have.
One of my pet inquiries is: how loud do you think? Say you read this now and hear the words – at what volume do you think those words? Can you crank your inner thoughts to ear-splitting volume? I find I cannot. The words I ‘hear’ when I read – I think they are concepts of words, not anything like actually hearing words. When I try to crank my words to ear-splitting volume, I get only the impression of ear-splitting volume. This further makes me think I’m engaging a concept or ideal, rather than hearing what I’m reading. Indeed it’s probably a process of idealisation – the words I ‘hear’ when reading are closer to the/my ideal of the word. Hmmm, just thinking on it, it’s probably part of why books can have such impact: instead of hearing the words, the reader, particularly if it appears to be a ‘voice from nowhere’, starts to trade in their own ideal of each word. Thus a more intimate version of the word than if the very same word were spoken aloud.
Mighty Bakker, I apologize for not replying on the topic of this post, but a man wonders – do you still intend to provide a sample chapter of TUC to the new forum?
Sorry Jurb – this comment almost fell between the cracks. The first chapter is the only one I think would work from a spoiler standpoint, and I have to actually wait until I’ve completed the final chapter before I can finalize the details. Rest assured, I haven’t forgotten!
“I have to actually wait until I’ve completed the final chapter before I can finalize the details”
And here I thought what comes before determines what comes after! It seems the White-Luck was right.
It is the great irony of literature, Jurble, of all fiction: in fiction, what comes after must always determine what comes before. Tolkien knew that Gollum would fall into the fires of Mount Doom long before he ever wrote a word that led to that outcome; the after determined the before. It is only in our pitiful real world that the before determines the after. So we should ask ourselves: what is the meaning of the Dunyain’s delusions? Because their beliefs are not delusions in our world, but in their world they are assuredly delusions…
WHAT DO YOU SEE?
I don’t understand…
I MUST KNOW WHAT YOU SEE.
Death. Wretched death!
TELL ME
Even you cannot hide from what you don’t know! Even you!
WHAT AM I?
“Doomed”
Perhaps dreams are just streams of lower-order consciousness without that pesky higher-order consciousness disrupting the stream with its false unity…
Probably the gut brain swallowing the upper brain – that’s why you’re always so absorbed in the scenario of the dream and rarely realise it’s a dream.
I was actually giving this more thought last night … so dreams may be what is left of consciousness without the illusion of agency filling in the gaps and giving us a coherent narrative. There is some interesting research on schizophrenics which treats thought as motor action, and looks at how they believe thoughts and voices and hallucinations come from outside them (i.e. outside their agency) because the part of the brain that ‘prepares’ for the thought (in the same way motor and sensory regions prepare for those actions) does not do so.
I think the same holds in the dream state particularly, and perhaps in other states of consciousness, such that belief in the agency of our thoughts is turned off and so we are no longer ‘conscious’. It functions at a higher level than simply switching off sensory perception or motor action.
so dreams may be what is left of consciousness without the illusion of agency filling in the gaps and giving us a coherent narrative.
Alternatively it’s the other way around – the filling-in of the gaps is set on overdrive! Certainly at times when I’ve drifted off to sleep, I’ve been thinking about something and … it becomes somewhat hallucinogenic and starts to have additions and extensions added to it. Further, on schizophrenics: I think I saw a doco on sleep deprivation showing that someone massively sleep-deprived starts to suffer the same effect. Schizophrenia could be explained by a dialing up of narrative gap-filling, thus making it ‘all make sense’ in terms of the voices and urges to act.
That’s a good point. I’m reading Edelman’s stuff on consciousness, and he discusses how it might work within the massive network of associations in the brain; so in dreams, because the sensory input (or perceptual input, more correctly) is always unique and unfamiliar, there are fewer remembered or recognised sets of associations within which to create the narrative, so more work needs to be done to make something coherent.
I’ve always had problems with the lower-order consciousness/unconscious mind theories of dreaming. This is largely because it contradicts my own experience with lucid dreaming. Personal experience of dreaming more closely resembles reflection/memory recall or imagination/story-telling. Many of my dreams closely resemble the experience of reading fiction, except that I am the story teller.
Something I’ve also often wondered about is the dissociation I experience in dreams. I’m almost never me in my dreams. Nor is it a kind of schizophrenic other me. It is usually some other fully formed personality (at least to appearances). Has anyone else experienced the same?
No, I’m always me in my dreams, as far as I can recall. Now I feel narcissistic!
[…] The mechanics of dreams are explained in a way similar to the mechanics of consciousness: through that idea of self-sufficiency, that, by occluding the horizon and so limiting our possibility to perceive forms and evaluate them, makes the perception of an “elsewhere” impossible, and so traps you there in an undivided space. That’s why in lucid dreams you DO question the logic and reality of your perception: because you have a link back to another world, and so perceive the boundaries of the bubble you’re trapped in. “introspection is nothing but a keyhole glimpse that only seems as wide as the sky because it … […]