Lamps Instead of Ladies: The Hard Problem Explained
by rsbakker
This is another repost, this one from 2012/07/04. I think I like it because of the way it makes the informatic stakes of the hard problem so vivid. I do have some new posts in the works, but Golgotterath has been gobbling up more and more of my creative energy of late. For those of you sending off-topic comments asking about a publication date for The Unholy Consult, all I can do is repeat what I’ve been saying for quite some time now: You’ll know when I know! The book is pretty much writing itself through me at this point, and from the standpoint of making good on the promise of this series, I think this is far and away the best way to proceed. It will be done when it tells me it’s done. I would rather frustrate you all with an extended wait than betray the series. If you want me to write faster, cut me cheques, shame illegal downloaders, or simply thump the tub as loud as you can online and in print. So long as The Second Apocalypse remains a cult enterprise, I simply have to continue working on completing my PhD.
The so-called “hard problem” is generally understood as the problem consciousness researchers face closing Joseph Levine’s “explanatory gap,” the question of how mere physical systems can generate conscious experience. The problem is that, as Descartes noted centuries ago, consciousness is so damned peculiar when compared to the natural world that it reveals. On the one hand you have qualia, or the raw feel, the ‘what-it-is-like’ of conscious experiences. How could meat generate such bizarre things? On the other hand you have intentionality, the aboutness of consciousness, as well as the related structural staples of the mental, the normative and the purposive.
In one sense, my position is a mainstream one: consciousness is another natural phenomenon that will be explained naturalistically. But it is not just another natural phenomenon: it is the natural phenomenon that is attempting to explain itself naturalistically. And this is where the problem becomes an epistemological nightmare – or very, very hard.
This is why I espouse what might be called a “Dual Explanation Account of Consciousness.” Any one of the myriad theories of consciousness out there could be entirely correct, but we will never know this because we disagree about just what must be explained for an explanation of consciousness to count as ‘adequate.’ The Blind Brain Theory explains the hardness of the hard problem in terms of the information we should expect the conscious systems of the brain to lack. The consciousness we think we cognize, I want to argue, is the product of a variety of ‘natural anosognosias.’ The reason everyone seems to be barking up the wrong explanatory tree is simply that we don’t have the consciousness we think we do.
Personally, I’m convinced this has to be the case to some degree. Let’s call the cognitive system involved in natural explanation the ‘NE system.’ The NE system, we might suppose, originally evolved to cognize external environments: this is what it does best. (We can think of scientific explanation as a ‘training up’ of this system, pressing it to its peak performance). At some point, the human brain found it more and more reproductively efficacious to cognize onboard information – data from itself – as well. In addition to continually sampling and updating environmental information, it began doing the same with its own neural information.
Now if this marks the genesis of human self-consciousness, the confusions we collectively call the ‘hard problem’ become the very thing we should expect. We have an NE system exquisitely adapted over hundreds of millions of years to cognize environmental information suddenly forced to cognize 1) the most complicated machinery we know of in the universe (itself); 2) from a fixed (hardwired) ‘perspective’; and 3) with nary more than a million years of evolutionary tuning.
Given this (and it seems fairly airtight to me), we should expect that the NE system would have enormous difficulty cognizing consciously available information. (1) suggests that the information gleaned will be drastically fractional. (2) suggests that the information accessed will be thoroughly parochial, but also, entirely ‘sufficient,’ given the NE’s rank inability to ‘take another perspective’ relative to the gut brain the way it can relative to its external environments. (3) suggests the information provided will be haphazard and distorted, the product of kluge-type mutations. [See “Reengineering Dennett” for a more recent consideration of this in terms of ‘dimensionality.’]
In other words, (1) implies ‘depletion,’ (2) implies ‘truncation’ (since we can’t access the causal provenance of what we access), and (3) implies a motley of distortions. Your NE is quite literally restricted to informatic scraps.
This is the point I keep hammering in my discussions with consciousness researchers: our attempts to cognize experience utilize the same machinery that we use to cognize our environments – evolution is too fond of ‘twofers’ to assume otherwise, too cheap. Given this, the “hard problem” not only begins to seem inevitable, but something that probably every other biologically conscious species in the universe suffers. The million dollar question is this: If information privation generates confusion and illusion regarding phenomena within consciousness, why should it not generate confusion and illusion when regarding consciousness itself?
Think of the myriad mistakes the brain makes: just recently, while partying with my brother-in-law on the front porch, we became convinced that my neighbour from across the street was standing at her window glaring at us – I mean, convinced. It wasn’t until I walked up to her house to ask whether we were being too noisy (or noisome!) that I realized it was her lamp glaring at us (it never liked us anyway), that it was a kooky effect of light and curtains. What I’m saying is that peering at consciousness is no different than peering at my neighbour’s window, except that we are wired to the porch, and so have no way of seeing lamps instead of ladies. Whether we are deliberating over consciousness or deliberating over neighbours, we are limited to the same cognitive systems. As such, it simply follows that the kinds of distortions information privation causes in the one also pertain to the other. It only seems otherwise with consciousness because we are hardwired to the neural porch and have no way of taking a different informatic perspective. And so, for us, it just is the neighbour lady glaring at us through the window, even though it’s not.
Before we can begin explaining consciousness, we have to understand the severity of our informatic straits. We’re stranded: both with the patchy, parochial neural information provided, and with our ancient, environmentally oriented cognitive systems. The result is what we call ‘consciousness.’
The argument in sum is pretty damn strong: Consciousness (as it is) evolved on the back of existing, environmentally oriented cognitive systems. Therefore, we should assume that the kinds of information privation effects pertaining to environmental cognition also apply to our attempts to cognize consciousness. (1), (2), and (3) give us good reason to assume that consciousness suffers radical information privation. Therefore, odds are we’re mistaking a good number of lamps for ladies – that consciousness is literally not what we think it is.
Given the breathtaking explanatory successes of the natural sciences, then, it stands to reason that our gut antipathy to naturalistic explanations of consciousness is primarily an artifact of our ‘brain blindness.’
What we are trying to explain, in effect, is information that has to be depleted, truncated, and distorted – a lady that quite literally does not exist. And so when science rattles on about ‘lamps,’ we wave our hands and cry, “No-no-no! It’s the lady I’m talking about.”
Now I think this is a pretty novel, robust, and nifty dissection of the Hard Problem. Has anyone encountered anything similar anywhere? Does anyone see any obvious assumptive or inferential flaws?
“What we are trying to explain, in effect, is information that has to be depleted, truncated, and distorted – a lady that quite literally does not exist. And so when science rattles on about ‘lamps,’ we wave our hands and cry, “No-no-no! It’s the lady I’m talking about.””
What you could conclude, Scott, is that your brain was performing the task which nature intended it to. As an agent your primary function is to detect agency. Without previous knowledge of the home’s interior furnishings (Blind Furniture Theory) you were suffering from obvious informatic neglect; and knowing that an old lady lived in the house, and that old ladies are nosey agents who look out of windows, there was only one obvious conclusion you could reach.
Also keep in mind that our houses are for the most part opaque (Blind House Theory), except for windows, which provide the house with a transparent gap and opening onto the world. Because of the complex neocortical layering we see no difference between the world being physically only inside of us and metaphysically appearing completely outside of us, so as Arnold concludes: “Consciousness is a transparent brain representation of the world from a privileged egocentric perspective.”
I just couldn’t resist.
Not strictly related, but I was reading some more of “The Wayward Mind” and thought one of the models works well to explain some of the weird scientific evidence that usually supports the opposite theories. Opposite to the “consciousness as CEO”.
There’s this example of the ocean. All the water is mental activity. There’s no real separation between conscious and unconscious mental activity. They aren’t two things happening in different places. It’s just water. But the difference we see is that “consciousness” is about the waves on the surface, while the tides below represent everything else going on unconsciously.
This model works well because it already reproduces that asymmetrical volume of activity between consciousness and unconscious. We know that consciousness is only a tiny fraction of the mental activity, and that is reflected by the water you see on the surface, compared to all the water deep in the ocean.
The other interesting aspect is about picturing this “emerging onto consciousness” as a matter of “intensity”, which seems confirmed by scientific evidence. So a tide, or a pattern, starts deep down in the ocean, and it can gain strength and volume (filtered by automated heuristics). It grows in intensity. Eventually it becomes so strong that it “can be seen”, “emerging” as a wave, so consciousness.
I was simply thinking that this could easily explain that scientific evidence about the intention of a movement occurring in the unconscious brain BEFORE it actually becomes conscious. But NOT because consciousness is irrelevant in the process.
What may happen is that before a movement can be made, the brain needs to “prepare” the action (which also requires all the work about coordinating the muscles and so on). This happens by giving intensity to the corresponding pattern. At some point this intensity could grow enough to reach the level of consciousness. And right at this point it’s possible that consciousness has the power to INHIBIT the action. It’s as if the brain continuously prepares all sort of potentials, that ultimately reach consciousness for a final, “decisive”, scrutiny.
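If it helps to make that picture concrete, here is a toy sketch in Python of the "intensity builds until it surfaces, then consciousness may inhibit" idea. Every name and number here is my own illustrative assumption, not anything from the neuroscience:

```python
# Toy model: an unconscious 'pattern' accumulates intensity; once it
# crosses a threshold it surfaces into consciousness, which then gets
# one chance to veto the prepared action. Purely illustrative.

CONSCIOUS_THRESHOLD = 1.0  # assumed cutoff for 'emerging as a wave'

def run_action(intensities, veto):
    """Accumulate intensity step by step; on crossing the threshold,
    let the conscious veto function decide whether to inhibit."""
    total = 0.0
    for step in intensities:
        total += step  # the pattern builds up unconsciously
        if total >= CONSCIOUS_THRESHOLD:
            # the pattern 'surfaces': conscious scrutiny may inhibit it
            return "inhibited" if veto(total) else "executed"
    return "never surfaced"  # stayed below the level of awareness

# Usage: the same build-up, with and without a conscious veto.
print(run_action([0.3, 0.4, 0.5], veto=lambda i: False))  # → executed
print(run_action([0.3, 0.4, 0.5], veto=lambda i: True))   # → inhibited
print(run_action([0.1, 0.2], veto=lambda i: False))       # → never surfaced
```

Note that in this sketch the unconscious preparation always comes first, yet the veto still sits "on top" – which is exactly the compatibility the comment is arguing for.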
This would mean that the intentionality certainly starts from those undercurrents, but without subverting the “consciousness as CEO” hierarchic idea (and confirmed by intensity = consciousness, giving consciousness relevancy by simply being “more intense”). Because consciousness still has the power to override the system and take control dynamically. So again this resembles a chain of command where the periphery of the organized army has a lot of autonomy. But when there’s some sort of crisis, then these patterns become rated for consciousness, requiring submission to the higher echelons.
And this could explain also why, when an action is caused by external intervention, the conscious brain tries to fantasize a motive that wasn’t there: the brain is like an adaptable muscle. When a pattern is repeated over and over, it becomes somewhat etched in the brain. It becomes more efficient. Since there’s no real separation between conscious and unconscious activity (but only intensity), it is likely that if in an experiment you artificially cause a pattern to trigger, then both the conscious and unconscious response come up in the way they are usually linked together. Which means that you have both the preparation of the action, and the rationalization that typically comes with it.
But this doesn’t mean that the experiment revealed the “truth” of the process (meaning that the rationalization happens ALWAYS later and is not relevant), it simply means that what you see is a common pattern. You simply reveal the link that has been established by that pattern through years of reinforcement. So, ideally, if you kept bombarding the brain with this process, it would eventually recalibrate to counter what is happening. It just needs time. So what the experiments demonstrated is that the brain is temporarily unable to deal with this unusual situation, answering it as it usually answers the “natural” one.
BECAUSE the brain isn’t trained to deal with an external entity that comes in and modifies what happens. It simply can’t see this happening (it lacks this kind of diagnostic), and so keeps doing what it usually does. It’s as if we think that the experiment reveals that, ALL THE TIME, consciousness is simply busy making excuses. When instead this is happening, for the first time, in this context because “making excuses” is what the brain does when a process is initiated by external intervention.
This means that the experiment reveals the exceptionality of the experiment itself, and the blindness of the brain to something external hacking in, AND NOT the revelation of how the brain works normally.
All this to say that I keep feeling skeptical about all sort of theories. Because so much remains ambivalent, and there isn’t something that makes you definitely exclude some possibilities.
The thing is, while we absolutely know that the unconscious brain has so much more control than we think it has, while it’s reasonable to say that intentions start to build up unconsciously before they have enough intensity to become conscious, all scientific evidence still doesn’t directly contradict the fact that consciousness comes up “on top”.
Meaning that the conscious model we have is still an overall accurate representation of what we are. We don’t see the “bulk” of the tides under the surface of the ocean, but we still have control over the strongest manifestations. As if consciousness, with its slowness and inefficiency, can only be bothered with what is truly important, delegating everything else to the “autopilot” (or even the opposite, as in a moment of danger/panic: you need a quick response, and can’t rely on consciousness making the calls).
It’s an interesting analogy, Abalieno! I’ll riff on it with this – imagine that consciousness is, instead of its own tide, actually the point where the unconscious tides run crossways against each other. Otherwise consciousness does not emerge (which might explain how serene a zealot can be while enacting their creed). Consciousness might be a kind of tie breaker circuit, an attempt to flip a coin when the system conflicts with itself.
And here’s a kinky extension of that idea – what if some consciousnesses in some individuals attempt to hoard problematic issues/tide clashes and keep rolling them over in their mind, in order to maintain the tie breaker situation largely into perpetuity?
A precarious position, I’d argue.
“…working on completing my PhD” – all the best, hope this is going well!
“Think of the myriad mistakes the brain makes: just recently, while partying with my brother-in-law on the front porch, we became convinced that my neighbour from across the street was standing at her window glaring at us – I mean, convinced. It wasn’t until I walked up to her house to ask whether we were being too noisy (or noisome!) that I realized it was her lamp glaring at us (it never liked us anyway), that it was a kooky effect of light and curtains. What I’m saying is that peering at consciousness is no different than peering at my neighbour’s window, except that we are wired to the porch, and so have no way of seeing lamps instead of ladies. Whether we are deliberating over consciousness or deliberating over neighbours, we are limited to the same cognitive systems.”
Try re-thinking this episode not from a perspective of what was missing, but on what was there.
Pattern recognition. You saw a pattern and assigned it motive. Not just you, actually, but your Bro-in-law confirmed it.
It is something that we have in excess, that no other animal seems to have. Music is a mathematical pattern sequence.
Logic is pattern recognition. Truth is a pattern.
Pattern recognition is old. Right back to insects. (There’s a type of bee that learns to recognize an ultra-violet web pattern of a spider that preys on it, which the spider changes regularly so the bees no longer know the current pattern.)
At some point, intelligence must have become a survival trait. The recognition of patterns beyond the simple association of signs to prey item, which every predator has achieved, became so important, that we see Jesus in toast. Patterns where none ever was.
So, here’s the big secret. Primates eat meat. All primates, not just the two or three species vegetarians tell you about. Even mountain gorillas. They don’t kill for it, but they’ll eat an animal that they see die from accident (not predator discards, which are usually infected by bacteria in the bites). We know that our distant pre-bipedal ancestor was a tree dweller, and it seems we lost our trees. On the ground, with our food source gone, we needed to find a new one fast. We must have figured out how to predate. Meat is a much higher efficiency food source, and even more efficient when cooked. This shrank our intestines vs. other primates, for instance, so we know we had time for digestive tract efficiencies to evolve since we started hunting.

The important part: we needed a new pattern recognition mechanism, because as hunters we needed to recognize prey sign, but we had not evolved for a particular prey animal like Order Carnivora. We were jumping into an existing ecosystem with its own prey and predators well established. Predators kill other predators, so we were also making ourselves enemies with teeth… something else we needed to figure out the patterns of.

That loss of ecosystem, and jumping into another, should have killed us, but unlike every other species that suffered that problem, we evolved the pattern recognition necessary to hunt, avoid predators, and survive.
So that’s the first step, and I think it did have to come first, to buy us the time for the next step. You are going to call this a reach. I’m speculating. I admit it. I’m trying to come up with “plausible”, since “definitely” will likely never be known.
Query: Does evolution always evolve only efficient or survival advantages for the individual animal?
Answer: No. Bird plumage in many males is a survival disadvantage — bright colouration, long featherings, excessive displays that reduce flight capacity. How could these evolve? The general answer is that they demonstrate superior capacity to find food. The problem I see with that, and it’s not widely recognized so call BS and you may be right, is that they evolve to the edge of being suicidal. Any bigger, and the animal can’t function. It can’t select for what they claim, because if it worked that way, the animals would die.
So why?
Because of pattern recognition in the females. Females select their mates in the avian world, so females select for beauty. And it goes back before birds, to dinosaurs. The pre-mammals were head-butters and evolved complex horns for the competition. Ceratopsids evolved horns, but they do not have the structures to head butt, or use them to defend themselves. Females may have been selecting for beauty even then.
Female mammals generally don’t. Displays are limited. A lion’s mane does not determine if he mates: it’s combat with the previous pride males. The mane only indicates to other males that he’s sexually mature. Contests of display are rare in the mammalian world. Does that indicate that some level of self-awareness in female mammals is present? I’m not sure, but the combative nature of mammalian males is the typical determinant of gene transfer, so I don’t think so. The female mammal gets less say in the matter, since the losing “pretty” male gets driven off.
But something else in humans is different from other primates. Pair bonding. Call it love or companionship, at some point we started matching up one male to one female.
Let’s posit that limited self-awareness evolved early after pattern recognition. How could it happen? With no evolved prey sign pattern recognition, we developed a generalized pattern recognition that permitted a broader prey base. We’d hunt anything we could figure out how to hunt. This developed a pattern recognition different from birds. We need to start to recognize the causal nature of prey to prey sign, and predator to hunting methods, both so we could hunt but also steal prey from predators. Cooking solves the bacterial infection issue, so predator kills are safe to eat. Once we can sort out what causes which sign and observe the world for patterns less obvious, we have become empirical creatures, instead of instinctive. And now we can recognize the ultimate causality: how do we, the individual, create pattern?
Now we have self-reference, and the beginnings of self-awareness. And where does that fit into the female? The female, now aware of her causal nature, can overcome the combative gene transfer, and choose which progeny survives, and whose child she brings to maturity to mate. Now we have intelligence selecting for intelligence, even if it is not aware it is doing so, since intelligent females that choose better pattern finders as mates create children better equipped to survive in the new ecosystem reality, but those that choose on former instincts only get random results with less likelihood of success. For the male, to ensure survival of progeny, he now must remain with the female (since she can kill his children), but everyone must sleep. Aware that she can choose a new mate, the female can eliminate an undesirable mate and select males that under solely combative circumstances would not reproduce. This begins pair bonding between a less combative male and a female aware of the superiority of pattern recognition. Poor pattern recognition creates poor offspring, so the capacity for intelligence and self-awareness becomes simply a need to select a better husband.
So, what is self-awareness? Sleep now. Enough for tonight.
Okay, so we develop pattern recognition, which develops empirical analysis of the environment, and this identifies that we are individuals that have an effect on the world. Once you understand causality, the fact that you create cause and which result in effect becomes obvious. How does this lead to self-awareness and consciousness?
Well, how do we think? That is of course the question under analysis, but what can we perceive about our thinking process with certainty? I’ve presented this game before, but I’ll repeat it. Bring into your mind a motorcycle. Got it? So, did it also bring into your mind its component parts — gears, drive belt, pistons, fork, wheels, seat, oil filter, brakes, headlight? Or just the motorcycle as a singular concept? Now, how about “house”. Did you get a specific house, that is green, gabled, two chimneys, etc.? Or a general concept of “house”? The answer is “general” and non-specific. Now, what about “home”? I’ll bet that got a specific location, but is it the same concept as your home 25 years ago? “Home” mutated at some point, didn’t it? Our definitions of concepts are dynamic.
We have a limit to our processing, and this is well established. We can only hold in short term processing memory 7 plus or minus two things at a time. It’s like the idea of a “register” in computer processing. We can load 5-9 concepts each having a maximum sized definition at a time. This means every concept that we can think about must fit into that limited definition. So how do we deal with larger concepts, like “motorcycle” that has too many parts to fit into a single slot? Our mind links concepts together. There is the motorcycle, and it links to the concept “brakes”, which is a separate concept, linked in only when we determine that brakes are important to the current thinking. It’s a waste to link in brakes when we’re trying to redesign the gas tank, so we don’t.
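To make the "register" picture concrete, here is a toy Python sketch of a small working memory with a hard slot limit, holding concepts that link out to sub-concepts loaded only when relevant. The class names, capacity check, and examples are all my own illustrative assumptions:

```python
# Toy model: working memory as a handful of 'slots', with big concepts
# like 'motorcycle' stored as links to sub-concepts that are only
# pulled in when the current task needs them. Purely illustrative.

WM_CAPACITY = 7  # "seven plus or minus two"

class Concept:
    def __init__(self, name, links=()):
        self.name = name
        self.links = list(links)  # sub-concepts linked, not contained

motorcycle = Concept("motorcycle", [Concept("brakes"), Concept("gas tank")])

def think_about(concept, relevant):
    """Load a concept into working memory, linking in only those
    sub-concepts judged relevant, within the slot limit."""
    working_memory = [concept.name]
    for sub in concept.links:
        if sub.name in relevant and len(working_memory) < WM_CAPACITY:
            working_memory.append(sub.name)  # link in only what matters now
    return working_memory

# Redesigning the gas tank: no point linking in the brakes.
print(think_about(motorcycle, relevant={"gas tank"}))
# → ['motorcycle', 'gas tank']
```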
So, our thinking system compartmentalizes concepts, and our pattern recognition links concepts together. And with causality getting involved, now we’re linking together not just the description of the animal, but how it affects our environment, and we’re beginning to detect our own effects on our environment. Here is where the breakthrough occurs. The causal nature of the self on the environment hooks into neuroplasticity.
Is neuroplasticity unique in humans? Obviously not. Animal conditioning exists, and the capacity to learn and reinforce behavior is present in birds and mammals.
http://psychclassics.yorku.ca/Breland/misbehavior.htm
But it does not overcome instinct. The pig example in this article is telling, since it’s the closest to us biologically. It begins rooting, where rooting has no reinforcement and no causal benefit. Neuroplasticity in animals cannot overcome strong instinctive urges, and the mechanism is demonstrated as broken in cases where negative reinforcement fails to stop unrewarded behavior.
Now, let’s go back to our pre-human. He falls out of the trees with pig-like instinct. He begins attaching the concept of self to causality, and now he sees himself doing the instinctive behaviors of his ancestors where it has no reward. Now we get a need for an oversight mechanism to identify when we are being, well, stupid and instinctual, which results in suicidal behavior because our instincts are not evolved to our new environment. Neuroplasticity must expand to overcome instinct, at the same time as pattern recognition is being selected by evolution. Neuroplasticity needs guidance from a mechanism that can recognize causality related to self-improvement. We have candidates we already discussed. Do any of them fit the description?
The overseer needs to recognize individual concepts, both those taken from memory and the new ones generated by the pattern recognizer. Pattern recognition, which largely begins in the sensory systems, is automatic, so when tasked by the overseer is merely a slave process. It generates product, which goes back into one of the short term memory slots for the overseer to further analyze, but pattern recognition is the foundation of causal analysis, and not capable of recognizing usefulness in the pattern. Pattern recognition promotes identification of the self, but it can’t act as overseer. So, junk that one. Causality recognition, while it leads to logic, again acts as a recommender, not as overseer. We still do dumb things despite causal analysis in hopes of a different outcome, so it’s not overseeing. And neuroplasticity is also a slave to the overseer. What are we missing? The obvious one, and you should be there long ahead of me.
Communication.
The communication/language system in animals has limited capacity to produce causal results. It is controlled largely by instinct, to find mates, alert to danger, etc. Try as we might, chimps and parrots are not able to go beyond a certain stage in their communication, and dogs not a chance. While limited pattern recognition permits self-identification in animals, they do not understand complex sentences. Language can’t attach many concepts to patterns for them, and if we notice that animals have limited capacity for pattern recognition in the first place, the fault lies in a missing thought process we have in excess.
Communication/language has everything the overseer needs. It attaches pattern to concept (a sound to concept, even if the concept is something that makes no sound), creates causal results in the understanding listener, so evolutionary advances in pattern recognition inherently improve communication by increasing the number of patterns we can detect and project. Efforts to overcome instinct also bring communication the concept of self-awareness. And the need to transfer ever more complex concepts to the other members of a tribe promotes complexity in what was previously an instinctual system. Language structure becomes thought structure, and consciousness simply spits out of structured self-awareness. The increasing complexity of the conceptual information storing system (due to pattern recognition advances) permits the creation of non-empirical concepts to describe the growing sense of self and self-importance.
And is there any evidence that language and consciousness are inextricably linked? Little, but there is one. Have you ever heard anyone say, “I was never able to speak a second language well until I started thinking in that language?” Conscious thought is in language, which points us straight at the language center of the brain as the location of consciousness. Until we think in that language, we interpret from the language of thought into the language of communication. When the two unite, the hiccups of communication cease.
And you know what’s right next door to language according to brain mapping?
The mathematics section. Logic. The causality analyzer. The very thing that I suggest promoted the start of self-awareness, and potentially the most important thinking process of all for survival, is latched directly to language, which results in the fastest possible processing speed between consciousness and the individual’s effect on the environment.
And you say you’re not a philosopher.
One. Last. Try.
Are you familiar with the inverse problem? Tell me, how is the brain supposed to solve the inverse problem of itself?
Chris, could you actually describe the argument the original post puts forth (with all its flaws included, as you see them)?
If you want to talk with others, it’d seem fair to just do a quick recap to confirm you’re both talking about the same thing.
If you don’t think you need to give a recap – well then I’d be left thinking you don’t want to talk, you just want to advertise.
How is Math supposed to solve the inverse problem of itself?
By representing itself as an equation. Gödel achieved that, and proved that “This statement is false” has a mathematical counterpart. It is nowhere near as hard as you think. You only need to conceptualize Math into a form it recognizes as valid, and that allows it to make statements about itself.
The problem with human-created self-referential systems (like math, philosophy, etc.) is that they were, initially, assumed to all result in “true” and “false”. Yet, we humans can resolve “This statement is false” and not go insane. Why? It’s a self-referential statement that enters an insolvable “true-false” loop. And yet, our mind solves it with “No solution” after two iterations. Our mind does not seek to resolve “trueness” or “falsehood” without our conscious brain requesting it do so. The statement sits, unsolved, until we try to access our thought processes and solve it. It sees the pattern (there’s that pattern recognition again) in the iterations, and pulls out of the loop.
That active effort to solve problems overriding the automatic systems that might lock up in the loop without oversight answers your question. Self-reference does not loop in the presence of oversight. We perform one iteration at a time. So, when we think about our own consciousness, the oversight prevents the potential self-referential lock-up from happening. We are not thinking about consciousness, but the concept of consciousness. We don’t think about “dirt” by putting dirt in our minds, so why would we put consciousness in our minds when thinking about it? We load a concept called consciousness, and modify that data point.
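The “one iteration at a time, then bail out once the pattern repeats” idea can be sketched as a toy evaluator. This is purely illustrative: the function names and the trick of modelling a statement as a map from a truth value to the next truth value are my own assumptions for the sketch, not anything from the comment or from cognitive science.

```python
def evaluate_self_referential(step, start=True, max_iters=10):
    """Iterate a self-referential truth function, watching for cycles.

    `step` maps the current truth value to the next one; the Liar
    sentence "This statement is false" is step = lambda v: not v.
    Returns a stable truth value if one is reached, or None ("no
    solution") once a repeating pattern is detected -- mirroring the
    idea that oversight spots the loop and bails out instead of
    iterating forever.
    """
    seen = []
    value = start
    for _ in range(max_iters):
        if seen and value == seen[-1]:
            return value            # fixed point: a stable truth value
        if value in seen:
            return None             # cycle detected: "no solution"
        seen.append(value)
        value = step(value)
    return None

# The Liar flips its own truth value forever: detected as a cycle.
print(evaluate_self_referential(lambda v: not v))  # None -> "no solution"

# A trivially self-affirming statement settles on a fixed point.
print(evaluate_self_referential(lambda v: v))      # True
```

Note the evaluator gives up after two iterations of the Liar, just as the comment describes: it is the recognition of the repeating pattern, not any resolution of the truth value, that ends the process.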
So, how do we analyze ourselves? Most of the time we don’t! We are not constantly thinking “I think therefore I am!” Most of the time, we just sit there, process what we see, and work on the product of our sensory inputs. Advantages to the “self” are just automatic systems, trained by neuroplasticity, to reproduce similar results to similar events in the past… which we call a personality. Those sensory inputs are still very much automatic, but they spit pattern recognition into short term memory, consciousness automatically spits it into the appropriate analysis process, and we get choices of action. Boring old daily stuff.
When it comes to analyzing consciousness, just like analyzing dirt, we build a concept of what consciousness means in our brain. We link in adjectives (self-referential, self-aware, thought process), and when these adjectives become part of the active analysis, it’s easier to link those concepts into short term memory to process with other potential concepts. These get jammed into pattern recognition (logic) circuits and causal circuits (to compare to observations) which in turn generate insight/inspiration on how each concept could have come about. But it’s important to recognize that thinking about the self is only thinking about a concept, just like any other. It is mutable, and as we grow and change, our view of “self” also changes. Sometimes incorrectly. A pathological liar does not necessarily have “liar” as a part of the concept of “self”.
Your problem with BBT is that you’re starting at the end point. You’re assuming that consciousness thinks about itself, and not a concept of itself. I’m trying to get across the idea of how evolution works… it’s a slow process of incremental changes. BBT, as you’ve written it, is non-evolutionary since it makes no attempt to develop how we get from animals (with their problems that we don’t have) to what we are now, even while blaming evolution for making mistakes. It does not discuss the process of consciousness developing from an animal starting point, but tries to reverse-engineer it from the end product alone. Consciousness does not need to be a “poof” and it’s there: it can build itself up with incremental solutions to evolutionary problems that simply need better brains to solve than animals have.
So, there is a consciousness: that much is obvious. It is capable of self-reference and it is self-aware, since it clearly can choose survival over death. But when we think about our consciousness, we’re loading a concept like any other, which may be more or less accurate, depending on any of dozens of factors. We think about a representation of ourselves, since we literally cannot put our own self in our own brain.
That is, after all, exactly how Gödel did it with Math. He encoded mathematical statements and proofs as numbers, which allowed a mathematical statement to refer to mathematics. Why can’t the brain do the same thing, by creating an abstraction of consciousness in order to think about it?
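The encoding trick this comment gestures at can be shown concretely: a formula becomes a single integer via prime-power exponents, so statements about numbers can double as statements about formulas. A minimal sketch, with a seven-symbol alphabet invented purely for illustration:

```python
# Toy Gödel numbering: encode a formula (a sequence of symbols) as one
# integer via prime-power encoding, then decode it back. The symbol
# table below is made up for this illustration.

SYMBOLS = ['0', 'S', '=', '+', '(', ')', 'x']

def primes(n):
    """Return the first n primes (trial division; fine for toy sizes)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(formula):
    """Encode a symbol list as the product of p_i ** (symbol_index + 1)."""
    n = 1
    for p, sym in zip(primes(len(formula)), formula):
        n *= p ** (SYMBOLS.index(sym) + 1)
    return n

def decode(n):
    """Recover the symbol list by factoring out successive primes."""
    formula = []
    candidate = 2
    while n > 1:
        if n % candidate == 0:
            exp = 0
            while n % candidate == 0:
                n //= candidate
                exp += 1
            formula.append(SYMBOLS[exp - 1])
        candidate += 1
    return formula

f = ['S', '0', '=', 'S', '0']     # "S0 = S0", i.e. 1 = 1
assert decode(godel_number(f)) == f
```

Because encoding and decoding are just arithmetic, properties of formulas (including “this formula is provable”) become properties of numbers, which is what lets a statement of arithmetic refer to statements of arithmetic.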
So many misrepresentations, misunderstandings… Sorry, Chris. I’m giving up. I hope you have better luck in some other forum.
For someone so disdainful of philosophy, you sure spout a lot of it yourself — and of a decidedly half-baked and unsophisticated kind, too!
So tell me: what makes your philosophizing so special, especially given how uninformed you are in the field? You seem to think you have four pounds of brain. You do not.
The rule of thumb for moderating is that as soon as commentators begin dwarfing the word count of the posts, you have a problem, and you ask them to relent. If they persist, you block the rants, as I did with a number, then allow them back on, asking them to once again relent. If they continue, then it’s pretty clear you have some kind of problem, either because of interpersonal animus or because the individual is just one of those guys. So no more island. He’s not a bad or stupid guy – he just has some kind of turbo-charged confirmation thing happening, I think.
…some kind of turbo-charged confirmation thing…
I’ve taken to referring to a ‘metadoxastic affirmation reflex,’ the well-nigh automatic tendency people have to affirm as true everything they believe.
That is so true
Oh…wait…
I believe I was once asked, with a one liner, to explain something that’d certainly take more than one line to explain.
So what if the notion easily shifts into just a cheap way of pigeonholing people? A one-sentence foot stuck out, the multi-paragraph guy tripping flat on his face? Do I take it that actually in the past I was being baited into being dismissable?
Perhaps a sub-rule of thumb: if the word count needed to answer is far longer than the sentence being answered, maybe that sentence is just baiting?
“Does anyone see any obvious assumptive or inferential flaws?”
i see several, actually.
but to paraphrase fantasy author r. scott bakker, “so many misinterpretations, misunderstandings. sorry, scott. i’m giving up. i hope you have better luck in some other forum.”
so…does that help?
Write a guest post, och! 🙂
Owich. I feel bad, but there was definitely some kind of dysrationalia going on. Did you follow the prior back and forth I had with him or Roger’s Great Ordeal, ochlo?
Who pissed in your cornflakes? If you agree with Chris’s critique (of what I still have no idea) then tally-ho ochlo, lemme know!
So… I’d like to pop back to the question of value for a moment? I hope it’s not obnoxiously off-topic since it’s all basically related. It’s a conjecture for Scott or anyone who thinks it’s interesting. As I read it, the standard evo-psych case is that morality is a screwy byproduct of natural selection and there’s nothing else to it. But this isn’t Scott’s case. Scott’s case is that we probably can’t accurately perceive the world or ourselves. And the last post talked about verities being “confused and fractional”. This doesn’t necessarily mean there is no morality. It may mean that most people have moral views in the same way that cats have whiskers, that is to say, it may be biologically impossible to believe, with a normal brain, that there is no right or wrong. Right or wrong are biologically implanted impressions that we don’t generally ignore without an effort or manipulating factor. However, according to BBT, there is no “correct” morality that our species can comprehend. The reason we don’t agree on our impressions of what is right or wrong is because, if there are definitive answers, they’re not available to us at our current level of evolution. So I was thinking we could easily go on to say that the common sense of “moral wrongness” is a dim awareness of a problem that has arisen in relations between members of a species, but our brains have not evolved to a level of enough sophistication that would allow us to grasp the exact specifics of the problem with the clarity needed to solve it. So we make groping gestures in the direction of solutions – ethical arguments – which are actually out of our reach to accomplish, since it’s beyond the neurological capacity of our species to really understand them. So far, I think I’m sticking quite closely to BBT.
Now. Where does that leave us, in terms of meta-ethics? And I was wondering, perhaps it takes us back to intuitionism. That sounds perverse but doesn’t it fit quite interestingly? That is, there may be ethical facts, we just don’t have the ability to understand why. So the fact that the majority of people think of certain things as evil, eg Herod ordering the slaughter of the innocents, is neither coincidence nor custom, but an incomplete recognition of something that is indeed “bad” for the human eco-system, which we can never pin down without vagueness but is nonetheless a guide to avoiding collectively self-destructive behavior. In other words, evil is a slipped cog. Cats still get stuck in small spaces because they got over-excited in chasing a mouse and ignored their whiskers. Herod’s soldiers kill babies because they ignore their intuitions. But the cat’s whiskers were right. The cat should have paid attention to them.
How about that? Genius, no?
Well, mechanistically speaking, we’re all bumping along together, generating behaviours that trigger behaviours in others that light up various seek and avoid systems – which is to say, modify our behaviour. As far as I can tell, this is simply a third-person fact, be it cats or humans. Given the astronomical complexities involved, the brain requires a set of robust (crude but broadly applicable) heuristics that maximize the lighting up of seek systems and minimize the lighting up of avoid systems. This is what makes drugs the heuristic short circuit they are, why, for instance, you find these crazy statistical correlations between the purity of meth in a region and the addiction rate. Lighting up these seek systems is pretty much all that what we call ‘value’ consists in. Once we gain the ability to fundamentally rewire these systems, where are we left? If this is all value consists in, the lighting up of certain systems in certain brains, then we find ourselves in a profound quandary, one where we are reengineering systems to maximize or minimize the lighting up of those systems. What lends ‘value’ the patina of ‘objectivity’ now is simply the fact that we all happen to be similarly wired. The transhumanists go on and on about reengineering the avoid systems, while ignoring the fact that this is part of a process that has essentially rendered ‘value’ as intuitively and traditionally understood meaningless. What could ‘rational’ possibly mean in such a situation?
“…but our brains have not evolved to a level of enough sophistication that would allow us to grasp the exact specifics of the problem with the clarity needed to solve it.”
What you are proposing here is that there is a correct answer to the problem, but that we just are not in the position to see it. That is a metaphysical question that we may never answer. So ‘Is there a correct answer to moral problems?’ and ‘Will we ever be able to solve the moral problems?’ are questions that we can never answer from our point of view.
“So the fact that the majority of people think of certain things as evil[…], [is] but an incomplete recognition of something that is indeed “bad” for the human eco-system,…”
Of course, this is a legitimate moral view, but I think history has “proven” that what “the majority of people think” may change in such a drastic way as to be contradictory from time to time (and from culture to culture, for that matter). So, which majority is it that we need to listen to? The majority of Earth today? A hundred years from now? Maybe there is a planet near Alpha Centauri that has beings of which the majority is always right in moral questions.
Another thing is that our intuition evolved out of an evolutionary process, where survival was the selecting factor. I don’t think this counts as an absolute moral compass; it should at least be questioned. But after all I’m not quite against this position, because what else are we left with, apart from “reason”, but our intuition?
Well, true. But what I think is interesting, from this blog’s point of view, is the possible compatibility of a traditional ethical idea with BBT, which on the whole, I think, is thought of as a nihilistic destroyer of all value. So my argument, such as it is, is specifically geared to finding room for moral value within Scott’s framework.
It seems to me that BBT only shows that we can never fully understand our intuition that Herod is evil. We don’t know why we think that, and we never will. But BBT does not show that those intuitions should be ignored. We don’t accurately perceive the world around us, but BBT doesn’t argue that the world doesn’t exist (we can still bump into things) or that we can’t get it right (we don’t bump into things constantly, and that isn’t just luck). So, intuitionism could be right – what BBT would show is that we might be, at times, correctly picking up on moral truths, in a way which mostly bypasses our conscious comprehension due to the neurological bottleneck, and there is no way (short of transhumanism) to ever reach a clearer apprehension of them, but it is not a hallucination. As you say, this is highly metaphysical but it’s relevant to the question of value and neuroscience: morality, in other words, really could be a closer correspondence to reality – within BBT – whereas evo-psych generally implies, with a sad shrug at most, that evil is a more natural state of affairs. This is meta-ethical, not a proposal that necessarily leads to how we should act, but about the status of ethics.
In short, there’s something slightly ironic about using BBT to say you should listen to your conscience, but is it ruled out?
I think we pretty much agree on everything but small details. I don’t believe that moral truths exist. ‘X is bad’ is a sentence that, in my view, can’t be wrong or right.
But I do think that BBT is a “nihilistic destroyer of all value”; the only problem we have with this realisation is that we still don’t see a way out. Nobody is offering us a blue/red (which was it in the movie?) pill. So all that’s left for us to do is go on living, choosing the moral system we like best for aesthetic reasons, or sticking with the ones we grew up with.
I rather thought I had offered a way out… I mean, that’s exactly what I’m suggesting. We could at least get to a meta-ethical position, not just nihilism. Possibly not a small detail. To clarify, “a way out” doesn’t mean “an answer to all ethical conundrums”, but a way that BBT and value can co-exist. Even if it means an almost Platonic secular mysticism.
Okay, so this is the way I see it and I think Scott might agree with me: (1) Values “exist” only in dependence on people (you, me, Scott…etc.…maybe only me ;-)). (2) BBT says that people don’t “exist”. So, BBT and value can’t co-exist. The only way this can be possible is if you define two different forms of existence, but the ontological status of them is quite dubious. I have never been into this Platonic stuff 😉
Yes… that is sticky. And yet (he says, brushing past it) BBT does not entail denying the existence of society or experience as such, as I understand it… only that we may misunderstand our experience of our experience, because we’re not capable of understanding. And I’m offering a slightly different interpretation of the consequences of that – that we may be in touch with moral truths without having any way of comprehending where they came from. If this doesn’t contradict BBT (or is even supported by it), then we’re not stuck with only one nihilistic interpretation of Scott’s theory. The nihilism bothers me. If nothing else (and I would think that we could hope for more than this), the intuitionist version might have the disadvantage of making the argument marginally less alarming and so it would lose some of the compelling urgency it now has, but the overall gain would seem to be pragmatically worth it in my view – what I mean is, a more agreeable interpretation might allow a wider acceptance of it, and a lot more of the salient points about consciousness in general could reach audiences with less resistance about nihilism, anti-humanism, and so on. (In my opinion, it already has a wistful fondness for humanism, but you know the kind of objections we’ve seen so far.) So I’m tentatively offering a spoonful of sugar. Most people have so far taken the view that it’s impossible for value and BBT to co-exist at all (something they often seem amazingly at ease with, apart from Scott and people who aggressively disagree with him). I thought I’d propose a way it might work. And this would mean that 1), in your syllogism above, would have to be false.
“…BBT does not entail denying the existence of society or experience as such,…”
Well, the way I understand it, it does deny the existence of society, because society is only a construct of some brains. As for experiences as such, I don’t see a way to deny that. You might think that your experiences are illusory, but outright denying their existence makes no sense to me and isn’t implied by BBT in my interpretation of it.
If I understand you correctly you are proposing certain entities called “moral truths” and the reasons for doing so are twofold: (1) you don’t want to believe that morality is baseless and (2) you think that by avoiding moral nihilism you make BBT more accessible to other people.
Both are fair points and even if I don’t agree with them I believe that there are many prejudices against nihilism. So it might be a good idea to present BBT in a way that avoids its connection with nihilism. The big problem is that nihilism is very loosely defined and often gets interpreted in a way that is…stupid, so every time someone asks “But doesn’t X imply nihilism?” people get scared away by X. Nihilism is kind of the philosophical boogeyman… The thing is that I don’t think that nihilism has any moral implications. There being no rules doesn’t imply that there are no consequences.
That’s the reason I’m so “amazingly at ease with” this. So, I don’t think that “I” exists. Does that mean that the thing that has experiences and calls itself “me” doesn’t love its parents, or that it doesn’t find cruelties horrible? No, but the meanings of concepts like “love” and “cruelty” are recognised as being relative to the observer.
I have to wonder why nihilism means anything in these conversations? It seems almost like people take it to be a doctrine on how to act? So if nihilism is the case, then they must then act in so and so way? I’d suspect as a species (that’s been on the edge of extinction) it’s an urge to try and conform to the new environment, no matter how desperate that environment.
I’ll also say if you’re privy to how the universe began (and even how the thing that began it began, or how the thing that began that began and so on) please spill the beans! Nihilism certainly hasn’t nailed that as yet. I’m not sure that grasping nihilism is to grasp everything.
Now romancing the nihilistic knife, I get – people project boundaries and limits where there are none, and for the sake of having boundaries, one needs to carve away the idea that they already exist. A micro example is in roleplay, where there are no rules present that stop a player character targeting another player character – then people go all sad face and get bent out of shape when it happens, as if it ‘just shouldn’t’. And then at the macro scale we get the same thing with corporate CEOs, with folk (even folk in government) projecting imaginary behavioural boundaries on the CEO – then going all sad face and getting bent out of shape when the CEO axes a bunch of jobs they liked to imagine were protected by their special magical boundaries.
But as I said before, there seems to be this ‘moth to flame’ thing about nihilism – do people avoid it mostly because otherwise they’d dive utterly into its burning heart?
Dietl,
We agree BBT isn’t saying “your life’s not really happening”. That’s the old brain in a vat idea, that’s not BBT. But surely nihilism has moral consequences. It means that nothing is right or wrong and nothing has value, so everything is inconsequential. It means there is no basis for objecting to cruelty, other than, as you say, perhaps “taste”. So we would end up saying, somehow, “The torture of that family isn’t to my taste at all,” and the next person says “So? Each to their own” and you say, “Yeah, whatever, I guess.” Isn’t that nihilism? What else could it be? That is, if nihilism is the position that morality is merely relative, rather than completely meaningless, then what is relativism? It seems to me that nihilism is in no way a triviality that doesn’t matter. It’s one thing to say we disagree on an ethical question and ultimately it’s up to us to decide. It’s another thing to say there’s no point to the disagreement because ultimately there’s no merit to objecting to anything. You are literally obliged to concede that nothing that has ever been done can be condemned, because the concept of right or wrong is erroneous. It would be exactly like Callan’s quite pointed example above: to be laid off and be unable to support your family would be regarded as no more than “going all sad face” and “bent out of shape” on the same level as a kid losing a video game. If this turns out to be scientific fact rather than philosophical conjecture, then physical cruelty loses its taboo. Nobody currently thinks of it as just a faux pas – how would it have no moral consequences if it became no more than a matter of aesthetic preference? In effect, you’re saying the confirmation of BBT in its nihilistic version will make no difference to any of us.
” In effect, you’re saying the confirmation of BBT in its nihilistic version will make no difference to any of us.”
Exactly.
The reactions you described in your examples strike me as unrealistic.
““Yeah, whatever, I guess.”
“going all sad face”
…etc.
Those aren’t the physical reactions one might expect from a being with mirror neurons. The mechanisms that evolved in our brain are stronger than any philosophical realisation regarding morality. So what would have happened to the torturer? I think, depending on the situation, the nihilist would have tried to stop him as part of the physical reaction to avoid the pain that comes from empathy. But this would have happened without any moral basis. He might think, when his temper cooled down, “The pain is gone. I feel better now.” as opposed to someone with moral convictions: “I have done justice. It was right/good to do this.”
The problem regarding moral nihilism that might arise is the possibility of tomorrow’s scientific advances. The ability to change the physical conditions, to tinker with the brain. Why not cut out your mirror neurons? Why not put yourself into a state of total happiness and moral indifference?
But do you think the urge to raise these questions comes from a nihilistic conviction? Quite the opposite, I guess, because from the nihilistic view, what value is there in total happiness? In the avoiding of pain? None. If you are a nihilist in your heart these things don’t matter to you; not even being a nihilist does.
People will ask themselves the questions above if they want to play with their brains, but if they think nihilism has anything to do with that, they didn’t understand/are misinterpreting this position, or are fooling themselves…or their brain is.
” That is, if nihilism is the position that morality is merely relative, rather than completely meaningless, then what is relativism?”
Maybe I put it in a confusing way in my last post. Relativism says that the sentences of morality can objectively be right or wrong, but they are right or wrong relative to culture, person, …whatever. Nihilism says that moral sentences can’t objectively be right or wrong because the words in them are meaningless.
It’s a difference that shares many similar consequences.
The difference between relieving the pain of outraged empathy, and doing justice, seems like a distinction without a difference to me. Because that’s not saying there is no morality, that’s saying morality is ingrained in our brains, since it begins with an imaginative sympathy with others and the rest is rationalisation. An amoral person thinks nobody else matters, only what he or she happens to want. I agree with you that it’s unrealistic for people to be that uncaring when there’s suffering right in front of them – in fact, I would say yes, exactly. That would seem to support my side of the argument.
I have too many thoughts to offer you two and I know I’m not going to be able to enumerate them coherently – just food for thought, I don’t really know if I have a point. I’ve been thinking of Kreistor and ochlocrat/the two of you; minus Kreistor (so far), three opinions I’ve come to resonate with among the TPB Old Names.
To me, misunderstandings here are born out of misattribution.
Firstly, if Bakker’s right about the BBH, then our history remains, as it would always have been, the history of Greater Brain and Greater Brain’s mirror-neuron manifested noospheric interaction with reality – simply rationalized by a cog (us), in service to its turning, as the explanatory style of intentional agents (regardless of the reality of intention)? Humans still did what they did, are doing what they do, and there is enough history to suggest that the world is just happening, sans the decisions made by historical intentionality, or tyrants & queens.
In Earwa, the Gods seem the compulsions by which humans are moved. In Neuropath and our world, our thoughts and the civilization they affect through mirrored schema are still the manifest expression of the world just happening, like explaining photosynthesis or cycles of H2O.
Which brings me to nihilism. You’ve both hovered around it as you wrote and I can’t advocate getting stuck on the promontory. Just jump past it ;). Nihilism is the go-to scapegoat of this conversation (and the weapon of choice of TPB interlocutors) and it’s hindering the growth of dialogue around here.
Furthermore, nihilism’s been a poor straw man at TPB, something constructed that mostly never seems to reflect nihilism when it arises. BBH has nihilist characteristics, as do many individuals? That simply doesn’t reflect a unilateral change in how we act as defined by morality, as we call it.
And that’s the kicker for me: we call it as we see it. If our history is Greater Brain’s history, then its expression in the world – like the shaping of mountains by erosion or the pooling of water in calderas – has still manifested our human civilization as it is, right? Our revelations don’t make a damned difference to the state of things as they are. Accepting BBH means accepting that we didn’t choose a society in which we act in accordance with what we then call morality.
Just flinging fodder ;).
“Because that’s not saying there is no morality, that’s saying morality is ingrained in our brains…”
To be a bit more precise, ‘morality’ is a term that is used in many ways today and one shouldn’t confuse them. We need to distinguish between (a) morality as such or normative ethics, (b) descriptive ethics and (c) metaethics.
A sentence from (a) might have the following or a similar form: ‘Doing X is right’
(b) deals with how/why people in reality act and is part of psychology.
In (c) people are talking about what it means for a moral sentence like the above to be true or false.
So if you say morality doesn’t exist you make a metaethical statement that can mean (1) there is no morality according to which people try to act or (2) every sentence of morality (in the sense of (a)) is meaningless.
(2) is the position I’m trying to defend here. (1) stands in contradiction to what science tells us, for people indeed try to act according to certain rules.
By saying morality is ingrained in our brains you obviously mean that some “rules of morality” are ingrained in our brains, but are those rules truly moral? Is the fact that our brains evolved the way they did sufficient in a moral way? If you believe in evolution you know that it says that the brain was arbitrarily selected to be this way. So it could have evolved differently. What makes it morally superior to, let’s say, the brain of a wolf? The only way to answer this is by postulating morality “in nature” (as maybe given by a god). But this would mean that there is something morally relevant in nature, like the aforementioned mirror neurons. This leads to the question ‘Is it bad to cut out your mirror neurons?’ (I know that “cutting out” is not a good way to describe it, but I guess you get the picture.)
“An amoral person thinks nobody else matters, only what he or she happens to want. ”
There are many possibilities as to what an “amoral person” might think. He/she might only ignore his/her empathic reactions, which would produce at least a small amount of cognitive dissonance. Or such a person might not have such reactions at all. Why this might be the case or how it is possible is a question for descriptive ethics and, by this, psychology. But this has nothing to do with moral nihilism and morality as such. From a metaethical standpoint again you might ask: “Is it bad to be amoral?”
Sure, and I’m saying not simply that I dispute number 2, although I do, but that BBT as I understand it does not require 2 to be true, which is the main point I’m pursuing. As I say, it seems to me that BBT only shows that we cannot understand our intuitions. So, in this model, it doesn’t follow from the argument that value is, as Scott puts it above, only the lighting up of brains. If there are objective moral truths – that is, if there are acts which if fully understood are contrary to human welfare in a way that is indistinguishable from what we call immoral – then value corresponds to the brain lighting up in contact with them, which, due to our unsatisfactory neural network, only happens haphazardly. Perhaps these truths are part of an organic species-wide system that our brains are too undeveloped to grasp. As per BBT, they come to us through an unknown number of removes, through a fog of confusion. And when they get through all the filters, we receive them in a hazy form, an undefined and indefinable (according to this), though often very strong, signal that a situation is “wrong” or even “evil”. And different brains, struggling to decode a message that will forever be above their security clearance, may come up with different behavioral acts as a result of inevitably garbling the message to varying degrees, but in principle there is a pure version of the message at the source, and a superior alien brain would understand it. So, just as a cat can’t give you a conscious explanation of why it shrinks back from a small space when its whiskers tell it to, neither can we figure out exactly what morality is, but that doesn’t mean that no fundamental mistake is made if the mouse is chased down the hole. As far as brain re-engineering goes, since there are moral facts, the engineering can either take you closer to, or further away from, being able to perceive them (make your brain light up in accordance with them), but it doesn’t make those facts relative.
The pain and distress we feel when we see others being hurt may be an indicator of this – bearing in mind, I’m only asking why it can’t possibly be.
My only point, really, is that BBT opens up the possibility of objective moral facts rather than closes it down. If we already correctly perceive the world, then we’re stuck with relativism or, at a stretch, nihilism. But BBT’s very insistence on the fogginess of our worldview allows for the chance that relativism (our predominant sense of things) is completely wrong and there is a correct answer to the problems we place under the heading of “moral”, but we’re physically unable to access it except in glimmers which we experience as moral intuitions and conscience. The proposition that value might be nothing more than electrical twitches doesn’t seem to me to be demanded by the more central point that we can’t remotely comprehend our place in the world. The one doesn’t necessarily follow from the other. Obviously this doesn’t shed any light on ethics – it leaves all the traditional ethical arguments where they are, in a way. Following BBT, if I knew what was best for the species, I would be transhuman. But meta-ethically it matters and perhaps it moves us sideways in the sense that it shows BBT doesn’t rule out moral facts. It would only mean we can’t know what they are. In the big picture we can’t see, there may be the objective ethical truths that elude our consciousness. That’s very different from saying all morality is a sham.
I think we both come to the same conclusion here, but from different directions. That is, BBT doesn’t rule out morality, much as the theory of natural selection doesn’t rule out the existence of god. So there is no need for people to fear losing all of their precious morality by accepting BBT. I think we can settle for this.
Now, I have a different question: Where do your “moral truths” come from? Do they exist in a kind of Platonic parallel universe? Are they in a way measurable? If so, how?
Ah, well. I don’t have a great deal to say about that, since it’s part of my interpretation of BBT that we’re not capable of answering that question. I suppose I’m imagining them to be roughly along the lines of a connection between what we call morality and what is in fact beneficial for the species, so that it emerges naturally out of biology rather than theology. But only a transhuman would be able to establish that. Humans will have to settle for the negotiations we’re already embroiled in, with the new allowance that rationality is possibly illusory; the consequences of which we don’t know yet.
Sorry I’m so late to the conversation. The blog has been falling thru the cracks of my obsession these past few weeks. I just wanted to chime in that I take staggering, mind-numbing ignorance to be the upshot of BBT as well. The implication of this that I find so terrifying is simply this: Given that our intuitions count for so little, that the brain neuroscience describes could contradict them any which way, what are the odds that the mature neuroscientific picture, when it comes, will cater to our intuitions in ANY way? The obvious answer seems to be, ‘slight.’ Odds are that morality, as a matter of empirical fact, is little more than a metacognitive sop for far more profound (and inhuman) evolutionary processes.
Thank you for still trying to answer.
I, for my part, couldn’t base my own morality on something I have no hope of understanding, but maybe I’m wielding Occam’s Razor a bit too fiercely 😉
Hi Scott,
I think I’m playing advocatus diaboli a bit here, for I’m wading through the same nihilistic muck as you, but:
“Odds are that morality, as a matter of empirical fact, is little more than a metacognitive sop for far more profound (and inhuman) evolutionary processes.”
when you say ‘morality’ here, I guess what you mean is what people see as morality. Neuroscientists can describe the origins of this moral thinking and, by this, show that every decision is only part of a long chain of random events. So there’s no free will, which morality needs. From Cicero: “No obligation to do the impossible is binding.” Every moral system that forces people to do impossible things in order to act in a good way is useless.
From a neuroscientific standpoint, it is impossible for people to act “freely” once neuroscientists can show a complete chain of events leading from a random “physical event” to a person’s action.
Does that mean that everything is allowed, that everybody can do what he/she wants? Obviously not, because the word ‘allowed’ is meaningless, just as the words ‘he’ and ‘she’ become more and more relative. You need to question what it means to be a moral agent. Why should one pile of atoms that we call ‘dietl’ be more relevant from a moral perspective than another pile of atoms that we call ‘stone’, ‘sun’, etc.?
This realisation might be “terrifying”, but where does this leave morality? Looked at this way, all that morality is is a “metacognitive sop”. What was it before? A guide for people, telling them how to live.
Now comes the other side of my rant. The information that we don’t have a free will doesn’t help us. The “we”, “you” and “I” are part of an illusory world that we can’t escape. There is no way out. Mental engineering won’t help. We are trapped inside our reality, all we can do is change it the way we can with the help of science.
So *we* still need to make decisions. *I* am a moral agent despite the above.
People still need to make choices and they still need a guide, even if they and it don’t exist.
This is the worst-case scenario: that we evolved in such a way that we cannot live in accordance with genuine knowledge of what we are. We evolved, in other words, in such a way that we have to wilfully or unwilfully make-believe to remain sane, let alone functional. But Middle-earth ain’t any more real for being believed, unfortunately.
The ‘Metacognitive Argument’ as laid out in “The Introspective Peepshow” really is the lynchpin of the whole problem – and the reason I can’t find my way out. It’s the argument that no one has actually tackled with any seriousness. And it’s a tough nut. There was Tononi’s RA, Eric Hoel, weighing in claiming that the brain could be ‘pellucid,’ that it somehow ‘automatically’ possesses accurate self-knowledge. And then, of course, Chris, simply asserting that evolution demands accurate metacognition on a basis I was never able to figure out, beyond some naive assumption that efficacy is impossible without accuracy. Then a host of others claiming that I was committing a performative contradiction, saying intentional phenomena did not exist as they define them because I had to employ those intentional phenomena as they define them (!) to make that claim. So far it’s been magic, misunderstanding, and question-begging – that sounds harsh, I know, but for the life of me I don’t know any other way to characterize the critiques.
If metacognition is simply a twist on cognition proper (as it almost certainly is) then profound brain blindness is almost certainly the case. If profound brain blindness is the case, then our theoretical metacognitive intuitions are, all things being equal, deceptive through and through. The epistemic dissociation of metacognitive intuitions from neural reality transforms the issue into a numbers game: there are infinitely more ways for brain function to contradict metacognitive intuition than to confirm it.
Ergo, we are in for a rough ride.
The problem with the ‘Metacognitive Argument’ is that it is based on neuroscientific findings. So to “tackle it with any seriousness” would mean to question the basis of it. Maybe we are misinterpreting neuroscience in a way that fits our worldviews or maybe there is something that neuroscientists are overlooking? And wouldn’t that make sense given effects like WYSIATI?
Well, that’s the only reasonable response I see to this, the only way out.
Apart from that, how else can you critique this without referring to magic (which seems like the best of these three), misunderstanding it/ignoring the facts or begging the question?
“Ergo, we are in for a rough ride.”
Yeah, see you at the end. I hope this ride has one… 😉
“The problem with the ‘Metacognitive Argument’ is that it is based on neuroscientific findings. So to “tackle it with any seriousness” would mean to question the basis of it. Maybe we are misinterpreting neuroscience in a way that fits our worldviews or maybe there is something that neuroscientists are overlooking? And wouldn’t that make sense given effects like WYSIATI? Well, that’s the only reasonable response I see to this as the only way out.”
There will almost certainly be myriad hiccoughs before the community settles into some kind of consensus, and critics will leap upon these, as they once did in defense of geocentrism and biocentrism. The question is whether the results will be the same!
In the scientific community, what matters in the end are the facts, and those will prevail. The next generation will be taught without the preconceptions of the old, and those preconceptions will die out with them. The old view will resonate in pop culture and pop science as long as people are interested in it, but the professionals will know better. So I guess the tide will turn one day, maybe it already has, but this prediction relies on a stable society with no major political or environmental changes, so who knows what the future might bring 🙂
“I, for my part, couldn’t base my own morality on something of which I have no hope understanding.” Unless we have no choice… All speculative, of course. Nor am I welded to the idea. Thanks for engaging with it.
You are literally obliged to concede that nothing that has ever been done can be condemned, because the concept of right or wrong is erroneous.
As far as you can tell is erroneous.
But apart from that, what would conceding this result in, with you?
It means there is no basis for objecting to cruelty, other than, as you say, perhaps “taste”.
This sort of thing? What do you mean, no basis?
Other people’s brains have sympathy – the basis is that you can reach out and say that something is wrong, and appeal to the feeling of wrong they have.
“But it’s not real wrong in that case!”
Let’s say the other party has made the same concession already. Okay, you try and say something is wrong, and let’s say they cooperate with you, and let’s say, with your combined efforts, the thing you think is wrong (the torture) is stopped.
Isn’t that what you wanted? Or it has to be Right for it to be stopped – it just being stopped isn’t good enough?
Or perhaps how you take it is if it’s not Right to stop it, then it’s best to let it keep happening?
Does the torture being stopped matter most to you? Or does it being Right to stop it actually matter more than that?
Does being in the Right actually matter more than stopping that act?
Perhaps consider it an utter humility universe, where you don’t get to be in the Right.
You only get to stop the act (which is ostensibly what is the most important to do).
Or is stopping the torture act less important to you than being in the Right?
It might be savagely practical, but stopping the torture is the primary concern to me. Not that I’m not weak and frail and scared and a bunch of other stuff in Cordelia Fine’s ‘A Mind of Its Own’. But regardless, that’s what comes first.
It would be exactly like Callan’s quite pointed example above: to be laid off and be unable to support your family would be regarded as no more than “going all sad face” and “bent out of shape” on the same level as a kid losing a video game.
Ouch man – you read that from such a radically different perspective!
There are TWO ways of treating that as just being a sad face.
You’ve grasped the cynical CEO’s treatment of it. Which is indeed as important as losing a video game.
There IS another.
I know it’s off topic for the main body of the text, but as a fan of the Second Apocalypse I just wanted to offer a ‘keep going’ re: the fiction. Appreciate the position you’re in, and don’t doubt the wait will be worth it.
Good luck with both that, and the PhD.
Oh, right. Hi again all. I had no idea this thread was still going on… I forget they don’t end when there’s a new post! I’ll just add that I think Dietl might be winning me over. I was getting quite stuck on the idea that there might be a happy convergence between our programming and what we call morality. But as you say, Dietl, perhaps it’s all a moot point.
As far as how the social consensus will take it if BBT becomes a fact, I suspect that there will be a huge number of articles and books and speeches defending humanism, arguing that morality carries on regardless and so on, and that many of them will turn out to be shills for corporations and government agencies that are taking full advantage of the latest findings in neuroscience, and that view it as actively to their advantage for the general populace to doubt those findings are real or relevant.
I think I can hear Alex Jones calling me.
I know it’s hard to tell, when a branch of the comment tree has finally withered. I usually look to the last two or three posts.
“…I think Dietl might be winning me over.”
Great! Another person I’ve convinced of the futility of life 😀
Here is something Dennett has to say that might be relevant. He doesn’t convince me at all, but it’s interesting to work out why not: