The Crux
by rsbakker
Aphorism of the Day: Give me an eye blind enough, and I will transform guttering candles into exploding stars.
.
The Blind Brain Theory turns on the following four basic claims:
1) Cognition is heuristic all the way down.
2) Metacognition is continuous with cognition.
3) Metacognitive intuitions are the artifact of severe informatic and heuristic constraints. Metacognitive accuracy is impossible.
4) Metacognitive intuitions only loosely constrain neural fact. There are far more ways for neural facts to contradict our metacognitive intuitions than otherwise.
A good friend of mine, Dan Mellamphy, has agreed to go through a number of the posts from the past eighteen months with an eye to pulling them together into a book of some kind. I’m actually thinking of calling it Through the Brain Darkly: because of Neuropath, because the blog is called Three Pound Brain, and because of my apparent inability to abandon the tedious metaphorics of neural blindness. Either way, I thought boiling BBT down to its central commitments would be a worthwhile exercise. Like a picture taken on a rare, good hair day…
.
1) Cognition is heuristic all the way down.
I take this claim to be trivial. Heuristics are problem-solving mechanisms that minimize computational costs via the neglect of extraneous or inaccessible information. The human brain is itself a compound heuristic device, one possessing a plurality of cognitive tools (innate and learned component heuristics) adapted to a broad but finite range of environmental problems. The human brain, therefore, possesses a ‘compound problem ecology’ consisting of the range of those problems primarily responsible for driving its evolution, whatever they may be. Component heuristics likewise possess problem ecologies, or ‘scopes of application.’
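To pin the term down with a toy example: here is a minimal sketch, in Python, of one classic ‘fast and frugal’ heuristic, a Gigerenzer-style ‘take-the-best,’ which answers a comparison question by consulting cues in order of assumed validity and stopping at the first one that discriminates. The cue names and data are invented for illustration; the point is only that most of the available information is simply neglected:

```python
# A minimal sketch of a "fast and frugal" heuristic (take-the-best).
# The cue names, their ordering, and the city data are invented for
# illustration; only the stopping logic matters here.

CUES = ["has_airport", "is_capital", "has_university"]  # ordered by assumed validity

def take_the_best(a: dict, b: dict) -> str:
    """Guess which of two cities is larger by checking cues in order and
    stopping at the first cue that discriminates; the rest is neglected."""
    for cue in CUES:
        if a[cue] != b[cue]:
            return "a" if a[cue] else "b"  # decide on this single cue
    return "a"  # no cue discriminates: just guess

city_a = {"has_airport": True,  "is_capital": False, "has_university": True}
city_b = {"has_airport": False, "is_capital": True,  "has_university": True}

print(take_the_best(city_a, city_b))  # -> "a": one cue consulted, two ignored
```

The computational savings come precisely from what the procedure ignores, which is the sense of ‘heuristic’ at work throughout what follows.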
.
2) Metacognition is continuous with cognition.
I also take this claim to be trivial. The most pervasive problem (or reproductive obstacle) faced by the human brain is the inverse problem. Inverse problems involve deriving effective information (e.g., mass and trajectory) from some unknown, distal phenomenon (e.g., a falling tree) via proximal information (e.g., retinal stimuli) possessing systematic causal relations (e.g., reflected light) to that phenomenon. Hearing, for instance, requires deriving distal causal structures, an approaching car, say, on the basis of proximal effects, the cochlear signals triggered by the sound emitted from the car. Numerous detection technologies (sonar, radar, fMRI, and so on) operate on this very principle, determining the properties of unknown objects from the properties of some signal connected to them.
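To make the structure of an inverse problem concrete, here is a minimal sketch with invented numbers: recovering the distance of a sound source from the intensity measured at a sensor means inverting a forward model, and the inversion only succeeds given an assumption about the distal cause:

```python
import math

# Toy inverse problem (invented numbers, for illustration only): infer the
# distance r of an unknown sound source from a proximal measurement, the
# intensity I arriving at a sensor, assuming free-field inverse-square decay:
#     forward model:  I = P / (4 * pi * r**2)
# Inverting for r requires assuming the distal cause's power P; the proximal
# signal alone underdetermines the distal fact.

def infer_distance(intensity: float, assumed_power: float) -> float:
    """Invert I = P / (4*pi*r^2) for r (W/m^2 and W in, metres out)."""
    return math.sqrt(assumed_power / (4 * math.pi * intensity))

measured = 1e-6  # proximal effect: intensity at the sensor (W/m^2)

print(infer_distance(measured, assumed_power=0.01))  # ~28.2 m
print(infer_distance(measured, assumed_power=0.04))  # ~56.4 m
# Same proximal signal, different assumed cause, different distal 'fact':
# this underdetermination is what makes inverse problems hard in general.
```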
The brain can mechanically engage its environment because it is mechanically embedded in its environment–because it is, quite literally, just more environment. The brain is that part of the environment that models/exploits the rest of the environment. Thus the crucial distinction between those medial environmental components involved in modelling/enacting (sensory media, neural mechanisms) and those lateral environmental components modelled. And thus, medial neglect, the general blindness of the human brain to its own structure and function, and its corollary, lateral sensitivity, the general responsiveness of the brain to the structure and function of its external environments–or in other words, the primary problem ecology of the heuristic brain.
Medial neglect and lateral sensitivity speak to a profound connection between ignorance and knowledge, how sensitivity to distal, lateral complexities necessitates insensitivity to proximal, medial complexities. Modelling environments necessarily exacts what might be called an ‘autoepistemic toll’ on the systems responsible. The greater the lateral fidelity, the more sophisticated the mechanisms, the greater the surplus of ‘blind,’ or medial, complexity. The brain, you could say, is an organ that transforms ‘risky complexity’ into ‘safe complexity,’ that solves distal unknowns that kill by accumulating proximal unknowns (neural mechanisms) that must be fed.
The parsing of the environment into medial and lateral components represents more a twist than a scission: the environment remains one environment. Information pertaining to brain function is environmental information, which is to say, information pertinent to the solution of potential environmental problems. Thus metacognition, heuristics that access information pertaining to the brain’s own operations.
Since metacognition is continuous with cognition, just another part of the environment engaged in problem-solving the environment, it amounts to the adaptation of neural mechanisms sensitive in effective ways to other neural mechanisms in the brain. The brain, in other words, poses an inverse problem for itself.
.
3) Metacognitive intuitions are the artifact of severe informatic and heuristic constraints. Metacognitive accuracy is impossible.
This claim, which is far more controversial than those above, directly follows from the continuity of metacognition and cognition–from the fact that the brain itself constitutes an inverse problem. This is because, as an inverse problem, the brain is quite clearly insoluble. Two considerations in particular make this clear:
1) Target complexity: The human brain is the most complicated mechanism known. Even as an external environmental problem, it has taken science centuries to accumulate the techniques, information, and technology required to merely begin the process of providing any comprehensive mechanistic explanation.
2) Target complicity: The continuity of metacognition and cognition allows us to see that the structural entanglement of metacognitive neural mechanisms with the neural mechanisms tracked, far from providing any cognitive advantage, thoroughly complicates the ability of the former to derive high-dimensional information from the latter. One might analogize the dilemma in terms of two biologists studying bonobos, the one by observing them in their natural habitat, the other by being sewn into a burlap sack with one. Relational distance and variability provide the biologist-in-the-habitat quantities and kinds (dimensions) of information simply not available to the biologist-in-the-sack. Perhaps more importantly, they allow the former to cognize the bonobos without the complication of observer effects. Neural mechanisms sensitive to other neural mechanisms access information via dedicated, as opposed to variable, channels, and as such are entirely ‘captive’: they cannot pursue the kinds of active environmental engagement that permit the kind of high-dimensional tracking/modelling characteristic of cognition proper.
Target complexity and complicity mean that metacognition is almost certainly restricted to partial, low-dimensional information. There is quite literally no way for the brain to cognize itself as a brain–which is to say, accurately. Thus the mind-body problem. And thus a good number of the perennial problems that have plagued philosophy of mind and philosophy more generally (which can be parsimoniously explained away as different consequences of informatic privation). Heuristic problem-solving does not require the high-dimensional fidelity that characterizes our sensory experience of the world, as simpler life forms show. The metacognitive capacities of the human brain turn on effective information, scraps gleaned via adventitious mutations that historically provided some indeterminate reproductive advantage in some indeterminate context. It confuses these scraps for wholes–suffers the cognitive illusion of sufficiency–simply because it has no way of cognizing its informatic straits as such. Because of this, it perpetually mistakes what could be peripheral fragments in neurofunctional terms, for the entirety and the crux.
.
4) Metacognitive intuitions only loosely constrain neural fact. There are far more ways for neural facts to contradict our metacognitive intuitions than otherwise.
Given the above, the degree to which the mind is dissimilar to the brain is the degree to which deliberative metacognition is simply mistaken. The futility of philosophy is no accident on this account. When we ‘reflect upon’ conscious cognition or experience, we are accessing low-dimensional information adapted to metacognitive heuristics adapted to narrow problem ecologies faced by our preliterate–prephilosophical–ancestors. Thanks to medial neglect, we are utterly blind to the actual neurofunctional context of the information expressed in experience. Likewise, we have no intuitive inkling of the metacognitive apparatuses at work, no idea whether they are many as opposed to one, let alone whether they are at all applicable to the problem they have been tasked to solve. Unless, that is, the task requires accuracy–getting some theoretical metacognitive account of mind or meaning or morality or phenomenology right–in which case we have good grounds (all our manifest intuitions to the contrary) to assume that such theoretical problem ecologies are hopelessly out of reach.
Experience, the very sum of significance, is a kind of cartoon that we are. Metacognition assumes the mythical accuracy (as opposed to the situation-specific efficacy) of the cartoon simply because that cartoon is all there is, all there ever has been. It assumes sufficiency because, in other words, cognizing its myriad limits and insufficiencies requires access to information that simply does not exist for metacognition.
The metacognitive illusion of sufficiency means that the dissociation between our metacognitive intuition of function and actual neural function can be near complete, that memory need not be veridical, the feeling of willing need not be efficacious, self-identity need not be a ‘condition of possibility,’ and so on, and so on. It means, in other words, that what we call ‘experience’ can be subreptive through and through, and still seem the very foundation of the possibility of knowledge.
It means that, all things being equal, the thoroughgoing neuroscientific overthrow of our manifest self-understanding is far, far more likely than even its marginal confirmation.
Thanks for this post and glad to hear the theory will finally emerge in book form. I discuss it in a paper I am working on and I think, in a sense, you are taking up the mantle of ‘right-wing’ Sellarsianism that Churchland sometimes fails to push ‘all the way down’ (Metzinger perhaps goes a little harder, but, alas, there is always the moment of hope before going over the edge in philosophy – even neurophilosophy).
I’m keen to hear more, Paul – especially regarding ‘right-wing Sellarsianism.’ Paul Churchland loses his nerve, definitely, primarily because I think Dennett is such a silver-tongued seducer – and because nobody has a plausible naturalistic way of explaining what intentionality is. Just think of the way things like his prototype activation theory of explanation are summarily dismissed for this very reason. Short of something like BBT, I would argue, Churchlandesque leaps to the brain smack more of an upside-down Kierkegaard than anything… or a call to abandon the Enterprise for a Borg Cube at warp speeds!
I know in my correspondence with Tom he seems to be quite a bit more pessimistic than he comes across in his published work.
‘Right-wing’ and ‘left-wing’ Sellarsianism are occasionally used to distinguish the more naturalist strand (Rosenberg, say, and Churchland as ‘hard right’) from the more normative side as the left (say McDowell and Brandom). I think it’s just used as shorthand when people are giving papers, etc.
I guess with Churchland there is a limit to his radicalism. For instance, toward the end of Plato’s Camera he seems very naïve about the possibility of people speaking in the depth of the scientific image (where he envisages us expanding our sense of, in his example, temperature, etc.). When I’ve seen him speak he also mentions how he and his wife express mood in neurospeak (!). In the end we just get an account of how discovery happens once we shift to the prototype activation model, which looks remarkably similar to Peircean abduction.
I am not surprised Metzinger is more brutally pessimistic in print. There’s a kind of going through the motions when he discusses ethics and I can’t help but cynically feel he drops that in because otherwise the implications of his project are too dark.
Interesting stuff! You find Sellarsian gestures in Churchland, certainly. Rosenberg, for instance, conflates naturalism with pragmatism, which is why he opts for his rehabilitated sense of ‘scientism,’ and although he sees himself in dialogue with Sellarsian attempts to rescue intentionality, I’m not sure I see how he himself could be ‘Sellarsian.’ So where I see Churchland divided (perhaps strategically), adopting pragmatically deflated semantic concepts AS IF they were naturalised, I see Rosenberg as stranded, sticking to his naturalistic guns, arguing that it is up to neuroscience, not philosophers, to explain where intentional discourse goes wrong. And I see BBT, of course, as hypothetically bridging this chasm. The thing I like so much about Metzinger is that he is thoroughly invested in the consequences of his theory: I really get the sense of watching a bunch of very bright kids assemble Lego whenever I read Anglo-American philosophers on the issue of eliminativism.
Speaking of which, is there anyone you recommend that I read, Paul?
Yea, I second that… hope to see the book. Sometimes your posts remind me of Jorge Luis Borges’s The Aleph: in Borges’s story, the Aleph is a point in space that contains all other points. Anyone who gazes into it can see everything in the universe from every angle simultaneously, without distortion, overlapping or confusion.
Except that you reverse the procedure: the universe is gazing into us and all it sees for that angle is total distortion and confusion. 🙂
You make confusion sound way too cool, noir – or perhaps as cool as it should be.
Just call it “The Blind Brain Theory”. That’s a way better title …
But then I would have to see the bloody thing on my shelf all the time. I actually like the idea, and I’m actually glad I picked something prosaic to name the theory. But I absolutely despise the thing as well. It’s hard to explain…
I read this blog religiously and I still plan on buying this book.
Yes. It is far easier to desecrate a book than a blog…
Thanks for the upvote, Dave. This is new territory for me.
Yes, count me in, too. I love Gothic terror, and this blog kind of terrifies me. Still, I can’t stay away. I’d buy this book without a second thought.
Great idea! I assumed you were working up towards this given the in-depth nature of the posts recently. If I may offer up a humble suggestion… Along with the more academically rigorous explanations and postulates of BBT that will form the core of such a book, it might be worthwhile to do a boiled-down “basics” version for each post or section of the book (or however it is split) placed at the end of the chapter (or whatever). This would benefit those of us who have no philosophical or neuroscientific training, formal or otherwise. This post here was a good example of such, but I would take it even further, making sure to scrub as much jargon (“medial neglect”, etc) out of it as possible. This might help widen appeal beyond the obvious academic and scientific circles to a more general class of reader.
As one of those with no such training in the subjects (aside from what I read here) who often struggles with the jargon and references to philosophical fundamentals I have no knowledge of, I admit this is something of a selfish suggestion. 😉 But I think it would be a worthwhile exercise in any event.
But then it would be Through the Brain Lucidly! It’s a good idea, though. One of the things I appreciated about Metamagical Themas, way back, was Hofstadter’s post-scripts, with his clarifications and updates.
Reading Scott’s own review of Tononi’s “PHI”:
https://rsbakker.wordpress.com/2012/09/11/a-brick-o-qualia-tononi-phi-and-the-neural-armchair/
The book is printed on heavyweight paper, is filled with photographic plates, and has a shipping weight of over three pounds.
For Scott I would suggest something similar and entitle it Three Pound Book, The Ultimate Philosophic Theory Ever Written…in Canada.
Use a second author to summarize each chapter in layman’s terms entitling them, WTF is Scott Talking About (WTFISTA).
Just as Tononi used Galileo and others as interlocutors; Scott’s interlocutors, deceased: Descartes, Aristotle, Plato; and disguised living: Dr Ben Bennett, Dr Harlun McGunn, Dr Thomas Ratzinger… Include a DVD with live discussions of WTFISTA using the cast of Family Guy.
You forgot the Pope! How could you forget the Pope?
I was actually thinking of skipping the content part, and just including a mirror, brush, and a very, very sharp straight razor.
VicP wrote:
“Three Pound Book, The Ultimate Philosophic Theory Ever Written…in Canada.”
If this is the title, it needs to have a picture of a beaver wearing Morpheus sunglasses, dodging bullets in a tobacco field.
“I was actually thinking of skipping the content part, and just including a mirror, brush, and a very, very sharp straight razor.”…definitely not the Kindle Edition
How about a beaver in sunglasses dodging the gears of Leibniz’s Mill?
I’m actually thinking of calling it Through the Brain Darkly:
Then Alice saw a bottle upon the table, labeled ‘Think me’, and Alice did. Alice became very, very big! But then Alice saw that it was a reflection of how very, very small she was.
Okay, it’s a stretch to take it over to ‘through the looking glass’ – so shoot me…
I like it.
Scott’s other interlocutors:
Dr Cyrus Unworthy: Philosophy professor at a southern American university
The Templelands: Husband and wife professors and authors
God: An historical figure who appears throughout history played by himself
Albert Einstein: Explains to Scott the Mother-In-Law Theory of Relativity. From her point of view her daughter is moving much faster away from her than she appears to be moving away from her son-in-law. A low level informatic projection.
Photographic Plates:
Christopher Columbus reading maps 40 days into the 1st voyage.
Christopher Columbus landing in the new world explaining this is India
John Wilkes Booth standing behind Lincoln in Ford’s Theatre
Donald Rumsfeld advising President Bush
President GW Bush explaining his new vision for the Middle East
Big +1 on Through the Brain Darkly. Congratulations, Bakker.
Danke, Mike!
http://www.salon.com/2013/04/22/how_can_the_brain_understand_itself/
Thanks for this, ochlo.
I’ve read it all, but I’ve lost track of what Scott’s goal is these days. A year ago I think I could follow more easily the way he was developing ideas (mostly, I’ve lost the narrative, not the abstract theory).
I’ll give a few thoughts about what those points suggest to me:
1) Cognition is heuristic all the way down.
The way I understand it, “heuristic” applied to the brain means a reduction of complexity through a series of filters. These filters are essentially patterns that work in automatic ways. The point here is that the huge amount of data is filtered and selected. As this process goes through a number of passes, the amount of data becomes manageable.
The observation I’d make here is that these filters probably aren’t set in stone. The fact that they work on autopilot doesn’t mean that they lack plasticity. There’s also the creationists’ argument here: irreducible complexity. The fact is that the brain generated these complex filters, so if it generates them then it also has the capacity to override them. So there’s both a possibility of plasticity across generations, as well as plasticity that may happen within a single life. Patterns in the brain form because of repetition, which means that it’s not impossible that the process isn’t solely “bottom-up”.
Obviously the lower you go, the more ingrained the pattern is. But it’s possible it’s still a fluid process.
2) Metacognition is continuous with cognition.
This sounds similar to the example I wrote in a previous comment where I compared the brain to a sea. Metacognition, the way I see it, is about self-representation. That problem of “strange loops”, or the Gödel paradox (the observer including himself in the object of observation). But this line simply affirms, I think, that metacognition is just cognition, and there’s no definite line separating them. That also corresponds to the example I made (it’s just/all water).
3) Metacognitive intuitions are the artifact of severe informatic and heuristic constraints. Metacognitive accuracy is impossible.
This, I guess, is very easy for everyone to accept. If metacognitive = consciousness, and, in my example, consciousness = surface water, then it simply means that the metacognitive is the tip of the iceberg, after all the filtering of point 1 has already happened. “Accuracy” very obviously isn’t a goal, merely because if the goal were accuracy then using “heuristics” wouldn’t work. They are at opposite ends of the spectrum.
Though this doesn’t touch the real deal: the fact that lack of accuracy doesn’t equal worthlessness. It actually confirms the idea that the filtering of data was absolutely necessary.
It’s like the difference between a shotgun and a sniper rifle. The shotgun has its uses. This also restates the problem of reductionism, or even the hard problem. Are patterns observed at a higher level still relevant if we don’t know exactly how to reduce THEM ALL to the most basic level?
If we agree that science works even if it still hasn’t solved the problem at the lowest level, then we should agree that the limited picture that consciousness has is still largely valid even if it lacks “reduction”.
4) Metacognitive intuitions only loosely constrain neural fact. There are far more ways for neural facts to contradict our metacognitive intuitions than otherwise.
And this is also mostly accepted too. But it’s linked to point 3.
The metacognitive capacities of the human brain turn on effective information, scraps gleaned via adventitious mutations that historically provided some indeterminate reproductive advantage in some indeterminate context. It confuses these scraps for wholes–suffers the cognitive illusion of sufficiency–simply because it has no way of cognizing its informatic straits as such. Because of this, it perpetually mistakes what could be peripheral fragments in neurofunctional terms, for the entirety and the crux.
Depends on whether “scraps” were actually carefully selected for relevance, or really are just scraps.
It’s not a problem of disproportion, but of demonstrating that what nature selected is not what’s important.
I guess I’m not sure what it was you thought you found, Abe, so I’m not sure what you lost! Just a few points: Regarding (1), ‘filtering’ is one thing that heuristics do, I suppose (I think there are better metaphors), but you seem to be conflating it with heuristics. As for heuristics being learned as well as innate, that goes without saying, I think. Regarding (2), metacognition on this account isn’t about ‘self-representation’ at all. There’s no representation on BBT, though there is structural recapitulation. The conceptual power of the medial/lateral distinction is that it allows for the demystification of ‘strange loops,’ which turn out to be not that strange at all. Regarding (3), the metaphor of the iceberg/water surface is faulty insofar as it implies that the information metacognized is all together at the ‘top,’ rather than scattered willy-nilly throughout the functional food chain (which is far, far more likely). It’s efficacious, sure, but efficacious for what? This is the crux of the accuracy/efficacy distinction. BBT acknowledges that the information available (via consciousness) for metacognition has a function, it just points out the obvious fact that this function is almost entirely inscrutable to metacognition. Are the ‘patterns observed at higher levels’ relevant for the reasons metacognitive intuition suggests? Well, first, get rid of ‘higher levels’ because, again, we have no way of knowing where in the food chain the information comes from, let alone that it is ‘higher’ or belongs to a single ‘level.’ Second, we have no reason to trust our metacognitive intuitions! As for (4), this is not at all ‘widely accepted’ (any more than (3) is!). Again, what do you mean by ‘relevant’ or ‘important’? For sharpening flints? For embarrassing sexual rivals? This is the whole point: the ‘efficacious for what?’ question has no clear answer, save that theoretical metacognitive deliberation – philosophical reflection – is almost certainly not this ‘what.’
The idea previously had an aura of seductive mystery: the suggestion that the actual world is completely alien and unknown, that we are supremely deceived, and that “in the dark” lurks what we can’t even imagine. Like the enormous unknown kernel.
The problem is simply that the way you formulate The Idea these days is extremely interesting for those who already study it and have experience with it, or do so as part of their job. It’s much harder to “dress” it in seductive ways and in a powerful narrative (that is not a variation on technocracy). So while the previous formulation had power over everyone who listened to it, this one is technical and interesting for those who can get as deep.
There’s no representation on BBT, though there is structural recapitulation.
This is, for example, one of those technicalities that I understand are important for the theory (since the thesis is that it’s “just environment”, so a “structural recapitulation” means there’s no divide, with representation moved to a different place), but I somewhat fail to understand how it’s a relevant distinction if one considers the context. Maybe here I have a “resolution” problem, so that I can’t perceive well the details you describe, and so in my own heuristic understanding of it, it doesn’t present relevant differences.
So for me it seems more like a case of “let’s use this other technical term since it’s just more accurate than ‘representation’ when used in this other context, even if they are pretty much the same”.
Obviously you don’t have to pamper my lack of understanding. I’m merely providing feedback however useful it may be 😉
I confess lots of confusion when you go into the details of “lateral sensitivity” and “the primary problem ecology”.
You say “metacognition on this account isn’t about ‘self-representation’ at all”, but I still understand “structural recapitulation” in terms of a model being built.
the metaphor of the iceberg/water surface is faulty insofar as it implies that the information metacognized is all together at the ‘top,’ rather than scattered willy-nilly throughout the functional food chain
well, in the metaphor the idea of consciousness on top wasn’t about it gathering together in a location, but more about the idea of “intensity”. So while this happens everywhere, it only “surfaces” when it gets particularly intense.
“The top” refers to importance, insofar as it could be true that strong intensity means conscious.
It’s efficacious sure, but efficacious for what? This is the crux of the accuracy/efficacy distinction.
I still see this also in relation to reductionism in science. If it’s efficacious, then it is for what is emergent. Lacking perfect knowledge of the substrate, we map the emergent level (same as we map all we see in consciousness), and come up with rules that work describing what is going on (this is similar to the idea that we perfectly know what happens in “experience”, while science still has to figure out higher and lower levels, which was in that video I posted a few months ago).
Consciousness = perceivable reality (in different contexts, but similar patterns). Unconscious = reality going beyond the senses
But then we had this discussion and I always get stuck when you basically tell me to replace what I know, with a giant “?” (since you can’t say what an accurate representation would be like).
As in writing a story, you can do it with various degrees of accuracy and detail. Even if the goal of the story is presenting the same overall pattern. You can show this pattern in 10 pages, or 100, but it’s the same pattern, overall goal.
If you see something with your eyes, then write it down, it will never be accurate, but it would still communicate an idea that is potentially equal or similar.
If we take one step below consciousness we see the brain working in even more symbolically charged ways, as in dreams. So there seem to be “stories” in consciousness as well as in the pre-conscious. The idea is that these patterns are the result of a number of heuristics that distill that particular “meaning” out of complex data. Then, at whatever level, the brain further examines the emerging pattern.
If on one side you have mechanical heuristics, on the other you have a simplification that reduces complexity to a single story that is deemed relevant. As far as evolution is concerned, the brain did its best to isolate potentially relevant stuff, organizing it into a “picture”.
Now we argue, is this picture a good one?
It seems this line of thought ends up being about how good heuristics are. It’s as if you’re saying “our heuristics, as results of evolution, are not quite so well designed and greatly deceive us”. And this also means that the “you” in consciousness, as an environment that analyzes itself, decided that it could eventually design itself better than evolution.
However transformed and reshaped, this is dualism. Because it means that a complex environment can employ a different modality (from evolution) to design itself. It’s not dualism as in different “levels” and rules, but it’s a different modality. It’s a goal achieved through complexity, and so a goal achieved by this “next level” whether or not it is effectively dualistic, or merely just the same but shaped in a new way.
If at this point you simply call this complex environment a “human being”, and cut off from the model everything below a certain picture-level heuristic, you’d have the exact same description, without accuracy, but with the same overall point.
Do with this what you want, I keep clumsily tripping over the same points 😉
I’m afraid I’ve never understood what you meant by ‘dualism,’ Abe.
You really need to understand heuristics to understand the position at all, I fear. How about this: The ‘goodness’ (or effectiveness) of a heuristic is relative to the problem it was adapted for. Theoretical metacognition almost certainly had no role to play in that adaptation. The information that theoretical metacognition accesses (likely in some degraded form) is effective for… What? Let’s say, ‘embarrassing sexual competitors.’ So, what are the chances this information will provide theoretical metacognition with what it needs to make accurate judgments regarding the ‘nature of consciousness’? Almost zero. This simply follows from the way heuristics work, which is by adapting themselves to the specific structure of a certain problem ecology (in this case, the problem of ‘embarrassing sexual competitors,’ say, by showing oneself more ‘emotionally expressive’). The same way the ‘feeling of lust’ need tell us nothing about the evolutionary centrality of reproduction to be effective, this ‘feeling X’ need not tell us anything about the ‘nature of consciousness’ to be effective. In fact, odds are it will be thoroughly deceptive.
I think I understand that heuristics worked to figure out the external world, and that they suck at figuring out metacognition simply because they weren’t built that way.
You don’t like that I use “filters”, so I could use “pattern recognition”. As if these heuristics are built to find certain specific patterns, and when they do they pass them on for further elaboration.
But I don’t even understand why metacognition should be concerned with the busywork at the bottom level. It doesn’t even matter to figure out if this happens in a specific place; as long as you agree that consciousness only accesses a very small subset of information, this means that you can also draw an ideal cut-off. However risible (or wrong) this amount of information is, we built a working model with it, the same as we built a scientific model of reality even if we grasp a very tiny fragment of it.
It’s all heuristics all the way up (like turtles), but we draw a line past a certain point where these heuristics are somewhat accessible by consciousness. In the same way evolution has given us five senses and not more, so consciousness is given access to a certain type of heuristics, or information. Evolution made the calls.
The same way the ‘feeling of lust’ need tell us nothing about the evolutionary centrality of reproduction to be effective
Why not? We have a model for reference.
We understand how we work, the need to eat, sleep, reproduce. We map most social behaviors to certain degrees. All this is done even if we don’t have accurate data, but we rely on it in the same way we rely on science: because it works to predict things, making them useful. As long as the theory produces workable effects, we’re happy.
We have mapped the “feeling of lust” to reproduction. We did so because we built a model that is accurate enough to predict certain outcomes. Different outcomes would require perfected, more detailed models, the same as happens when a scientific theory is revealed to be incomplete.
‘nature of consciousness’ is not too important to me, because I accept the vagueness of the explanation: it’s simply the “feel” of a type of “one and boundless” information space. I think you described this well enough to explain the way the qualia works. This is not the point I argue, I argue the problem of the model we have as working human beings. Actions and stuff.
Let’s say this: most human beings are fine with the vague model of consciousness they have (like most human beings do fine without knowing quantum mechanics). You don’t; you want to know how the small pieces are precisely arranged, and there goes your attention and work. But how, in the end, can these two points of view build completely different models of reality? You don’t seem to put at stake what’s *in* the box (because that’s where I find it easy to agree with all you explain), but what is *outside*. You aren’t arguing how what’s in the box works, but how everything outside depends on it.
Or: it’s like your personal theory of quantum mechanics doesn’t simply “complete” the current model of reality, but utterly REWRITES it. So what I do not understand is how your theory gets these rewriting powers, instead of being a theory of the mind that ends up with the same results on the surface.
To also butt in a bit here and add to Callan, in your examples the problem is that the person who drives the car effectively without knowing anything about its construction/behind-the-scenes operation (or the person without a technical knowledge of quantum mechanics) is much blinder, on the BBT account, than the average person in that regard. So using your car example, it’d be more like one person, a neuroscientist with a boner for Metzinger parallel to the mechanical and electrical engineering double PhD designing futurecars, and then another person who thinks they know that an imaginary team of gnomes are under the hood, rather than your average driver who at least knows an engine is in there that operates somehow. They’d also have to believe that pressing on the pedal released a magical vapor that spurred the gnomes on with renewed vigor each time, and that turning the wheel indicated to the badgers the gnomes have running inside all your tires like rodents in an exercise wheel where you want to go via tracklights inside each tire.
I’m not sure I understand the difficulty you’re having with heuristics. And I’m not sure what you mean by ‘bottom level.’ Prinz has a great argument he draws from Jackendoff about consciousness always being in the ‘middle’ in terms of information generality. I get that sort of hierarchy. But you seem to be assuming that consciousness is the peak/surface/what have you.
And again, as your car analogy seems to show, you seem to be missing the point. There is no driver. That’s simply the picture cobbled together (via meta-car-gnition!) from glimpses of a variety of pieces scattered across the machine – a cognitive illusion. The homunculus. The car does nothing for the driver because there is no driver. If this isn’t a rewrite, I don’t know what is!
Budging in a comment…
Abalieno,
But I don’t even understand why metacognition should be concerned with the busywork at the bottom level. It doesn’t even matter to figure out if this happens in a specific place; as long as you agree that consciousness only accesses a very small subset of information
It sounds like you’ve got an upstairs/downstairs model?
Consider an idea where it’s all happening at the bottom level. Consciousness isn’t accessing that bottom level. It’s amidst it. There is nothing more than the bottom level. In terms of upstairs/downstairs, upstairs is completely empty. There is no consciousness there, getting tea sent up to it. If there is a thing called consciousness, it’s not aristocracy – it’s amongst the sweaty masses. More sweaty mass.
I meant “bottom-up” simply as a continuation of the example.
Say the visual information is much richer than what arrives in consciousness. So I’m simply saying that all information is reduced and already pre-packaged before it enters consciousness. Hence I said “bottom” simply because there are a number of previous steps. It doesn’t really matter if these steps happen “over” or “under” consciousness. At least they happen at a different moment, so they can be ideally separated.
It’s simply to say that the information accessible by consciousness is a tiny fragment of what’s available. And that’s the same thing BBT describes, so there shouldn’t be any problem.
Then I was simply making an analogy with science. The same as we access a tiny fragment of information in our brain, and with this we build the best model of reality we can, so through five senses we gather information that is a tiny fragment of the real world, and with that we build scientific theories that describe how reality works.
BBT adds detail to the model of the brain. It explains more ACCURATELY the small engines.
I was then simply arguing how this model not only adds detail, but also has deep consequences for everything outside. Why it INVALIDATES models at higher levels.
Again, the whole point is to think of this as it happens in normal science, when there’s a new theory that explains a “smaller” phenomenon. How can this have a strong consequence also for all bigger phenomena, so as to invalidate laws about them?
I think I understand now. Scientifically speaking, BBT allows the incorporation of intentional phenomena into the mechanistic paradigm of the life sciences. It does so, however, by effectively explaining intentionality away. Since all traditional understanding of mind/soul/consciousness begs intentionality, it explains them away as well. The only thing that changes scientifically, is that we have a more complete picture: the great conundrum posed by intentionality is undone. Our everyday and traditional understanding, on the other hand, are revealed as kinds of metacognitive illusions possessing deeply uncertain functionality. Since this understanding was a mess to begin with, there are no ‘laws’ (in the scientific sense of the term) to be undone.
Again, I don’t argue precisely the point of intentionality or the qualia. I know these are the most important goals, but it’s the stuff I tend to agree with, so it’s not the problematic aspect for me.
I’m more concerned with how reliable the model is. Even if we stare at it passively (and so intentionality could be out of the picture), what’s at stake is the reliability of this model. While intentionality itself, I assume, is reducible to the skyhook example.
But is it still a hook that moves things around, whether moved from the sky or from the ground, or however wired? Or does it transform?
If we obscure the box of the brain, does BBT have an impact on something else? Or is it merely a description of what’s inside the box and circumscribed by it?
The “more complete picture” is about the mechanics of what’s inside the black box. I understand this and tend to agree. But has this any impact on anything outside the box?
Let’s take “water” as an example. You can describe the phenomenology of water as you see it appearing; that’s how kids learn it. How it moves, how atmospheric pressure affects this movement, how it gathers, evaporates at a certain temperature, turns to ice at another and so on. And then you can move down to a more accurate model, examining how the chemical parts behave and give rise to what was previously described. Is it then possible that the chemical laws you end up finding can also CHANGE or INVALIDATE the properties of observable water as previously described?
How can you say, with regard to BBT, that by studying the chemical parts of water you realized that the observable behaviors are all wrong and misperceived?
But how can a hook ‘move things around’ when it doesn’t exist? The suggestion that it nevertheless does what it is ‘supposed to do’ whether real or not simply makes no sense. The feeling of willing is a cognitive illusion, and yet we are still in charge of our destiny… ?
You seem to be confusing the fact that the brain does what it is adapted to do regardless with the efficacy of our theoretical metacognitive presumptions regardless. Like I say, this simply misses the whole point. To use your analogy, BBT says there never was any ‘water’ in the first place. It’s not simply a matter of scientifically redefining the mental the way science has redefined water (which was still dramatic enough), but of scientifically explaining away posits the way science explained away elan vital or aether or phlogiston. What we call the ‘mental’ is actually the brain. It is as much an ‘illusion’ as any magic act you have ever seen, insofar as the absence of information tricks cognition into thinking it is something else. No matter how you cut it, we are profoundly mistaken. The problem is that we happen to be hardwired into this particular magic show.
This is what makes it so difficult to swallow. It is well and truly indigestible. Human conceit being what it is, there will be no shortage of people who will game ambiguities in an attempt to make things more palatable, who will argue the identity of mind/brain, as it is being called more and more often in the literature. But again, as with magic, the actual functional context of the information integrated into conscious experience is utterly occluded, and the metacognitive judgments made are every bit as faulty. The mental, in other words, is as real as any illusion.
Perhaps if the only water you could ever study was always contaminated by something else and you could never get an uncontaminated sample, the theory reveals that the observations of ‘water’ are false in treating it as just one element involved (not sure if ‘synecdoche’ covers that idea or not). And this also covers inconsistent behaviours of ‘water’, given that particular observations were actually of different materials at different times of observation, yet all were treated as water, even as the overall behaviours observed were not consistent.
The feeling of willing is a cognitive illusion, and yet we are still in charge of our destiny… ?
I’m asking something different.
Whether the observation of willing corresponds to something the brain does, or not.
If I’m drinking from a bottle, this action is built by the brain. To drink (because water is necessary and so on). Does BBT confirm or deny this? Does it confirm the idea of the world we have, or deny it? Regardless of what it says about agency or intentionality or who’s in charge.
The brain as a black box. It doesn’t matter HOW, but does it do the things we think it does on the outside? Or is it unknown?
I’m putting aside the “who’s in charge” aspect, and asking if whoever is in charge at least does what we see it does, for reasons we can figure out.
If you’re asking whether the brain does what the brain does then the answer is of course. If you’re asking whether the brain does what it does in the WAY metacognition leads us to assume, then the answer is very likely not.
If you’re asking whether the brain is a mechanism, which is to say, something which operates (that is, does things) in a manner consistent with nature more generally, all I can ask is what else could it be? BBT is a mechanistic theory – like any other in the life sciences. But it is only a speculative sketch claiming that various functional constraints that follow from the mechanistic nature of the brain can explain a number of long-standing metacognitive peculiarities.
If you’re asking whether the brain is a mechanism, which is to say, something which operates (that is, does things) in a manner consistent with nature more generally, all I can ask is what else could it be?
Fine. That gives me enough of a foothold.
Does it mean, then, that you could ideally develop a model of human behavior consistent with nature?
Just external observation, describing how human beings live, how they develop relationships and so on.
Which means that at some point you have to realize that this model you carefully deduce from external observation is at least similar to, or coherent with, the model you have of your own behavior in your own consciousness. They are alike; you can find yourself behaving in similar patterns, recognizing yourself in it.
Which means that maybe we mistake “how”, but not “what”. The “what” is that external naturalistic model we can grasp. The “how” is about how the process actually comes together.
We mistake the ‘what’ to the degree to which we reify the mental (and there are all kinds of subtle ways to do this). We mistake the ‘how’ to the degree to which we fail to see normativity as an artifact of informatic neglect, which is to say, the degree to which we fail to see ourselves as mechanisms.
Developing a model of human behaviour consistent with nature (mechanism) has always been the holy grail. Since we are compelled to theoretically metacognize our behaviour in intentional/normative terms, and since these terms are thoroughly incompatible with mechanism, no one has been able to develop anything approaching a convincing way to naturalize human behaviour. This is what I think makes BBT so significant – and disastrous.
Or, at the risk of sounding really dumb: you don’t need to know all the parts of an engine to be able to drive a car.
If you were to write down a description of the car, this description would be quite INACCURATE. Yet the “emergent” level is that you use it to move between places, and it works by the use of wheels.
An average guy who doesn’t know how the car works and an engineer have different degrees of accuracy in their ideal car models. But the car does the same things in both models: it carries people and stuff around.
How could a theory of the car REWRITE the purpose and reality of what the car does for everyone?
If I’ve followed properly, then the Volkswagen we all think we’re driving could be a jet, a lemon, or a train…
Or, more accurately, the Volkswagen we all think we’re driving we can SPECULATE could be a jet, a lemon, or a train…but it’s actually a Flintstone’s style vehicle or no vehicle at all and we’re running in place drooling and thinking about fantasy vehicles.
robots in disguise…
Check out this article:
http://www.slate.com/articles/health_and_science/science/2013/05/weird_psychology_social_science_researchers_rely_too_much_on_western_college.html
If you click through to the actual journal article (Behavioral and Brain Sciences), there is a comment by Stephen Stich on Philosophy and WEIRD intuition that may be of interest, and is related to BBT insofar as relying on “intuitive judgements for or against philosophical theories” is a dumb thing to do for all kinds of reasons, including how WEIRD we are, relative to most humans.
Ayuh. Stich is the man.
Have you read Antibodies by David Skal? It’s a very strange novel, almost like a clockwork Neuropath that tried to tread similar ground at a time before neuroscience had matured to a point to allow your pessimistic inductions to be clearly sketched out. I didn’t think it was possible for anything to create a two way deja-vu contest between Neuropath and Jacqueline Susann’s The Love Machine of all things.
[…] Brain Theory (BBT) possible? (Bakker has recently expressed the “crux” of the theory here). It would seem difficult to reconcile the de-throning of science from its central position as […]
Posting way, way late on here, but I missed this the first time round. I like a lot of the basis of this theory, but the one thing I’d be picky about is the touch of Evo Psych that raises its head every so often, e.g. “The metacognitive capacities of the human brain turn on effective information, scraps gleaned via adventitious mutations that historically provided some indeterminate reproductive advantage in some indeterminate context”. Now I’m not saying that there won’t be adaptive explanations for various aspects of consciousness, but I think it’s also valuable, if you haven’t already, to raise the possibility that some of these aspects might just be spandrels that have historically arisen and been retained due to their either effecting no selective bias or else co-occurring with selectively favoured traits. Maybe it’s the Nietzschean in me (humans as ‘sick animals’, etc., etc.) but whilst I do find it persuasive to argue that human cognition in general has been a major factor in our evolutionary success, certain aspects of human cognition strike me as non-adaptive or even as maladaptive (though not in the sense Cosmides and Tooby would have it, i.e. these parts have never been adaptive but have been perpetuated due to their association with the selectively favoured whole shebang).