The Second Room: Phenomenal Realism as Grammatical Violation
by rsbakker
Aphorism of the Day: Atheist or believer, we all get judged by God. The one that made us, or the one we make.
So just what the hell did Wittgenstein mean when he wrote this?
“‘And yet you again and again reach the conclusion that the sensation itself is a nothing.’ Not at all. It is not a something, but not a nothing either! The conclusion was only that a nothing would serve just as well as a something about which nothing could be said.” (1953, §304)
I can remember attempting to get a handle on this section of Philosophical Investigations in a couple of graduate seminars, contributing little more than once stumping my professor with a question about fraudulent workplace injury claims. But now, at long last, I (inadvertently) find myself in a position to explain what Wittgenstein was onto, and perhaps where he went wrong.
My view is simply that the mental and the environmental are pretty much painted with the same informatic brush, and pretty much comprehended using the same cognitive tools, the difference being that the system as a whole is primarily evolved to track and exploit the environmental, and as a result has great difficulty attempting to track and leverage the ‘mental’ so-called.
If you accept the mechanistic model of the life sciences, then you accept that you are an environmentally situated, biomechanical, information processing system. Among the features that characterize you as such a system is what might be called ‘structural idiosyncrasy,’ the fact that the system is the result of innumerable path dependencies. As a bottom-up designer, evolution relies on the combination of preexisting capacities and happenstance to provide solutions, resulting in a vast array of ad hoc capacities (and incapacities). Certainly the rigours of selection will drive various functional convergences, but each of those functions will bear the imprint of the evolutionary twists that led it there.
Another feature that characterizes you as such a system is medial neglect. Given that the resources of the system are dedicated to modelling and exploiting your environments, the system itself constitutes a ‘structural blindspot’: it is the one part of your environment that you cannot readily include in your model of the environment. The ‘medial’ causality of the neural, you could say, must be yoked to the ‘lateral’ causality of the environmental to adequately track and respond to opportunities and threats. The system must be blind to itself to see the world.
A third feature that characterizes you as such a system is heuristic specificity. Given the combination of environmental complexity, structural limitations, and path dependency, cognition is situation-specific, fractionate, and non-optimal. The system solves environmental problems by neglecting forms of information that are either irrelevant or not accessible. So, to give what is perhaps the most dramatic example, one can suggest that intentionality, understood as aboutness, possesses a thoroughly heuristic structure. Given medial neglect, the system has no access to information pertaining to anything but the grossest details of its causal relationship to its environments. It is forced, therefore, to model that relationship in coarse-grained, acausal terms–or put differently, in terms that occlude the neurofunctionality that makes the relationship possible. As a result, you experience apples in your environment, oblivious to any of the machinery that makes this possible. This ‘occlusion of the neurofunctional’ generates efficiencies (enormous ones, given the system’s complexity) so long as the targets tracked are not themselves causally perturbed by (medial) tracking. Since the system is blind to the medial, any interference it produces will generate varying degrees of ‘lateral noise.’
A final feature that characterizes you as such a system might be called internal access invariability, the fact that cognitive subsystems receive information via fixed neural channels. All this means is that cognitive subsystems are ‘hardwired’ into the rest of the brain.
Given a handful of caveats, I don’t think any of the above should be all that controversial.
Now, the big charge against Wittgenstein regarding sensation is some version of crypto-behaviourism, the notion that he is impugning the reality of sensation simply because only pain behaviour is publicly observable, while the pain itself remains a ‘beetle in a box.’ The problem people have with this characterization is as clear as pain itself. One could say that nothing is more real than pain, and yet here’s this philosopher telling you that it is ‘neither a something nor a nothing.’
I too think nothing is more real than pain, but I also agree with Wittgenstein, at long last, that pain is ‘neither a something nor a nothing.’ The challenge I face is one of finding some way to explain this without sounding insane.
The thing to note about the four features listed above is how each, in its own way, compromises human cognition. This is no big news, of course, but my view takes the approach that the great philosophical conundrums can be seen as diagnostic clues to the way cognition is compromised, and that conversely, the proper theoretical account of our cognitive shortcomings will allow us to explain or explain away the great philosophical conundrums. And Wittgenstein is wrestling with what certainly counts as one of the most persistent puzzles confronting philosophers and cognitive scientists today: the question of the ontological status of our sensations.
Another way of putting my position is this: Everyone agrees you are a biomechanism possessing myriad relationships with your environment. What else would humans (qua natural) be? The idea that understanding the specifics of how human cognition fits into that supercomplicated causal picture will go a long way to clearing up our myriad, longstanding confusions is also something most everyone would agree with. What I’m proposing is a novel way of seeing how those confusions fall out of our cognitive limitations–the kinds of information and capacities that we lack, in effect.
So what I want to do, in a sense, is turn the problem of sensation in Wittgenstein upside down. The question I want to ask is this: How could the four limiting features described above, structural idiosyncrasy (the trivial fact that out of all the possible forms of cognition we evolved this one), medial neglect (the trivial fact that the brain is structurally blind to itself as a brain), heuristic specificity (the trivial fact that cognition relies on a conglomeration of special purpose tools), and access invariability (the trivial fact that cognition accesses information via internally fixed channels) possibly conspire to make Wittgenstein right?
Well, let’s take a look at what seems to be the most outrageous part of the claim: the fact that pain is ‘neither a something nor a nothing.’ This, I think, points rather directly at heuristic specificity. The idea here would be that the heuristic or heuristic systems we use to identify entities are simply misapplied with reference to sensations. As extraordinary as this claim might seem, it really is old hat scientifically speaking. Quantum Field Theory forced us quite some time ago to abandon the assumption that our native understanding of entities and existence extends beyond the level of apples and lions we evolved to survive in. That said, sensation most certainly belongs to the ‘level’ of apples and lions: eating apples causes pleasure as reliably as lion attacks cause pain.
We need some kind of account, in other words, of how construing sensations as extant things might count as a heuristic misapplication. This is where medial neglect enters the picture. First off, medial neglect explains why heuristic misapplications are inevitable. Not only can’t we intuit the proper scope of application for the various heuristic devices comprising cognition, we can’t even intuit the fact that cognition consists of multiple heuristic devices at all! In other words, cognition is blind to both its limits and its constitution. This explains why misapplications are both effortless and invisible–and most importantly, why we assume cognition to be universal, why quantum and cosmological violations of intuition come as a surprise. (This also motivates taking a diagnostic approach to classic philosophical problems: conundrums such as this indirectly reveal something of the limitations and constitution of cognition).
But medial neglect can explain more than just the possibility of such a misapplication; it also provides a way to explain why it constitutes a misapplication, as well as why the resulting conundrums take the forms they do. Consider the ‘aboutness heuristic’ described above. Given that the causal structure of the brain is dedicated to tracking the causal structure of its environment, that structure cannot itself be tracked, and so must be ‘assumed.’ Aboutness is forced upon the system. This occlusion of the causal intricacies of the system’s relation to its environment is inconsequential. So long as the medial tracking of targets in no way interferes with those targets, medial neglect simply relieves the system of an impossible computational load.
But despite its effectiveness, aboutness remains heuristic, remains a device (albeit a ‘master device’) that solves problems via information neglect. This simply means that aboutness possesses a scope of applicability, that it is not universal. It is adapted to a finite range of problems, namely, those involving functionally independent environmental entities and events. The causal structure of the system, again, is dedicated to modelling the causal structure of its environment (thus the split between medial (modelling) and lateral (modelled) functionality). This ensures the system will encounter tremendous difficulty whenever it attempts to model its own modelling. Why? I’ve considered a number of different reasons (such as neural complexity) in a number of different contexts, but the primary, heuristic culprit is that the targets to be tracked are all functionally entangled in these ‘metacognitive’ instances.
The basic structure of human cognition, in other words, is environmental, which is to say, adapted to things out there functioning independent of any neural tracking. It is not adapted to the ‘in here,’ to what we are prone to call the mental. This is why the introspective default assumption is to see the ‘mental’ as a ‘secondary environment,’ as a collection of functionally independent events and entities tracked by some kind of mysterious ‘inner eye.’ Cognition isn’t magical. To cognize something requires cognitive resources. Keeping in mind that the point of this exercise is to explain how Wittgenstein could be right, we could postulate (presuming evolutionary parsimony) that second-order reflection possesses no specially adapted ‘master device,’ no dedicated introspective cognitive system, but instead relies on its preexisting structure and tools. This is why the ‘in here’ is inevitably cognized as a ‘little out there,’ a kind of peculiar secondary environment.
A sensation–or quale to use the philosophy of mind term–is the product of an occurrent medial circuit, and as such impossible to laterally model. This is what Wittgenstein means when he says pain is ‘neither a something nor a nothing.’ The information required to accurately cognize ‘pain’ is the very information systematically neglected by human cognition. Second-order deliberative cognition transforms it into something ‘thinglike,’ nevertheless, because it is designed to cognize functionally independent entities. The natural question then becomes, What is this thing? Given the meagre amount of information available and the distortions pertaining to cognitive misapplication, it necessarily becomes the most baffling thing we can imagine.
Given structural idiosyncrasy (again, the path dependence of our position in ‘design space’), it simply ‘is what it is,’ a kind of astronomically coarse-grained ‘random projection’ of higher dimensional neural space perhaps. Why is pain like pain? Because it dangles from all the same myriad path dependencies as our brains do. Given internal access invariability (again, the fact that cognition possesses fixed channels to other neural subsystems) it is also all that there is: cognition cannot inspect or manipulate a quale the way it can actual things in its environment via exploratory behaviours, so unlike other objects they necessarily appear to be ‘irreducible’ or ‘simple.’ On top of everything, qualia will also seem causally intractable given the utter occlusion of neurofunctionality that falls out of medial neglect, as well as the distortions pertaining to heuristic specificity.
As things, therefore, qualia strike us as ineffable, intrinsic, and etiologically opaque. Strange ‘somethings’ indeed!
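For readers who like their metaphors cashed out, here is a minimal sketch of the ‘random projection’ image above, assuming nothing more than a toy high-dimensional state and a fixed, low-dimensional read-out channel (the names and numbers are my own illustrative choices, not anything the theory specifies): countless distinct fine-grained states collapse onto the same coarse appearance, and the read-out has no access to the matrix doing the collapsing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 'neural state': a point in a very high-dimensional space.
n_dims, k_dims = 10_000, 3
neural_state = rng.normal(size=n_dims)

# A fixed random projection down to a handful of dimensions: the only
# 'channel' through which the rest of the system sees this state.
projection = rng.normal(size=(k_dims, n_dims)) / np.sqrt(n_dims)
appearance = projection @ neural_state

# Build a very different fine-grained state whose difference lies entirely
# in the projection's null space, so it yields the same low-dimensional image.
delta = rng.normal(size=n_dims)
delta -= projection.T @ np.linalg.solve(projection @ projection.T,
                                        projection @ delta)
other_state = neural_state + 100 * delta

print(np.allclose(projection @ other_state, appearance))   # True: same 'appearance'
print(np.linalg.norm(other_state - neural_state) > 1000)   # True: very different states
```

The point is not that brains literally multiply matrices, only that any fixed, massively lossy channel will make whatever sits downstream of it look simple, irreducible, and causally orphaned.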
Given our four limiting features, then, we can clearly see that Wittgenstein’s hunch is grammatical and not behaviouristic. The problem with sensations isn’t so much epistemic privacy as it is information access and processing: when we see qualia as extant things requiring explanation like other things we’re plugging them into a heuristic regime adapted to discharge functionally independent environmental challenges. Wittgenstein himself couldn’t see it as such, of course, which is perhaps why he takes as many runs at the problem as he does.
Okay, so much for Wittgenstein. The real question, at this point, is one of what it all means. After all, despite what might seem like fancy explanatory footwork, we still find ourselves stranded with a something that is neither a something nor a nothing! Given that absurd conclusions generally mean false premises, why shouldn’t we simply think Wittgenstein was off his rocker?
Well, for one, given the conundrums posed by ‘phenomenal realism,’ you could argue that the absurdity is mutual. For another, the explanatory paradigm I’ve used here (the Blind Brain Theory) is capable of explaining away a great number of such conundrums (at the cost of our basic default assumptions, typically).
The question then becomes whether a general gain in intelligibility warrants accepting one flagrant absurdity–a something that is neither a something nor a nothing.
The first thing to recall is that this situation isn’t new. Apparent absurdity is alive and well at the cosmological and quantum levels of physical explanation. The second thing to recall is that human cognition is the product of myriad evolutionary pressures. Much as we did not evolve to be ideal physicists, we did not evolve to be ideal philosophers. Structural idiosyncrasy, in other words, gives us good reason to expect cognitive incapacities generally. And indeed, cognitive psychology has spent several decades isolating and identifying numerous cognitive foibles. The only real thing that distinguishes this particular ‘foible’ is the interpretative centrality (not to mention cherished status) of its subject matter–us!
‘Us,’ indeed. Once again, if you accept the mechanistic model of the life sciences (if you’re inclined to heed your doctor before your priest), then you accept that you are an environmentally situated, biomechanical information processing system. Given this, perhaps we should add a fifth limiting feature that characterizes you: ‘informatic locality,’ the way your system has to make do with the information it can either store or sense. Your particular brain-environment system, in other words, is its own ‘informatic frame of reference.’
Once again, given the previous four limiting features, the system is bound to have difficulty modelling itself. Consider another famous head-scratcher from the history of philosophy, this one from William James:
“The physical and the mental operations form curiously incompatible groups. As a room, the experience has occupied that spot and had that environment for thirty years. As your field of consciousness it may never have existed until now. As a room, attention will go on to discover endless new details in it. As your mental state merely, few new ones will emerge under attention’s eye. As a room, it will take an earthquake, or a gang of men, and in any case a certain amount of time, to destroy it. As your subjective state, the closing of your eyes, or any instantaneous play of your fancy will suffice. In the real world, fire will consume it. In your mind, you can let fire play over it without effect. As an outer object, you must pay so much a month to inhabit it. As an inner content, you may occupy it for any length of time rent-free. If, in short, you follow it in the mental direction, taking it along with events of personal biography solely, all sorts of things are true of it which are false, and false of it which are true if you treat it as a real thing experienced, follow it in the physical direction, and relate it to associates in the outer world.” (“Does ‘Consciousness’ Exist?”)
The genius of this passage, as I take it, is the way it refuses to relinquish the profound connection between the third person and the first, alternating instead from the one to the other, as if it were a single, inexplicable lozenge that tasted radically different when held against the back or front of the tongue–the room as empirically indexed versus the room as phenomenologically indexed. Wittgenstein’s problem, expressed in these terms, is simply one of how the phenomenological room fits into the empirical. From a brute mechanistic perspective, the system is first modelling the room absent any model of its occurrent modelling, then modelling its modelling of the room–and here’s the thing, absent any model of its occurrent modelling. The aboutness heuristic, as we saw, turns on medial neglect. This is what renders the second target, ‘room-modelling,’ so difficult to square with the ‘grammar’ of the first, ‘room,’ perpetually forcing us to ask, What the hell is this second room?
The thing to realize at this juncture is that there is no way to answer this question so long as we allow the apparent universality of the aboutness heuristic to get the better of us. ‘Room-modelling’ will never fit the grammar of ‘room’ simply because it is–clearly, I would argue–the product of informatic privation (due to medial neglect) and heuristic misapplication (due to heuristic specificity).
On the contrary, the only way to solve this ‘problem’ (perhaps the only way to move beyond the conundrums that paralyze philosophy of mind and consciousness research as a whole) is to bracket aboutness, to finally openly acknowledge that our apparent baseline mode of conceptualizing truth and reality is in fact heuristic, which is to say, a mode of problem-solving that turns on information neglect and so possesses a limited scope of effective application. So long as we presume the dubious notion that cognitive subsystems adapted to trouble-shooting external environments absent various classes of information are adequate to the task of trouble-shooting the system of which they are a part, we will find ourselves trapped in this grammatical (algorithmic) impasse.
In other words, we need to abandon our personal notion of the ‘knower’ as a kind of ‘anosognosiac fantasy,’ and begin explaining our inability to resolve these difficulties in subpersonal terms. We are an assemblage of special purpose cognitive tools, not whole, autonomous knowers attempting to apprehend the fundamental nature of things. We are machines attempting to model ourselves as such, and consistently failing because of a variety of subsystemic functional limitations.
You could say what we need is a whole new scientific subdiscipline: the cognitive psychology of philosophy. I realize that this sounds like anathema to many–it certainly strikes me as such! But no matter what one thinks of the story above, I find it hard to fathom how philosophy can avoid this fate now that the black box of the brain has been cracked open. In other words, we need to see the inevitability of this picture or something like it. As a natural result of the kind of system that we happen to be, the perennial conundrums of consciousness (and perhaps philosophy more generally) are something that science will eventually explain. Only ignorance or hubris could convince us otherwise.
We affirm the cosmological and quantum ‘absurdities’ we do because of the way science allows us to transcend our heuristic limitations. Science, you could say, is a kind of ‘meta-heuristic,’ a way to organize systems such that their individual heuristic shortcomings can be overcome. The Blind Brain picture sketched above bets that science will recast the traditional metaphysical problem of consciousness in fundamentally mechanistic terms. It predicts that the traditional categorical bestiary of metaphysics will be supplanted by categories of information indexed according to their functions. It argues that the real difficulty of consciousness lies in the cognitive illusions secondary to informatic neglect.
One can conceive this in different ways, I think: You could keep your present scientifically informed understanding of the universe as your baseline, and ‘explain away’ the mental (and much of the lifeworld with it) as a series of cognitive illusions. Qualia can be conceived as ‘phenomemes,’ combinatorial constituents of conscious experience, but no more ‘existential’ than phonemes are ‘meaningful.’ This view takes the third-person brain revealed by science as canonical, and the first-person brain (you!) as a ‘skewed and truncated low-dimensional projection’ of that brain. The higher-order question as to the ontological status of that ‘skewed and truncated low-dimensional projection’ is diagnostically blocked as a ‘grammatical violation,’ by the recognition that such a move constitutes a clear heuristic misapplication.
Or one could envisage a new kind of scientific realism, where the institutions are themselves interpreted as heuristic devices, and we can get to the work of describing the nonsemantic nature of our relation to each other and the cosmos. This would require acknowledging the profundity of our individual theoretical straits, embracing our epistemic dependence on the actual institutional apparatus of science–seeing ourselves as glitchy subsystems in larger social mechanisms of ‘knowing.’ On this version, we must be willing to detach our intellectual commitments from our commonsense intuitions wholesale, to see the apparent sufficiency and universality of aboutness as a cognitive illusion pertaining to heuristic neglect, first person or third.
Either way, consciousness, as we intuit it, can at best be viewed as virtual.
Deleuze would agree. The Deleuzean virtual is not the condition of possibility of any rational experience, but the condition of genesis of real experience.
Deleuzean virtuality (understood as the inversion of possibility, if I remember correctly, so that the ‘actual’ is conceived as a surfeit of virtualities as opposed to a privation of possibilities) is quite different from the ‘virtual’ implied here, which means ‘neither something nor nothing,’ or in other words, something that can only be cognized, because of the inadequacy of cognition, ‘under erasure.’ But then it’s been a looooong time since my (once passionate) love affair with Difference and Repetition and Logic of Sense!
Yea, I was punning on that ‘neither some-thing nor no-thing’ in the sense that Deleuze’s ontology of the virtual (cf. Boundas 1996; May & Semetsky 2008) emancipates thinking from common sense. The Deleuzian object of experience is considered to be given only in its tendency to exist: the very nature of any “thing”, according to Deleuze, is just an expression of tendency, thereby making it “no-thing” rather than an actual “some-thing” given to common sense. Anyway, I liked your post on the cognitive subsystems, almost a machinology of ‘knowing’.
Big Chalmers fan, then? Enjoy a good discussion of pre-reflective consciousness now and again :)?
Scott wrote:
“You could say what we need is a whole new scientific subdiscipline: the cognitive psychology of philosophy.”
I agree. I am actually rather interested in finding any good scholarship on the default philosophical stances of different peoples and cultures as a function of several variables: age, social position, and level of education. If someone did this, I’m sure it would dredge up all sorts of interesting correlations.
Now for some random ramblings…
You’ve argued before that we can re-conceptualize philosophy as a disease. I think there might actually be some truth to this beyond the amusing aphorisms you put together.
Philosophy as disease by analogy:
The immune system can sometimes attack the body when it mistakes self-antigens for foreign ones. If the “second room” of consciousness is a kind of immune system for mental processes, then philosophy might in turn be a kind of auto-immune disorder of the mind. The conscious attacks the normal-mental due to some kind of mistake or error.
Suppose for a second that consciousness evolved as a way for a primate to track the contents of its own mind and thus reject competing belief-systems. We would then see a correlation between activity of this system (“philosophizing”) and a tendency towards disbelieving increasingly commonsensical belief structures. A prediction of this theory is that we would see a higher degree of skeptical philosophizing within cultures exposed to an extremely high diversity of belief structures.
The final stage of this disease would be something like terminal solipsism or perhaps even refusal to believe in any kind of self whatsoever.
*shrug* Posts of this kind tend to look positively embarrassing in hindsight, but meh, I’m bored.
“…meh, I’m bored…”
https://eyewire.org/about
😀
+1. Real cool effort in crowdsourcing. I’ve been doing it, little by little, since Christmas… it’s really taken off in the past couple days. I wish I had more time for it.
I also wish I had more time…and a faster computer. I wonder whose brain it is on eyewire.
OK. I tried it out and got past the tutorial and several real cubes. Here’s what I was thinking the whole time:
“Crowdsourcing” was invented by Tom Sawyer. Hey paint these fences, it’s fun! Look assholes, no matter how much you tell me it is fun to trace dendrites and axons for your project, it’s painting fences. If you want me to do it, you need to pay me.
At first I also expected something more fun… like a real game. But I find it still quite interesting. They should make a jump&run.
I’d hazard that there are efforts you’d engage in for nothing more than interest and progress, Jorge. The data being digested through Eyewire simply could not be processed at the speed it is without the volunteered labour.
There are no efforts I engage in without due compensation.
In this sense, I am the strictest Randian. I will not be made into someone’s apparatus without money being put on the table.
Read: I am a lazy piece of shit.
Lol. Miss your perspective around the forum, Jorge. Happy holidays if you celebrate.
Jorge, do you play board games or video games?
Subtract a few values and that’s paying money in order to work.
You even get the fuzzy no mans land in between, when someone complains about having to mash the X button multiple times to open a door in a video game – it just presses too far out of some value, exiting an unfelt bubble, out into naked work.
A bit of wandering thought on the matter…
And yeah, do come back and post – sure, we can’t pay you in anything but smiley faces, but…! 🙂
It attacks the normal-mental, like for example, the normal-mental of women being kept to the kitchen?
The designation of disease seems to be an utter adherence to the status quo. If you look over at physical mutations, I’m sure many of them could be classed as genetic diseases/genetic defects. Rather like Master Mould from the X-Men comics, you could say all humans are mutants. Diseased.
Traditional science simply catalogs what is. It’s kind of parochial in how it actually adheres utterly to formulating a status quo. Traditional science is free of redemption.
And now I’ve used ‘parochial’ in a sentence, I feel kinda scarfy… 😉
Very enjoyable, I concur almost verbatim with these thoughts and musings (and in the previous posts). Keep them coming! I could not imagine continually putting out such detailed, coherent, and useful thoughts on these issues.
On the concluding question of where to now, I am inclined towards the first, though I think they pretty much amount to the same. Science should eventually run against the limitations of its own heuristics, and thus try to incorporate them, especially as it comes to grips with detailing the brain, for instance. Most of the things we stack in the first-personal, the subjective, should be capable of being detailed by science and, with insight from heuristic probings, show us why qualia feel the way they do. I get torn between the idea that given any brain at any time, there is an intrinsic uniqueness that is “experienced,” or at least that exists in its unique formation, which if “represented” fine-grained enough would be a unique “representation”. In that sense, if some kind of “representing” feature is presenting a good portion of brain structures or processes, there will be a unique structure to that system. But the kicker comes that it is not “qualitative” in the sense that we often associate with that word as regards mind, an association based on all the heuristic factors and previous theorizing that encourages us to postulate qualia in a problematic way. What I am saying is that there may be a unique “subjectivity” or individuality that holds for any particular system that we call brain. Now, we should be able to axe that individuality as being of little use, in a similar way that the individuality of the atomic structure of a rock plays a usually worthless role in science’s predicting the behaviors of that rock. The fact that your brain is unique should not come as a surprise, just as a rock that was somehow accounting within itself for its exact atomic/molecular constituency at that time would find its “self” to be unique.
– By the way, slight typo in first sentence after you quote James.
The challenge I face is one of finding some way to explain this without sounding insane.
“If at first an idea is not absurd, then there is no hope for it” – Albert Einstein
“All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident” – Arthur Schopenhauer
Don’t be discouraged if you find yourself in the absurd, ridiculed, or violently opposed stage – some people seem to skip mockery.
Best be careful ;). Here be shadows of the No-God…
Really liking these blogs. Last two paragraphs are wicked. You’re straying into noospheric heuristics again, or the idea of heuristics embodied and employed – I suggest that with only the loosest sense of metaphor for GB’s agency as it stands – at a social level by a certain number of constituent entities, or persons.
Also, almost done amalgamating all of TPB’s links into one… monumental artifact, despite much of it being useless and misleading, out of post/comment context – though I’ll eventually pare it down to something distilled. Have you recently, or ever, correlated the statistics on sales (amazon sales or whatever other data you have at your availability) to the content of your posts at the time? Seen if what’s topical affects sales?