Out-Danning Dennett
by rsbakker
The idea is this. What you take yourself to be at this very moment is actually a kind of informatic illusion.
For me, the picture has come to seem obvious, but I understand that this is the case for everyone with a theory to peddle. So the best I can do is explain why it seems obvious to me.
One of the things I have continually failed to do is present my take, Blind Brain Theory (BBT), in terms that systematically relate it to other well-known philosophical positions. The reason for this, I’m quite certain, is laziness on my part. As a nonacademic, I never have to exposit what I read for the purposes of teaching, and so the literature tends to fall into the impressionistic background of my theorization. I actually think this is liberating, insofar as it has insulated me from many habitual ways of thinking through problems. I’m not quite sure I would have been able to connect the dots the way I have had I been chasing the institutional preoccupations of academe. But it has certainly made the task of communicating my views quite a bit harder than it perhaps should be.
So I’ve decided to bite the bullet and lay out the ways BBT overlaps and (I like to think!) outruns Daniel Dennett’s rather notorious and oft-misunderstood position on consciousness. For many, if not most, this will amount to using obscurity to clarify murk, but then you have to start somewhere.
First, we need to get one fact straight: consciousness possesses informatic boundaries. This is a fact Dennett ultimately accepts, no matter how his metaphors dance around it. Both of his theoretical figures, ‘multiple drafts’ and ‘fame in the brain,’ imply boundaries, a transition of processes from unconsciousness to consciousness. Some among a myriad of anonymous processes find neural celebrity, or as he puts it in “Escape from the Cartesian Theater,” “make the cut into the elite circle of conscious events.” Many subpersonal drafts become one. What Dennett wants to resist is the notion that this transition is localized, that it’s brought together for the benefit of some ‘neural observer’ in the brain–what he calls the ‘Cartesian Theatre.’ One of the reasons so many readers have trouble making sense of his view has to do, I think, with the way he fails to recognize the granularity of this critical metaphor, and so over-interprets its significance. In Consciousness Explained, for instance, he continually asserts there is no ‘finishing line in the brain,’ no point where consciousness comes together–no ‘turnstile,’ as he puts it. Consciousness is not, he explicitly insists in his notorious piece (with Marcel Kinsbourne) “Time and the Observer” in Behavioral and Brain Sciences, a subsystem. And yet, at the same time you’ll find him deferring to Baars’ Global Workspace theory of consciousness, even though it was inspired by Jerry Fodor’s notion of some ‘horizontal’ integrative mechanism in the brain, an account that Dennett has roundly criticized as ‘Cartesian’ elsewhere.
The evidence that consciousness is localized (even if widely distributed) within the brain is piling up, which is a happy fact, since according to BBT consciousness can only be explained in subsystematic terms. Consciousness possesses dynamic informatic boundaries, both globally and internally, all of which are characterized, from the standpoint of consciousness, by various kinds of neglect.
In cognitive psychology and neurology, ‘neglect’ refers to an inability to detect or attend to some kind of deficit. Hemi-neglect, which is regularly mentioned in consciousness discussions, refers to the lateralized losses of awareness commonly suffered by stroke victims, who will sometimes go so far as to deny ownership of their own limbs. Cognitive psychology also uses the term to refer to our blindness to various kinds of information in various problem-solving contexts. So ‘scope neglect,’ for instance, involves our curious inability to ‘value’ problems according to their size. My view is that the neglect revealed in various cognitive biases and neuropathologies actually structures ‘apparent consciousness’ as a whole. I think this particular theoretical cornerstone counts as one of Dennett’s ‘lost insights.’ Although he periodically raises the issue of neglect and anosognosia, his disavowal of ‘finishing lines’ makes it impossible for him to systematically pursue their relation to consciousness. He overgeneralizes his allergy to metaphors of boundary and place.
So, to give a quick example, where BBT views Frank Jackson’s Mary argument as a kind of ‘neglect detector,’ a thought experiment that reveals the scope of applicability of the ‘epistemic heuristic’ (EH), Dennett thinks it constitutes a genuine first-order challenge, a circle that must be squared. BBT is more interested in diagnosing than disputing the intuition that physical knowledge could be complete in the absence of any experience of red. Why does an obvious informatic addition to our environmental relationship (the experience of red) not strike us as an obvious epistemic addition? Well, because our ‘epistemic heuristic,’ even in its philosophically ‘refined’ forms, is still a heuristic, and as such, not universally applicable. Qualia simply lie outside the EH scope of applicability on my view.
I take Dennett’s infamous ‘verificationism’ as an example of a ‘near miss’ on his part. What he wants to show is that the cognitive relationship to qualia is informatically fixed–or ‘brainbound’–in a way that the cognitive relationship to environments is not: with redness, you have no informatic recourse the way you do with an apple–what you see is what you get, period. On my view, this is exactly what we should expect, given the evolutionary premium on environmental cognition: qualia are best understood as ‘phenomemes,’ subexistential combinatorial elements that enable environmental cognition, much as phonemes are subsemantic combinatorial elements that enable linguistic meaning (I’ll get to the strange metaphysical implications of this shortly). Granting that qualia are ‘cognition constitutive,’ we should expect severe informatic access constraints when attempting to cognize them. On the BBT account, asking what qualia ‘are’ is simply an informatic confusion on par with asking what the letter ‘p’ means. The primary difference is that we have a much better grasp of the limits of linguistic heuristics (LH) than we do of EH. EH, thanks to neglect, strikes us as universal, as possessing an unlimited scope of applicability. Thus the value of Mary-type thought experiments.
Lacking the theoretical resources of BBT, Dennett can only form a granular notion of this problem. In one of his most famous essays, “Quining Qualia,” he takes the ‘informatic access’ problem and argues that ‘qualia’ are conceptually incoherent because we lack the informatic resources to distinguish changes in them (it could be our memory that has been transformed), and empirically irrelevant because those changes would seem to make no difference one way or another. Where he uses the ‘informatic access problem’ as an argumentative tool to make the concept of qualia ‘look bad,’ I take the informatic access problem to be an investigative clue. What Dennett shows via his ‘intuition pumps,’ I think, are simply the limits of applicability of EH.
But this difference does broach the most substantial area of overlap between my position and Dennett’s. In a sense, what I’m calling EH could be characterized as an ‘epistemological stance,’ akin to the variety of stances proposed by Dennett.
BBT takes two interrelated angles on ‘brain blindness’ or neglect. The one has to do with how the appearance of consciousness–what we think we are enjoying this very moment–is conditioned by informatic constraints or ‘blindnesses.’ The other has to do with the plural, heuristic nature of human cognition, how our various problem-solving capacities are matched to various problems (the way cognition is ‘ecological’), and how they leverage efficiencies via strategic forms of informatic neglect. What I’m calling EH, for instance, seems to be both informatically sufficient and universally applicable, thanks to neglect–the same neglect that rendered it invisible altogether to our ancestors. In fact, however, it elides enormous amounts of relevant information, including the brain functions that make it possible. So, remaining faithful to the intuitions provided by EH, we conceive knowledge in terms of relations between knowers and things known, and philosophy sets to work trying to find ways to fit ever greater accumulations of scientific information into this ‘intuitive picture’–to no avail. How do mere causal relations conspire to create epistemological relations, which is to say, normative aboutness relations? On my view, these relations are signature examples of informatic neglect: ‘aboutness’ is a shortcut, a way to relate devices in the absence of any causal information. ‘Normativity’ is also a shortcut, a way to model mechanism in the absence of any mechanistic information. (Likewise, ‘object’ is a shortcut, and even ‘existence’ is a shortcut–coarse-grained tools that get certain work done). Is it simply a coincidence that syntax can be construed as mechanism bled of everything save the barest information? Even worse, BBT suggests it could be the case that both aboutness and normativity are little more than reflective artifacts, merely deliberative cartoons of what we think we are doing given our meagre second-order informatic access to our brain’s activity.
In one of his most lucid positional essays, “Real Patterns,” Dennett argues for the ‘realism’ of his stance approach vis-a-vis thinkers like Churchland, Davidson, and Rorty. In particular, he wants to explain how his ‘intentional stance’ and the corresponding denial of ‘original intentionality’ do not reduce intentionality to the status of a ‘useful fiction.’ Referencing Churchland’s observations regarding the astronomical amount of compression involved in the linguistic coding of neural states (in “Eliminative Materialism and the Propositional Attitudes”), he makes the point that I’ve made here very many times: the informatic asymmetry between what the brain is doing and what we think we’re doing is nothing short of abyssal. When we attribute desires and beliefs and goals and so on to another brain, our cognitive heuristics are, Dennett wants to insist, trading in very real patterns, only compressed to a drastic degree. It’s the reality of those patterns that renders the ‘intentional stance’ so useful. It’s the degree of compression that renders them incompatible with the patterns belonging to the ‘physical stance’–and thus, scientifically intractable.
The only real problem BBT has with this analysis is its granularity, a lack of resolution that leads Dennett to draw several erroneous conclusions. The problem, in a nutshell, is that far more than ‘compression’ is going on, as Dennett subsequently admits when discussing his differences with Davidson (the fact that two interpretative schemes can capture the same real pattern, and yet be incompatible with each other). Intentional idioms are heuristics in the full sense of the term: their effectiveness turns on informatic neglect as much as the algorithmic compression of informatic redundancies. To this extent, the famous ‘pixelated elephant’ Dennett provides to illustrate his argument is actually quite deceiving. The idea is to show the way two different schemes of dots can capture the same pattern–an elephant. What makes this example so deceptive is the simplistic account of informatic access it presupposes. It lends itself to the impression that ‘informatic depletion’ alone characterizes the relation between intentional idioms and the ‘real patterns’ they supposedly track. It entirely ignores the structural specifics of the informatic access at issue (the variety of bottlenecks posited by BBT), the fact that our Intentional Heuristic (IH), very much like EH, elides whole classes of information, such as the bottom-up causal provenance belonging to the patterns tracked. IH, in other words, suffers from informatic distortion and truncation as much as depletion.
His illustration would have been far more accurate if one of the pixelated figures showed only the elephant’s trunk. When our attentional systems turn to our ‘intentional intuitions’ (when we reflect on intentionality), deliberative cognition only has access to the stored trace of globally broadcast (or integrated) information. Information regarding the neurofunctional context of that information is nowhere to be found. So in a profound sense, IH can only access/track acausal fragments of Dennett’s ‘real patterns.’ Because these fragments are systematically linked to what it is our brains are actually doing, IH will seem to be every bit as effective as our brains at predicting, manipulating, and understanding the behavioural outputs of other brains. Because of neglect (the absence of information flagging the insufficiency of available information), IH will seem complete, unbounded, which is likely why our ancestors used it to theorize the whole of creation. IH constitutively confuses the trunk for the whole elephant.
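To make the contrast concrete, here is a minimal sketch, with a toy bitmap standing in for the elephant; the pooling factor and the crop window are invented for illustration. Depletion returns the whole pattern at lower resolution; truncation returns a sharp fragment that carries no internal mark of everything it omits:

```python
import numpy as np

# A toy 'real pattern': a 16x16 bitmap standing in for the elephant.
rng = np.random.default_rng(0)
elephant = (rng.random((16, 16)) > 0.5).astype(int)

def deplete(pattern, factor=4):
    """Compression/depletion: the whole pattern survives, at lower resolution."""
    h, w = pattern.shape
    return pattern.reshape(h // factor, factor, w // factor, factor).max(axis=(1, 3))

def truncate(pattern, size=4):
    """Neglect/truncation: whole regions are simply absent, and nothing
    in the output flags that anything is missing."""
    return pattern[:size, :size]

low_res = deplete(elephant)   # coarse, but still covers the whole elephant
trunk = truncate(elephant)    # sharp, but only the 'trunk', and it looks complete
print(low_res.shape, trunk.shape)  # (4, 4) (4, 4): same bandwidth, different losses
```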
In other words, Dennett fails to grasp several crucial specifics of his own account. This oversight (and to be clear, there are always oversights, always important details overlooked, even in my own theoretical comic strips) marks a clear parting of the ways between his position and my own. It’s the way developmental and structural constraints consistently distort and truncate the information available to IH that explains the consistent pattern of conceptual incompatibilities between the causal and intentional domains. And as I discuss below, it’s a primary reason why I, unlike Dennett, remain unwilling to take theoretical refuge in pragmatism. No matter what the ‘reality’ of intentionality, BBT shows that the informatic asymmetry between it and the ‘real patterns’ it tracks is severe enough to warrant suspending commitment to any theoretical extrapolation, even one as pseudo-deflationary as pragmatism, based upon it.
This oversight is also a big reason why I so often get that narcissistic ‘near miss’ feeling whenever I read Dennett–why he seems trapped using metaphors that can only capture the surface features of BBT. Consider the ‘skyhook’ and ‘crane’ concepts that he introduces in Darwin’s Dangerous Idea to explain the difference between free-floating, top-down religious and naturally grounded, bottom-up evolutionary approaches to explanation. On my reading, he might as well have used ‘trunk’ and ‘elephant’!
Moreover, because he overlooks the role played by neglect, he has no real way of explaining our conscious experience of cognition, the rather peculiar fact that we are utterly blind to the way our brains swap between heuristic cognitive modes. Instead, Dennett relies on the pragmatics of ‘perspective talk’–the commonsense way in which we say things like ‘in my view,’ ‘from his perspective,’ ‘from the standpoint of,’ and so on–to anchor our intuitions regarding the various ‘stances’ he discusses. Thus all the vague and (perhaps borderline) question-begging talk of ‘stances.’
BBT replaces this idiom with that of heuristics, thus avoiding the pitfalls of intentionality while availing itself of what we are learning about the practical advantages of specialized (which is to say, problem specific) cognitive systems, how ignoring information not only generates metabolic efficiencies, but computational ones as well. The reason for our ‘peculiar blindness’–the reason Dennett has had to go to such great lengths to make ‘Cartesian intuitions’ visible–is actually internal to the very notion of heuristics, which, in a curious sense, use blindness to leverage what they can see. From the BBT standpoint, Dennett consistently fails to recognize the role informatic neglect plays in all these phenomena. He understands the fractured, heuristic nature of cognition. He is acutely aware of the informatic limitations pertaining to thought on a variety of issues. But the pervasive, positive, structural role these limitations play in the appearance of consciousness largely eludes him. As a result, he can only argue that our traditional intuitions of consciousness are faulty. Because he has no principled means of explaining away ‘error consciousness,’ all he can do is plague it with problems and offer his own, alternative account. As a result, he finds himself arguing against intuitions he can only blame and never quite explain. BBT changes all of that. Given its resources, it can pinpoint the epistemic or intentional heuristics, enumerate all the information missing, then simply ask, ‘How should we determine the appropriate scope of applicability?’
The answer, simply enough, is ‘Where EH works!’ Or alternately, ‘Where IH works!’ BBT allows us, in other words, to view our philosophical perplexities as investigative clues, as signs of where we have run afoul of informatic availability and/or cognitive applicability–where our ‘algorithms’ begin balking at the patterns provided. On my view, the myriad forms of neglect that characterize human cognition (and consciousness) can be glimpsed in the shadows they have cast across the whole history of philosophy.
But care must be taken to distinguish the pragmatism suggested by ‘where x works’ above from the philosophical pragmatism Dennett advocates. As I mentioned above, he accepts that intentional idiom is coarse-grained, but given its effectiveness, and given the mandatory nature of the manifest image, he thinks it’s in our ‘interests’ to simply redefine our folk-psychological understanding using science to lard in the missing information. So with regard to the will, he recommends (in Freedom Evolves) that we trade our incoherent traditional understanding in for a revised, scientifically informed understanding of free will as ‘behavioural versatility.’ Since, for Dennett, this is all ‘free will’ has ever been, redefinition along these lines is eminently reasonable. I remember once quipping in a graduate seminar that what Dennett was saying amounted to telling you, at your Grandma Mildred’s funeral, “Don’t worry. Just rename your dog ‘Mildred.’” After the laughter faded, one of the other students, I forget who, was quick to reply, “That only sounds bad if your dog wasn’t your Grandma Mildred all along.”
I’ve since come to think this exchange does a good job of illustrating the stakes of this particular turn of the debate.
You can raise the most obvious complaint against Dennett: that the inferential dimension of his redefinition makes usage of the concept ‘freedom’ tendentious. We would be doing nothing more than gaming all the ambiguities we can to interpret scientific ‘crane information’ into our preexisting folk-psychological conceptual scaffold–wilfully apologizing, assuming these scientific ‘cranes’ can be jammed into a ‘skyhook’ inferential infrastructure. Dennett himself admits that, given the information available to experience, ‘behavioural versatility’ is not what free will seems to be. Or put differently, that the feeling of willing is an illusion.
The ‘feeling of willing,’ according to BBT, turns on a structural artifact of informatic neglect. We are skyhooks–from the informatic perspective of ourselves. The manifest image is magical. Intentionality is magical. On my view, the ‘scientific explanations’ are far more likely to resemble ‘explanations away’ than ‘explanations of.’ The question really is one of how other folk-psychological staples will fare as cognitive neuroscience proceeds. Will they be more radically incompatible or less? Imagine experience and the skein of intuitive judgments that seem to bind it as a kind of lateral plane passing through an orthogonal, or ‘medial,’ neurofunctional space. Before science and philosophy, that lateral plane was continuous and flat, or maximally intuitive. It was just the way things were. With the accumulation of information through the raising of philosophical questions (which provide information regarding the insufficiency of the information available to conscious experience) over the course of history, the intuitive topography of the plane became progressively more and more dimpled and knotted. With the institutionalization of science, the first real rips appear. And now, as more information regarding various neurofunctions becomes available, the skewing and shredding are becoming more and more severe. The question is, what will the final ‘plane of experiential intuition’ look like? How will our native intuitions fare?
How deceptive is consciousness?
Dennett’s answer: Enough to warrant considerable skepticism, but not enough to warrant abandoning existing folk-psychological concepts. The glass, in other words, is half full. My answer: Enough to warrant wondering if anyone has ever had a clue. The glass lies in pieces across the floor. The trend, at least, is foreboding. According to BBT, the informatic neglect that renders the ‘feeling of willing’ possible is a structural feature belonging to all intentional concepts. Given this, it predicts that very many folk-psychological concepts will suffer the fate the ‘feeling of willing’ seems to be undergoing as I write. From the standpoint of knowledge, experience is about to be cast into the neurofunctional wind.
Grandma Mildred isn’t your dog. She’s a ghost.
Either way, this is why I think pragmatic or inferentialist accounts are every bit as hopeless as traditional approaches. You can say, ‘There’s nothing but patterns, so let’s run with them!’ and I’ll say, ‘Where? To the playground? Back to Hegel?’ When knowledge and experience break in two, the philosopher, to be a philosopher, must break with it. The world never wants for apologists.
BBT allows us to frame the problem with a clarity that evaded Dennett. If our difficulties turn on the limited applicability of our heuristics, the question really should be one of finding the heuristic that possesses the most applicability. In my view, that heuristic is the one that allows us to comprehend heuristics in the first place: nonsemantic information. The problem with pragmatism as a heuristic lies in the way it utilizes informatic neglect actively, as opposed to merely structurally (which it also does). Anything can be taken as anything, if you game the ambiguities right. You could say it makes a virtue out of stupidity.
In place of philosophical pragmatism, my view recommends a kind of philosophical akratism, a recognition of the heuristic structure of human cognition, an understanding of the structural role of informatic neglect, and a realization that conscious experience and cognition are drastically, perhaps catastrophically, distorted as a result.
Deliberative human cognition has only the information globally broadcast (or integrated) at its disposal. Likewise, the information globally broadcast only has human cognition at its disposal. The first means that human cognition has no access whatsoever to vast amounts of constitutive processing–which is to say, no access to neurofunctional contexts. The second means that we likely cognize conscious experience as experience via heuristics matched to our natural and social environments, as something quite other than whatever it is.
Small wonder consciousness has proven to be such a knot!
And this, for me, is where the fireworks lie: critics of Dennett often complain about the difficulty of getting a coherent sense of what his theory of consciousness is, as opposed to what it is not. For better or worse, BBT paints a very distinct–if almost preposterously radical–picture of consciousness.
So what does that picture look like?
It purports, for instance, to explain how the apparent reflexivity of consciousness can arise from the irreflexivity of natural processes. For me, this constitutes the most troubling, and at the same time, most breathtaking, theoretical dividend of BBT: the parsimonious way it explains away conscious reflexivity. Dennett (working with Marcel Kinsbourne) sails across the insight’s wake in “Time and the Observer” where he argues, among other things, for the thoroughgoing dissociation of the experience of time from the time of experience, how the time constraints imposed by the actual physical distribution of consciousness in the brain mean that we should expect our conscious experience of time to ‘break down’ in psychophysical experimental contexts at or below certain thresholds of temporal resolution.
The centerpiece of his argument is the deeply puzzling experimental variant of the well-known ‘phi phenomenon,’ how two closely separated spots projected in rapid sequence on a screen will seem to be a single spot moving from location to location. When experimenters use a different colour for each of the spots, not only do subjects report seeing the spot move, they claim to see it change colour, and here’s the thing, midway. What makes this so strange is the fact that they perceive the colour change before the second spot appears–before ‘seeing’ what the second colour is. Ruling out precognition, Dennett proposes two mechanisms to account for the illusion: either the subjects consciously see the spots as they are, only to have the memory almost instantaneously revised for consistency, what he calls the ‘Orwellian’ explanation, or the subjects consciously see the product of some preconscious imposition of consistency, what he calls the ‘Stalinesque’ explanation. Given his quixotic allergy to neural boundaries, he argues that our inability to answer this question means there is no definite where and when of consciousness in the brain, at least at these levels of resolution.
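Since everything turns on the timing, it helps to pin the schedule down. A minimal sketch, with all durations assumed for illustration (classic versions of the experiment use flashes on the order of 150 ms separated by a gap of roughly 50 ms):

```python
# Toy schedule for the colour-phi stimulus (all durations assumed).
S1 = {"colour": "red",   "pos": 0.0, "on_ms": 0,   "off_ms": 150}
S2 = {"colour": "green", "pos": 1.0, "on_ms": 200, "off_ms": 350}

# What subjects report: ONE spot moving from pos 0.0 to 1.0, switching
# from red to green roughly midway along the path.
reported = {"pos": 0.5, "apparent_ms": (S1["off_ms"] + S2["on_ms"]) / 2}

# The puzzle: the reported change 'happens' before the second colour
# has even been displayed.
assert reported["apparent_ms"] < S2["on_ms"]
print(f"change reported at ~{reported['apparent_ms']} ms; "
      f"S2 appears at {S2['on_ms']} ms")
```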
Dennett’s insight here is absolutely pivotal: the brain ‘constructs,’ as opposed to perceives or measures, the passage of time, given the resources it has available. The time of temporal representation is not the time represented. But he misconstrues the insight, seeing in it a means to cement his critique of the Cartesian Theatre. The question of whether this process is Orwellian or Stalinesque, whether neural history is rewritten or staged, simply underscores the informatic constraints on our experience of time, our utter blindness to the neurofunctional context of the experience–which is to say, our utter blindness to the time of conscious experience. Dennett, in other words, is himself making a boundary argument, only this time from the inside out: the inability to arbitrate between the Orwellian and Stalinesque scenarios clearly demarcates the information horizon of temporal experience.
And this is where the theoretical resources of BBT come into play. Wherever it encounters apparent informatic constraints, it asks how they find themselves expressed in experience. Saying that temporal experience possesses informatic boundaries is platitudinal. All modalities of experience are finite: we can only see, hear, taste, think, and time so much in a given moment. Saying that the informatic boundaries of experience are themselves expressed in experience is somewhat trickier, but you need only attend to your own visual margins to see a dramatic example of such an expression.
You could say vision is an exceptional example, given the volume of information it provides in comparison to other experiential modalities. Nevertheless, one could argue that such boundaries must find some kind of experiential expression, even if, as in the cases of clinical neglect, it evades deliberative cognition. BBT proposes that neglect is complete in many, if not most cases, and that information regarding informatic boundaries is only indirectly available, typically via contexts (such as psychological experimentation) that foreground discrepancies between brute environmental availability and actual access. The phi phenomenon provides a vivid demonstration of this–as do, for that matter, psychophysical phenomena such as flicker fusion. For some mysterious reason (perhaps the mysterious reason), what cannot be discriminated, such as the flashing of lights below a certain temporal threshold, is consciously experienced as unitary. It seems a fact of experience almost too trivial to note, but perhaps immensely important: Why, in the absence of information, is identity the default?
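A toy discriminator makes the structure of the point plain. The resolution figure below is a stand-in rather than a measurement (human critical flicker fusion sits somewhere in the neighbourhood of 50 to 60 Hz under good conditions); what matters is that nothing in the output flags a failed discrimination:

```python
def percept(gap_ms, resolution_ms=20.0):
    """Toy flicker-fusion model: flashes separated by less than the system's
    temporal resolution are not discriminated, and nothing in the output
    marks that a discrimination failed. Absent the information, identity
    is the default."""
    return "two flashes" if gap_ms >= resolution_ms else "one continuous light"

print(percept(40.0))   # two flashes
print(percept(10.0))   # one continuous light
```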
If you think about it, a good number of the problems of consciousness can be formulated in terms of identity and information. BBT takes precisely this explanatory angle, interpreting things like the unity of consciousness, personal identity, and nowness or subjective time as products of various species of neglect–literally as kinds of ‘fusions.’
The issue of time as it is consciously experienced contains a cognitive impasse at least as old as Aristotle: the problem of the now. The problem, as Aristotle conceived it, lay in what might be called the persistence of identity in difference that seems to characterize the now, how the now somehow remains the same across the succession of now moments. As we have seen, whenever BBT encounters an apparent cognitive impasse, it asks what role informatic constraints play. The constraints, as identified by Dennett and Kinsbourne in their analyses in “Time and the Observer,” turn on the dissociation of the time of representation from the time represented. In a very profound sense, our conscious experience of time is utterly blind to the time of conscious experience, which is to say, to information pertaining to the timing of conscious timing.
So what does this, the conscious neglect of the time of conscious timing, mean? The same thing all instances of informatic neglect mean: fusion. The fusing of flickering lights when their frequency exceeds a certain informatic threshold seems innocuous likely because the phenomenon is so isolated within experience. The kind of temporal fusion at issue here, however, is coextensive with experience: as many commentators have noted, the so-called ‘window of presence’ is just experience in a profound sense. The now always seems to be the same now because the information regarding the time of conscious timing, the information required to globally distinguish moment from moment, is simply not available. In a very profound sense, ‘flicker fusion’ is a local, experientially isolated version of what we are.
Thus BBT offers a resolution of the now paradox and an explanation of personal identity in a single conceptual stroke, as it were. It provides, in other words, a way of explaining how natural and irreflexive processes give rise to the apparent reflexivity that so distinguishes consciousness. And by doing so it drastically reduces the explanatory burden of consciousness, leaving only ‘default identity’ or ‘fusion’ as the mystery to be explained. Given this, it provides a principled means of ‘explaining away’ consciousness as we seem to experience it. Using informatic neglect as our conceptual spade, one need only excavate the kinds of information the conscious brain cannot access from our scientific understanding of the brain to unearth something that resembles–to a remarkable degree–the first-person perspective. Consciousness, as we (think we) experience it, is fundamentally structured by various patterns of informatic neglect.
And it does so using an austere set of concepts and relatively uncontroversial assumptions. Conscious episodes are informatically encapsulated. Deliberative cognition is plural and heuristic (though neglect means it appears otherwise). Combining the informatic neglect pertaining to the first–which Dennett has mistakenly eschewed–with the problems of ‘matching’ pertaining to the second produces what I think could very well be the single most parsimonious and comprehensive theory of ‘consciousness’ in the field.
But I anticipate it will be a hard sell, with the philosophy of mind crowd most of all. Among the many invisible heuristics that enable and plague us are those primed to dismiss outgroup deviations from ingroup norms–and I am, sadly, merely a tourist in these conceptual climes. Then there’s the brute fact of Hebb’s Law: the intuitions underwriting BBT demand more than a little neural plasticity, especially given the degree to which they defect from any number of implicit and canonically explicit assumptions. I’m asking huge populations of old neurons to fire in unprecedented ways–never a good thing, especially when you happen to be an outgroup amateur!
And then there’s the problem of informatic neglect itself, especially with reference to what I earlier called the epistemic heuristic. I often find myself flabbergasted by how far out of step I’ve fallen with consensus opinion since the key insight behind BBT nixed my dissertation over a decade ago. Even the notion of content has come to seem alien to me! A preposterous artifact of philosophers blindly applying EH beyond its scope of application. On the BBT account, the most effective way to understand meaning is as an artifact of structured informatic neglect. In a real sense, it holds there is no such thing as meaning, so the wide-ranging debates on content and representation that form the assumptive baseline for so much of the philosophy of mind are little more than chimerical from its standpoint. Put simply, ‘truth’ and ‘reference’ (even ‘existence’!) are best understood as kinds of heuristics, cognitive adaptations that maximize effectiveness via forms of informatic neglect, and so possess limited scopes of applicability.
Even the classical metaphysical questions regarding materialism are best considered heuristic chimeras on my view. Information, nonsemantically construed, allows the theorist to do an end run around all these dilemmas, as well as all the dichotomies and dualisms that fall out of them.
We are informatic subsystems attempting to extend our explanatory ‘algorithms’ as far into subordinate, parallel, and superordinate systems as we can, either by accumulating more information or by varying our algorithmic (cognitive) relation to the information already possessed. Whatever problem our system takes on, resolution depends upon this relation between information accumulation and algorithmic versatility. So as we saw with ‘qualia,’ our system is stranded: we cannot penetrate and interact with red the way we can with apples, and so the prospects of information accumulation are dim. Likewise, our algorithms are heuristic, possessing a neglect structure appropriate to environmental problem-solving (given various developmental and structural constraints), which is to say, a scope of applicability that simply does not (as one might expect) include qualia.
The ‘problem of consciousness,’ on the BBT account, is simply an artifact of literally being what science takes us to be: an informatic subsystem. What has been bewildering us all along is our blindness to our blindness, our inability to explicitly consider the prevalent and decisive role that informatic neglect plays in our understanding of human cognition. The problem of consciousness, in other words, is nothing less than a decisive demonstration of the heuristic nature of semantic/epistemic cognition–a fact that really, in the end, should come as no surprise. Why, when human and animal cognition is so obviously heuristic in so many ways, would we assume that a patron as stingy as evolution would flatter us with a universal problem-solving device, if not for simple blindness to the limitations of our brains?
The scientific problem of consciousness remains, of course. Default identity remains to be explained. But given BBT, the philosophical conundrums have for the most part been explained away…
As have we.
Heavy stuff to digest after all that Thanksgiving turkey…
How would you define your Dûnyain characters with respect to the BBT?
“Deliberative human cognition has only the information globally broadcast (or integrated) at its disposal. Likewise, the information globally broadcast only has human cognition at its disposal. The first means that human cognition has no access whatsoever to vast amounts of constitutive processing–which is to say, no access to neurofunctional contexts.”
Are they simply acutely aware of how blind their brains are? Or do you write them as being something more? As beings that have developed access to their “constitutive processing” and own “neurofunctional contexts”?
And will you post cover art for The Unholy Consult when it is complete for the Canadian (Penguin) version?
Very shrewd interpretation, Benjamin. The Dunyain simply have brains that, thanks to multigenerational breeding and extensive, lifelong training, are not nearly so auto-informatically blind as our own. So they actually have an intuitive appreciation for the fragmentary nature of human cognition, for instance – the subpersonal ‘legion’ that gets smeared, in our case, into ‘you’ or ‘me.’
Does that tie into the number of gods Earwa has?
“our system is stranded”
We are not stranded because we are not restricted to introspection. We can examine brain processes even if we can’t access them through introspection. The brain may be blind to its own processes but scientific investigation is not. As our technologies improve we will have greater and greater access to the goings-on within the “black box” of the brain. From this ever-improving access to brain activity we will be able to construct ever-improving models of brain functioning. So, the way I see it, “information accumulation” is inevitable and unlimited.
Or maybe you are saying that philosophy is stranded because it relies on introspection. This I would agree with.
Yep. The picture BBT paints really is quite clear: the ‘manifest image’ (what the brain looks like from its own perspective) is the informatic bottleneck that philosophical cognition has been labouring under all this time.
I just ran across this on Facebook. It seems appropriate.
Philosophy is like being in a dark room and looking for a black cat.
Metaphysics is like being in a dark room and looking for a black cat that isn’t there.
Theology is like being in a dark room and looking for a black cat that isn’t there and saying, “I found it!”
Science is like being in a dark room and looking for a black cat with a flashlight.
Great stuff – though I might amend the ‘flashlight’ to ‘flame-thrower’! We could very well be a nocturnal species…
The real question is what all you pervs want with a cat in a dark room?
Scott, would you please explain what Default Identity is (so I can better grok why it has to be explained)
Google ain’t helping me with all its Outlook tutorials and stuff like that…
P.S.:
As to Dennett’s stance on folk psychology, it strikes me as, well, bullshit (I’m probably building up a disagreeable reputation here, this being the second time I say mean things about reputable philosophers 🙂 ). The utility of “folk psychology” is not obvious (and I don’t care about existing “consensus”; none of the arguments in favor of FP that I’ve seen are even remotely convincing as far as, you know, actual science goes), and Dennett’s whole “folk psychology rescue” project strikes me as both intellectually dishonest and pointless.
As a side-rant, Dennett’s effort in regards to FP in general and so-called “free will” specifically (and, for that matter, any compatibilist free-willy claim) isn’t like renaming your beloved dog to the name of your deceased grandma (which is darkly comical, but isn’t completely unreasonable unless you make some uncanny assumptions about expected outcome).
What Dennett’s (and other compatibilists’) business is, if we choose to keep running with the “dead grandma metaphor”, to me, is more akin to surgically implanting grandma’s rotten, disintegrating corpse with robotic actuators, placing webcams in the cadaver’s empty eye sockets, further complementing the resultant abomination with a speaker in the skull’s cheekless, oozing mouth, and connecting all the “robotic” stuff to a fairly sophisticated computer (not even wirelessly, mind you – thick wires are snaking out of the cadaver. Let’s not specify where exactly they snake from, shall we?). After all of that is done, the poor machine will be programmed to run an unconvincing and revolting pantomime of “beloved grandma”, shuffling around the room, stumbling, barking disjointed “grandma quotes” at anything unlucky enough to witness the horror.
Occasionally, a joint or two disintegrates completely, or some screws work their way out of decaying bone, and a jaw, limb, or finger hits the floor with a helpless wet thud.
When that happens, compatibilists swarm in, drills, electric screwdrivers and duct tape rolls blazing, and “fix grandma up”, allowing the abomination to continue its pointless journey from one corner of the room to another and then back, amid buzzing black clouds of scavenging insects.
That’s what compatibilism is all about.
Default identity is simply the question of why two flickering lights should fuse into ONE continuous light in the first place. I think this is the best way to put it. It’s basically the question of why there should be experience at all, only stripped down to its kernel. Given default identity, BBT can go to work explaining away everything else that seems miraculous about experience.
I’ve spent quite some time hunting down any reference to the mystery of ‘fusion’ with no luck. It really seems to be one of those things that are so intuitive as to be all but invisible. No one asks why: they take it for granted. And as a result, no one has picked up on the ways so many ‘problems of consciousness’ can be interpreted as instances of ‘fusion’ – which is to say, understood in terms of brain blindness or informatic neglect.
I love your ‘Zombie Grandma’ analogy, 03!
As someone who spent quite some time toying (and not so toying) with “evolutionary” algorithms, I gotta-hafta ask – is “why?” even something one could try to ask about an evolutionary system?
To me, it seems that for systems that self-improve through stochastic mutations and natural selection, there is neither “why” nor “because”.
They simply…
…are.
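Put it in code, even. The barest possible mutation-and-selection loop (everything in it, the fitness function included, is an arbitrary stand-in): notice that no line of it represents a purpose, a “why”, or even “survival” as a goal. There is only differential persistence.

```python
import random

def score(x):
    # An arbitrary 'environment'. Nothing here 'wants' x to be near 3.
    return -(x - 3.0) ** 2

population = [random.uniform(-10.0, 10.0) for _ in range(50)]
for _ in range(100):
    # Variation: blind, stochastic mutation.
    mutants = [x + random.gauss(0.0, 0.5) for x in population]
    # 'Selection': higher-scoring variants simply persist.
    population = sorted(population + mutants, key=score)[-50:]

# The population drifts toward 3.0 without any representation of that fact.
print(round(sum(population) / len(population), 2))
```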
‘Why’ is a ‘skyhook’ question. Something that only makes sense to something riding the neural horse backward. Since we happen to be riding thus, it makes total sense to us, which is why it’s driving so many theorists crazy in their fruitless attempts to ‘naturalize intentionality.’
Though there’s intentionality surviving in some basic form: the tendency for living beings to try to survive.
At a basic level that’s “intention” and “purpose”, even if they lead to a bigger “why”.
Still, what’s the point in nature developing such patterns if they have no real point? What’s the point of “surviving” in a system where there’s no difference between those states?
So, why did an evolutionary system crop up in the first place?
I’m not sure where the intentionality comes in, though. Are viruses intentional?
“So, why did an evolutionary system crop up in the first place?”
‘How’ is the question that has wheels here (if you are using the purposive, as opposed to the causal, ‘why’). There’s no purpose in evolution.
Duh, it will take me days to go through all this.
Besides, I now noticed that the first Dennett link refers to the second, so it might be easier to read them in order.
Anyway, while reading that first link I felt a frustration, because the discussion seems to boil down just to the typical one about “emergence”. He goes on to explain how it’s hard to identify a “property K” that can properly define the passing of the threshold into the light of consciousness. And then he also makes the example of the British Empire seen as a “thing” with an identity, similar to how consciousness is a unified thing.
It’s just defining the same problems with emergence, reductionism and so on. And in the end it boils down to the other obvious consideration: that the interactions and links and relationships between these parts are what is important in the model, more than studying each “particle” on its own.
And that returns to the patterns of the “strange loops” and recursive processes of self-observation, meta-linguistic and self-describing, that we know are the “culprits”.
This is more or less the “map” I have to understand this.
And as I said in the other comment that probably no one read: https://rsbakker.wordpress.com/2012/09/27/thinker-as-tinker/#comment-12984
On a general level, there are some ideas and themes that seem to return in completely different contexts. This isn’t likely just a “coincidence”. As in those videos I linked on Stephen Hawking, the science studying the cosmos ends up touching very similar issues to the science studying consciousness and the human brain. You end up talking about informatic boundaries in both. Only in physics you deal with the threshold of the properties of light and time. Whereas in neuroscience you deal with the threshold between the dark and light of consciousness. Between what appears to exist and what is hidden and unaccounted. Between the observer and what is changed by observation. Two singularities.
So, too romantic, or too close to be considered just coincidence?
Besides all this: what I was able to understand in this Bakker vs Dennett piece is that the difference is in the amount of misrepresentation.
Dennett, I think, makes a similar objection I was making on this site in the past: that consciousness only accesses a fraction of the information, so is mostly unaware of what the greater brain is doing, but in the end its “cartoon” more or less matches the real deal. It’s kind of “analog”, in the sense that it loses detail, but captures the essence at a higher level, and so it becomes a reliable approximation.
Whereas Bakker says there isn’t just this problem of asymmetry in the information between consciousness and the greater brain. But, on top of that, consciousness can’t properly track itself because it can’t perceive any “lack”, since it’s structurally impaired by that sense of “sufficiency” that closes the gaps by nullifying them. So not only is it an extremely low-res picture, but it also misses crucial pieces.
Did I get it right up to that point..?
Yeah, it turned into a much larger post than I intended! The contrast you make at the end is right on the button. Where I depart from your reading is the way you characterize the ‘repeating binaries’ you find throughout so much of philosophical thought. For me, having explicitly wrestled with those binaries for so long, the single most extraordinary thing about BBT is how it provides a plausible way out. It interprets the inside/outside, self/other, subject/object, ideal/real set of oppositions as features belonging to the heuristic nature of human cognition, the way it maximizes (or extracts) efficiencies via the systematic neglect of certain classes of information. The prime culprit in this piece is named the Epistemic Heuristic: since it is automatically engaged and blind to its own scope of applicability, we also ‘trip’ into it, as it were, whenever we drag issues like these into the purview of deliberative cognition. It is our ‘natural’ way of making sense of things, and is primarily designed to cope with environmental problems. This is why consciousness is so difficult to make sense of – these dichotomies begin generating incompatibilities – and why the ‘nonsemantic information heuristic,’ which can be applied independently of EH, is such an effective replacement. Of course it reasserts itself whenever we try to make ‘sense’ of this information, but we can now qualify its determinations as being heuristic, or perspectival. The ultimate view we arrive at isn’t going to be ‘intuitively satisfying’ to anyone, no more than Quantum Mechanics! simply because of the way it turns on a mongrel collection of intuitions – what we pretty much should expect, given that we quite simply did not evolve to troubleshoot ourselves.
“nonsemantic information heuristic” What is this?
A placeholder as much as anything as it stands – the unexplained explainer. It seems to be the case that ‘systematic differences making systematic differences’ has a lot of mileage, as far as heuristics go. It’s like a Prius or something!
The Free Will argument always allows for great species armchair entertainment. Just as the amoeba does not freely swim away from the food source and towards the toxin, we don’t randomly drive home every night to any stranger’s house, eat dinner at their table, sit in front of their TV and sleep that night with their partner. We ARE ENVIRONMENT PROCESSING REDUCTIVE systems just like our amoeba ancestors and for efficiency we have survived and prospered through social aggregation and linguistic interaction.
This is the best blog on the internet.
Where else can you watch an outre fantasy author slowly go mad whilst delving into the soul-cracking mysteries of consciousness?
In one short story per modern horror anthology. But this is the real deal! 😉
So, the scientific problem of consciousness is fitting all the heuristics together in such a way as to create something that doesn’t qualify for the HAL Institute for Criminally Insane Robots? (Heuristic ALgorithmic Computer)
That would be friendly AI.
The consciousness problem is quite okay with the conscious subject being criminally insane.
That distinction is interesting. So, envision the four quadrants of the xy plane, those to the right of the y axis constituting conscious ensembles of heuristics and those to the left non-conscious. Those above the x axis are functioning correctly (or “optimally,” or whatever) and those below malfunctioning. So those in the upper right quadrant are sane; the lower right are insane; the upper left are functioning heuristics (problem solvers) and the lower left are malfunctioning heuristics. Ultimately, someone has to come up with a scheme for combining heuristics that moves mere problem solvers from left of the y axis to the right. Does that just mean that they have to pass the Turing test? I would say at a bare minimum that not only must they pass the test administered by other members of the consciousness club, they must pass the test administered by themselves, but what does this amount to? That when they ask themselves whether they are conscious that the answer “yes” returns to them? It does seem to be very hard to escape the Cartesian Theater.
It means they have a D&D alignment! Law/Chaos. Good/Evil. 😉
I’ve been thinking it through in terms of a ‘Meta-heuristic Maze’ myself, the fact that we are confined to heuristics in our attempts to demarcate the scope of the heuristics that confine us. I’m sure Chandra would be mortified!
“our utter blindness to the time of conscious experience”
Take it one step further. Dennett showed we can’t localize our phenomenology in space (with his famous surrogate robot body thought experiment), and you just argued we can’t even place it in time. Is my experience of the ‘now’ really actually happening ‘now’? Imagine how fucked up it will get if someone finds out the neural correlates of consciousness occur only hours after the eliciting inputs have passed! (My head really starts to spin if I consider longer time frames than that.)
I just got a prickle up my scalp. I’m going to make a note of that particular possibility – add it to the Semantica Mindfuck pile. Since you’re already on the Acknowledgements list there’s not much more I can offer… I could Tuckerize you, Jorge!
Thanks Terry, you beat me to the punch! “Nonintentional information heuristic” was driving me crazy. Last night I had a dream in which I signed a letter “Nonintentionally Yours.”
This “nonintentional information heuristic”/”systematic differences making systematic differences” etc., as the unexplained explainer, drives me nuts, which means I’m dying to hear it explained with more than a Prius metaphor, since (or in spite of the fact that) I already own one 🙂
I’m a long-time lurker on this blog, and I have quite a hard time with BBT *as solution* (philosophically, not chemically). Considered this way, it kinda reminds me of Laruelle as Ray Brassier uses him in Nihil Unbound. I can find some other similarities between Bakker and Brassier–just one, for example, in “fusion.” There may not be any thorough study of fusion in the cogsci literature, but Brassier’s version of Kantian “transcendental synthesis” seems somehow akin to fusion. Maybe “binding” in the general philosophical sense could be an analogue or artifact of fusion. I’m probably doing it right now :-p
But I find BBT *as description* (heuristically, of course) totally fascinating…which is why I keep coming back here.
Welcome to the world of the ‘lurked,’ kfreak. I would love to know where to find Ray’s version of transcendental synthesis (I don’t recall it in Nihil Unbound). The reason I personally don’t have a problem with ‘unexplained explainers’ (so long as they’re acknowledged) has to do with something I do take to be empirical fact, which is TI (theoretical incompetence). Anything at these levels of abstraction can be gamed this way or that, which is why philosophy confronts you with morasses of incompatible interpretations every way you turn. So I could, if I wished, recharacterize information as ‘systematic causality,’ for instance, and rationalize it using one of the thousands of extant interpretations that purport to ground it. For me, the overarching virtue is parsimony and comprehensiveness: if something relatively simple can be used to explain a lot, not to mention offer elegant solutions to long-held conundrums, then it deserves serious consideration. In a sense I’m saying, ‘give me this much, and I will give you a good chunk of the world.’
I have no idea whether BBT is a dead end or not, but one thing I do know is that it seems to have something to say about every philosophical topic it touches… It’s like my theoretical alter-ego or something!
Only not quite so good-looking…
From your essay: “The centerpiece of his [Dennett’s] argument is the deeply puzzling experimental variant of the well-known ‘phi phenomenon,’ how two closely separated spots projected in rapid sequence on a screen will seem to be a single spot moving from location to location. When experimenters use a different colour for each of the spots, not only do subjects report seeing the spot move, they claim to see it change colour, and here’s the thing, midway. What makes this so strange is the fact that they perceive the colour change before the second spot appears–before ‘seeing’ what the second colour is. Ruling out precognition, Dennett proposes two mechanisms to account for the illusion: either the subjects consciously see the spots as they are, only to have the memory almost instantaneously revised for consistency, what he calls the ‘Orwellian’ explanation, or the subjects consciously see the product of some preconscious imposition of consistency, what he calls the ‘Stalinesque’ explanation. Given his quixotic allergy to neural boundaries, he argues that our inability to answer this question means there is no definite where and when of consciousness in the brain, at least at these levels of resolution.”
Dennett’s fanciful Stalinesque and Orwellian “explanations” lead us away from an understanding of the brain mechanism that actually generates the phi phenomenon.
In the retinoid model of consciousness, selective visual attention consists in the projection of added neuronal excitation in retinoid space by selective excursions of the heuristic self-locus (HSL). Here’s how patterns of self-locus activation explain the phi phenomenon (a toy sketch of these steps follows the list below):
1. When the first dot flashes on (S1), HSL moves to the spatial locus of S1.
2. When S1 turns off and the second dot (S2) flashes on after a blank interval of ~30 ms up to ~200 ms, HSL moves over intervening autaptic neurons to the new spatial locus of S2.
3. Over the trajectory of S1 to S2, neuronal HSL excitation plus excitation from the decaying S1 combine to create a moving trace of heightened autaptic-cell activity.
4. We see phi motion between successively flashed dots because there really is a path of moving neuronal excitation induced by the heuristic self-locus in the spatial interval between S1 and S2.
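A toy rendering of these steps, to make the dynamics concrete. The grid, decay rate, and HSL gain below are invented for illustration only; they are not parameters of the retinoid model, just the shape of the mechanism described in steps 1–4:

```python
import numpy as np

# Toy rendering of steps 1-4: a 1-D strip of 'autaptic' units with S1 at
# unit 0 and S2 at unit 10. All constants are illustrative stand-ins.
units = np.zeros(11)
decay = 0.85       # per-step decay of stimulus-driven excitation
hsl_gain = 0.6     # excitation added by the heuristic self-locus (HSL)

units[0] = 1.0                  # step 1: S1 flashes on; HSL moves to its locus
for hsl_pos in range(1, 11):    # step 2: HSL sweeps toward S2's locus
    units *= decay              # S1's trace decays as time passes
    units[hsl_pos] += hsl_gain  # step 3: HSL adds excitation en route
units[10] += 1.0                # S2 flashes on

# Step 4: the intervening units carry above-baseline activity, a real path
# of moving neuronal excitation in the interval between the S1 and S2 loci.
print(np.round(units, 2))
```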
For more about the retinoid model see here:
I should add that subjects do not *perceive* the color change before the second dot appears. They *judge* the perceived change in color to have happened about midway in passage. This is simply their best estimate.
Hi Arnold! Thanks for weighing in. I was just going to ask you about this very question of the change in colour being experienced before the dot reaches its second position. In your response – the subject ‘judges’ the change to happen midway – aren’t you actually making Dennett’s point: What forms the BASIS of this judgement? Is the judgment DISTINCT from the experience (Orwellian), or integral to it (Stalinesque)? And most importantly, how, empirically, could you claim to KNOW one way or another?
I’m not sure I see how you manage to slip Dennett’s critique here.
I don’t see how I’m making Dennett’s point. The judgement *is* distinct from the conscious phi experience because the experience is the target of the judgement (Orwellian?). The notion that the judgement is integral with the phi experience per se (Stalinesque?) is not credible on the basis of what we know about the neurophysiology of the brain. As I understand it, Dennett’s point is that we cannot know which metaphorical “explanation” is correct. But this simply confuses the matter because only one metaphor is appropriate (the Orwellian), while the other metaphor (the Stalinesque) is misapplied because it clashes with the system of brain mechanisms that generate the phi phenomenon and the subsequent judgements about one’s immediate experience of the phi phenomenon. In the absence of knowledge about the relevant brain mechanisms, metaphors might serve a useful purpose but they can lead us astray when they conflict with how the cognitive brain works. In short, the power of argument from metaphor is enhanced by ignorance.
You wrote: “And most importantly, how, empirically, could you claim to KNOW one way or another?”
The only things we know with certainty are the logical implications of formal logical systems. But science is a pragmatic enterprise where what is taken as current knowledge is based on the weight of evidence. It is within this framework that I claim the Stalinesque story is off base. Is there a better way?
In other words, the subjects don’t know what they ‘really’ experienced. So they have a veridical experience that conforms with what actually happens at the neurophysiological level, which they then utterly forget in favour of a second, illusory experience, which they take as canonical, having no memory of the original? This has to be the case, IF your hypothetical explanation turns out to be right.
I agree that Dennett overplays his argument in certain respects, but I just want to be clear as to what it is you’re suggesting.
As an afterthought, how do you think Dennett would explain this?
The table on the right is the same length and width as the table on the left, yet it appears to have a distinctly different shape. The retinoid mechanism explains this but, of course, we have no first-person knowledge of how our retinoid mechanism does the trick.
You say: “So they have a veridical experience that conforms with what actually happens at the neurophysiological level, which they then utterly forget in favour of a second, illusory experience, which they take as canonical, having no memory of the original?”
The phi experience that conforms with what actually happens at the neurophysiological level is *not* veridical because the veridical visual stimulus is *not* a dot in motion. There is only one illusory experience. It is of a colored dot moving from one place (P1) to another (P2) and changing color at some place in between P1 and P2. The *judgement* of where the color change might have taken place is *not* an illusory experience; it is a judgement *about* an indeterminate phenomenal event (exactly where did the color change?). I don’t understand what you mean by “a second, illusory experience, which they take as canonical, having no memory of the original.” Can you clarify this?
‘Veridical’ is the wrong term: ‘true to what is going on in the brain’ is what I mean.
Subjects in these experiments report experiencing the change midway, not ‘judging’ that the change must have happened given ‘an indeterminate phenomenal event,’ do they not?
I don’t see how the phi experience is problematic if the brain responds more slowly to the object’s position (i.e., a retinoid hysteresis response) while the color qualia mechanism is more direct and faster. It’s like the old movie days, when you had a separate video track and sound track, so sometimes they were out of sync.
When I started reading philosophy 5 years ago, I read a lot of Dennett’s books. As a popular author, he focused a lot on intuition-breakers, which made him popular with a lot of laymen.
From the engineer’s perspective, multiple drafts is how most engineering systems are designed, which was not at all apparent to people in Descartes’s day. What’s usually at the center of our thinking minds are concepts, so the concept of a homunculus is a recursive concept. From the linguistic POV I’m more a fan of the Cartesian Lecture Hall.
For you phi fans and Dennett debunkers, I got a lot of pleasure out of reading this:
Critique of a Homuncular Model of Mind from the Neo-Vitalist Perspective
http://dspace.sunyconnect.suny.edu/handle/1951/44884
It is problematic for views like Arnold’s, where the time of representing carries over into the representation of time. Very problematic, I think, anyway… I’m still trying to get a clear picture of his argument. I’m going to check out that link now!
Scott: “Subjects in these experiments report experiencing the change midway, not ‘judging’ that the change must have happened given ‘an indeterminate phenomenal event,’ do they not?”
Yes, but a report of an experience is not the same as the experience itself. When you report on an experience you are having, you have to analyze it, and make judgements about it on which to base your report. Your report might be right or wrong, but the experience itself is just what it is, neither right nor wrong. I have experienced the two-color phi phenomenon and I assure you that trying to discriminate where the color changes in the very rapid illusory motion is very difficult. Where discrimination is fuzzy, the safest bet is to report the change somewhere in the middle — the safety of selecting the mean. Others report a similar problem in discriminating where the color seemed to change.
Arnold: “Your report might be right or wrong, but the experience itself is just what it is, neither right nor wrong.”
And what might this ‘experience itself’ be? You do see the problem here.
Scott: “And what might this ‘experience itself’ be? You do see the problem here.”
I assume that you are referring to the so-called “hard problem”. In the retinoid theory of consciousness, a phenomenal experience is just the global pattern of autaptic-cell activity on the Z-planes within one’s putative egocentric retinoid space (see Fig. 8 in “Space, self, and the theater of consciousness”) during the *extended present*. In the phi experiment, the global experience has to be decomposed, parsed, and analyzed by pre-conscious cognitive brain mechanisms before a verbal report of the phenomenal experience can be expressed. For more about this see *The Cognitive Brain* (MIT Press, 1991), Ch. 7, “Analysis and Representation of Object Relations”, here:
The hard problem is turned into an insoluble problem by the mistaken notion that consciousness/feeling must be something that is *added* to an essential brain process — the activity of a particular kind of brain mechanism. So the objection is repeated “But the *doing* of the brain mechanism does not explain its *feeling*!” If we adopt a monistic stance, then the processes — the doings — of the conscious biophysical brain must *constitute* conscious experience, and nothing has to be added to these essential brain processes.
I have argued that we are conscious only if we have an experience of *something somewhere* in perspectival relation to our self. The minimal state of consciousness/feeling is a sense of being at the center of a volumetric surround. This is our minimal phenomenal world that can be “filled up” by all kinds of other feelings. These consist of our perceptions and other cognitive content such as your emotional reaction in response to reading this comment.
On the basis of this view of consciousness, I proposed the following working definition of consciousness:
*Consciousness is a transparent brain representation of the world from a privileged egocentric perspective*
The scientific problem then is to specify a system of brain mechanisms that can realize this kind of egocentric representation. It is clear that it must be some kind of global workspace, but a global workspace, as such, is not conscious — think of a Google server center. What is needed is *subjectivity*, a fixed locus of spatiotemporal perspectival origin within a surrounding plenum. I call this the *core self* within a person’s phenomenal world. A brain mechanism that can satisfy this constraint would satisfy the minimal condition for being conscious. I have argued that the neuronal structure and dynamics of a detailed theoretical brain model that I named the *retinoid system* can do the job, and I have presented a large body of clinical and psychophysical evidence that lends credence to the retinoid model of consciousness.
I wasn’t referring to the Hard Problem at all, in fact – although it does loom over all these debates. The question is far simpler: what is this ‘experience itself’ outside of what people think they experience?
If you say, ‘Neurofunction x!’ then you immediately face a whole host of pretty difficult questions, not the least of which is the way conscious experience is utterly blind to neurofunctions. Stipulating mind-brain identity is one thing, but short of some account of why this strikes so many as so obviously wrong, that’s all you’re doing: stipulating. You might as well be saying, ‘God is love.’ Of course, you’ll want to object that your project is empirical, that it offers many plausible explanations of many different phenomena, and that this should be enough, given that this is all any other scientific account has required… but none of this changes the fact that you are simply defining the problem away, when what you need is a principled way of explaining it away – this is why I’m always trying to convince you that BBT is your friend!
The problem of consciousness is quite unique in the history of science, for many reasons, not the least of which is the way it takes the very frame of empirical observation as its object of explanation. It ‘frames the frame’ so to speak. You have all these apparently immediate intuitions inveighing against your explanation, not the least of which is the fact that when it comes to things like the phi phenomenon, there’s not a neuron to be found. But the list goes on.
If you were to dump the program memory contents of your GPS as streams of 1’s and 0’s, it would be informationally rich but meaningless to us unless we converted it into a higher-level language with comments etc. To the GPS, not all of the data is pure numeric data; certain bit patterns command the electronic hardware to enable other internal devices and subroutines etc. We can say that the electronic platform itself is information, or that our own nervous systems are the information itself (a Joseph Hellerism: “We are the bombardiers”). Even if our nervous systems had 1000-fold complexity we would still have the limitation of informatic blindness; so even if we move up into higher intuition we are still skyhooking ourselves, though we think we are doing otherwise. Informational blindness is an inherent aspect of any informational system, so even if we break a geocentric theory with heliocentrism, we are simply moving up to a higher state of “seeing”, but still blind.
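The dump example is easy to make concrete. Here is a minimal Python sketch (the byte values and the interpretive frames are invented for illustration; nothing GPS-specific is modeled): the same four bytes read three different ways, none of which the bit stream itself fixes.

```python
import struct

raw = b'\x42\x48\x96\x49'   # four arbitrary bytes (hypothetical dump)

bits = ''.join(f'{b:08b}' for b in raw)   # the "pure" 1's and 0's
as_int = struct.unpack('>I', raw)[0]      # frame 2: big-endian unsigned int
as_float = struct.unpack('>f', raw)[0]    # frame 3: big-endian IEEE-754 float

print(bits)       # 01000010010010001001011001001001
print(as_int)     # one reading: a large integer
print(as_float)   # another: roughly 50.15 -- could pass for a latitude
```

Whether those bits are data, a command, or hardware noise is supplied by the interpreting system, not by the bits, which is VicP’s point about the dump being “informationally rich but meaningless.”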
Ayuh. Now you’re starting to see. It’s all a matter of heuristic bootstrapping, up and down. Brain blindness is ineliminable. The more neural processors you add to track neural processing, the greater the ‘untracked’ processing load becomes. Taking nonsemantic information as our unexplained explainer, I’m arguing, provides the most effective means of throwing normative and referential heuristics into relief, thus showing us not only why consciousness has hitherto seemed so impenetrable to thought, but why it has the structure it does. It allows us to restrict the deployment of either heuristic to the problems/ecologies matched to them (or in other words, it opens up an entirely new domain of cognitive psychological research), and so to sort ‘scope violation problems’ (such as, just for instance, the ‘content determination problem’ in representational theories of mind) – which are simply artifacts of deploying heuristics out of school – from the ‘real’ problems.
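The tracking regress can be put in toy arithmetic. A minimal sketch, assuming (purely for illustration) that one monitor can track ten units of processing: each layer of monitors is itself untracked processing, so no finite stack of monitors closes the gap.

```python
# Toy arithmetic for the tracking regress. COVERAGE is an invented figure:
# suppose one monitor can track ten units of processing.
COVERAGE = 10
untracked = 1000.0   # hypothetical processing units to be tracked

layers = []
for _ in range(6):
    monitors = untracked / COVERAGE  # monitors needed for this layer...
    layers.append(monitors)
    untracked = monitors             # ...which are themselves untracked

print(layers)   # [100.0, 10.0, 1.0, 0.1, 0.01, 0.001] -- never reaches zero
```

The residue shrinks with each layer under these assumed numbers, but it never vanishes, and every monitor added is itself new processing that goes untracked: brain blindness as a structural feature, not a contingent gap.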
Is there a more effective heuristic? Who knows? The big thing is that this provides an entirely novel, parsimonious and principled way to reconceive, not simply consciousness, but philosophy and cognition as a whole. It’s going to be a hard sell, though – and for reasons that it can itself explain. Given informatic neglect, normative and referential heuristics seem universal through and through: How the hell do you convince someone convinced of the universal applicability of their approach that calling qualia ‘sub-existential’ can make any sense? Really, all you can do is keep pointing out all the information missing (because it is obvious when you start thinking about it) and thus the necessarily heuristic nature of the cognitive stance taken, and thus the inevitability of limited scopes of application. Then you keep pointing out the relation between heuristic scope violations and cognitive impasses.
But if you can think of a better way, I’m all ears. This thing is way, way bigger than me!
Scott: The 1’s and 0’s of course represent not information but timing states in the machine, just as observed neural firings represent timing states in our nervous system. I suspect that neural firing timing states enable cellular metabolic processes that are of course still undiscovered. I theorize that cells can cluster into supercells in the sensory cortex areas and thalamocortical areas by locking inner cellular function. These inner structures form Arnold’s 1pp (see below).
The neocortex is multilayered, and I suspect the highest layers “roll up” into the functionality in the frontal lobes, which forms Arnold’s 3pp.
Scott, yes, you can say that BBT is my friend, but BBT is just another way of labeling what we neuroscientists have known for over a century, namely that a person cannot perceive anything about his/her brain because there are no sensory receptors in the brain to detect the structure and workings of the person’s relevant brain mechanisms; i.e., brain blindness. You are right, however, in emphasizing this fact in an effort to “explain away” the unique problem of consciousness. Here is an excerpt from one of my forthcoming articles that I hope will shed more light on the problem:
———————————————————————————————
Dual-Aspect Monism
Each of us holds an inviolable secret — the secret of our inner world. It is inviolable not because we vow never to reveal it, but because, try as we may, we are unable to express it in full measure. The inner world, of course, is our own conscious experience. How can science explain something that must always remain hidden? Is it possible to explain consciousness as a natural biological phenomenon? Although the claim is often made that such an explanation is beyond the grasp of science, many investigators believe, as I do, that we can provide such an explanation within the norms of science.
However, there is a peculiar difficulty in dealing with phenomenal consciousness as an object of scientific study because it requires us to systematically relate third person descriptions or measures of brain events to first person descriptions or measures of phenomenal content. We generally think of the former as objective descriptions and the latter as subjective descriptions. Because phenomenal descriptors and physical descriptors occupy separate descriptive domains, one cannot assert a formal identity when describing any instance of a subjective phenomenal aspect in terms of an instance of an objective physical aspect, in the language of science. We are forced into accepting some descriptive slack. On the assumption that the physical world is all that exists, and if we cannot assert an identity relationship between a first-person event and a corresponding third-person event, how can we usefully explain phenomenal experience in terms of biophysical processes? I suggest that we proceed on the basis of the following points:
1. Some descriptions are made public; i.e., in the 3rd person domain (3 pp).
2. Some descriptions remain private; i.e., in the 1st person domain (1 pp).
3. All scientific descriptions are public (3 pp).
4. Phenomenal experience (consciousness) is constituted by brain activity that, as an object of scientific study, is in the 3 pp domain.
5. All descriptions are selectively mapped to egocentric patterns of brain activity in the producer of a description and in the consumer of a description (Trehub 1991, 2007, 2011).
6. The egocentric pattern of brain activity – the phenomenal experience – to which a word or image in any description is mapped is the referent of that word or image.
7. But a description of phenomenal experience (1 pp) cannot be reduced to a description of the egocentric brain activity by which it is constituted (there can be no identity established between descriptions) because private events and public events occupy separate descriptive domains.
It seems to me that this state of affairs is properly captured by the metaphysical stance of dual-aspect monism (see Fig. 1) where private descriptions and public descriptions are separate accounts of a common underlying physical reality (Pereira et al 2010; Velmans 2009). If this is the case then to properly conduct a scientific exploration of consciousness we need a bridging principle to systematically relate public phenomenal descriptions to private phenomenal descriptions.
————————————————————————————————–
So we are not only blind to the workings of our own brain, the language that we properly use to describe our own conscious experiences is in a separate domain from the language that we properly use to describe the workings of a brain. How can we possibly describe our personal experience in the descriptive terms of brain activity! Philosophy has to understand this natural obstacle, and science has to adopt an investigative strategy to circumvent this natural obstacle.
Yes. BBT is just common sense! That’s why I think it will tear a hole through the heart of cognitive science when it’s finally recognized and unleashed. Philosophy, on the BBT account, is simply humanity waking up from its millennial anosognosia, and finally coming to recognize, not only the way informatic neglect structures ‘conscious experience,’ but the pernicious way it has led us to continually think depleted and distorted information is sufficient.
But it is NOT a theory of consciousness. That’s up to you and your peers Arnold!
More later… I just realized I have to pick up my little girl. BBT is, potentially anyway, a good way to piss off your wife!
Arnold: By your statement you are saying this is still primitive science:
“However, there is a peculiar difficulty in dealing with phenomenal consciousness as an object of scientific study because it requires us to systematically relate third person descriptions or measures of brain events to first person descriptions or measures of phenomenal content. We generally think of the former as objective descriptions and the latter as subjective descriptions.”
For centuries people observed cloud formations, winds and temperatures and were able to make predictions about atmospheric events, although people didn’t even understand how clouds could hold water. Today, of course, we understand weather phenomena from the system level right down to the microscopic level in the clouds. Unfortunately, consciousness science has some understanding of nervous subsystem interactions, but there is no intuitive science of qualia/feeling emergence at the cellular level. Doing consciousness studies is akin to trying to reverse-engineer a computer when the investigator has no understanding of what’s in the microdevices. By tracing waveforms he may gain some clues, but even trying to gain insight at the macrolevel can be misleading.
VicP: “Doing consciousness studies is akin to trying to reverse engineer a computer when the investigator has no understanding of what’s in the microdevices.”
Yes, but when we create a theoretical model of the “microdevice” of the human cognitive brain, we *do know* what’s in the theoretical microdevice; i.e., its hypothesized structure and dynamics. And we also know what our conscious experiences are like. Moreover, we know that the activity of a theoretical conscious mechanism/microdevice cannot be *equated* to a conscious experience, just as we know that a theoretical model of weather phenomena cannot be equated to a real weather pattern. Nevertheless, the theoretical model of weather activity *explains* the weather patterns that we experience. Why do we believe that a theoretical model of weather actually explains the weather that we experience? We do so because the model successfully *predicts* (within tolerable limits of error) the weather that is supposed to happen. Why should we believe the retinoid model of consciousness? For the same reason that we believe a good theoretical model of weather patterns, namely because the retinoid model successfully predicts a wide range of previously inexplicable conscious phenomena, and even successfully predicts the occurrence of conscious phenomena that are totally novel!
Arnold: I see your point when you treat the entire brain as the unknown or microelectronic device, and I can see how gathered data and predictability models may coincide.
I was equating the neurons themselves with the microdevices. Computer microdevices are designed for all types of functionality: some for storage, some for data manipulation, some as intermediary buffers, and some simply regulate the current which feeds the devices, etc. For microelectronic devices we know they all operate on the principle of energy storage (charged = 1, discharged = 0); however, neurons may fulfill an energy storage role while another metabolic process occurring in them forms conscious/feeling/qualia states. I believe this potential “inner dualism” is what Chalmers alludes to with the zombie conjecture.
Likewise, weather systems are physical systems, but once again they are massive atmospheric energy systems, which ancient man may have equated to conscious entities or gods of weather etc.
Back, but with only a few moments before I have to pick up my daughter, I see. I’m sure you’ve already read McGinn’s “Can We Solve the Mind-Body Problem?”, but the description you give is very close to (and more elegantly expressed than) the one he gives as a thumbnail of the prevailing understanding of the problem: a kind of battle-of-the-registers/faculties assumption. But this is really just a problem with mismatched intuitive registers, suggesting that the ‘bridging principle’ need only be rhetorical. I think it goes even deeper, and I think this is precisely where the conceptual utility of BBT and its reliance on nonsemantic information shines.
On the BBT account, all cognition is heuristic or special purpose. Many of these heuristics have been sussed out by cognitive psychology, and I’m sure this trend will continue. Given the indirect nature of the evidence, the work of sorting what tool does what in our cognitive tool box will likely take some time. Informatic neglect means that we are blind to our own cognitive tools, short of experimentation and learning. High up on this heuristic food-chain, BBT proposes, lies what might be called the ‘Reference Heuristic,’ our propensity to see things in representational terms. There can be little doubt that the ‘representational paradigm’ is heuristic. Why? Because like all heuristics it turns on the neglect of information to leverage various problem-solving efficiencies. In this case, what is neglected is nothing other than all the CAUSAL information pertaining to our cognitive relation to the world. With reference to what you call a bridging principle, this has tremendous implications. As a heuristic, the representational paradigm necessarily has a limited scope of applicability: it is not a universal problem-solver. It only seems that way because of informatic neglect, once again. It seems fair to assume that, if anything, it is primarily adapted to environmental problem-solving – in other words, third-person cognition. As a result it has no problem dealing with causality in our environments: it’s tailor-made for causal explanations of things not itself. It neglects almost all information pertaining to your informatic relation to your environment, and just delivers objects bouncing around in relation to one another – perfect fodder for causal explanation. But when you take this heuristic to the question of consciousness and the brain, everything goes haywire. It transforms ‘qualia’ into things possessing relations to other things, and asks, as it is prone to do, the most obvious, natural question of them all: what are these damn things! But neglecting your informatic relation to functionally independent systems in your environment is one thing; neglecting your informatic relation to functionally dependent systems in your own brain is something altogether different. You have quite clearly violated the scope of applicability of the representational heuristic.
A ‘bridging principle’ suggests that some kind of ‘local fix’ is needed, when BBT suggests that a radically new paradigm is required – a whole new frame of reference. What cognitive psychology is discovering is that the representational paradigm and science are not coextensive, that the latter probably outruns the heuristic boundaries of the former by a good margin. This is why I hedged way back when you asked me if I was a dual-aspect monist on Peter’s blog, Arnold. It too represents an attempt to square the representationalist circle. So long as theorists continue universalizing the Reference Heuristic, the Mysterian wins. As soon as they accept that it is just another heuristic, and most importantly, that there are other ways of framing all these problems, well then, we are off to the races! Just like Al the Alien says…
The final picture is intuitively jarring, precisely what an evolved bundle of mutually blind cognitive systems should expect.
Shit. Late again…
Scott: If I understand correctly, the Reference Heuristic is our own third-person observer. I think the root of it is our own bodily movement, or the actualization of our own motor system. When you question a minimal temporal field, well, maybe it’s big enough to remember our last two steps; but (((three steps back(four)five)))etc.? Of course science has a way around this, and we can “expand” the Reference Heuristic and minimal temporal field by recording our steps with our video phone.
When we attempt to expand our basic Reference Heuristic with language, we have no problem observing external events, like someone else’s movement, but the internal linguistic investigation of our own minds leads to brain knots. We run into semantic problems, or the language problem, because our own internal speech/heuristic system is not well understood and our own third-person reporting system is equally complex.
In other words, the problem is soluble by breaking it into two problems: the inability to understand information “formation” at the neural level (VicP), which is the root of BBT (R. Scott), and the inability to understand 1pp/3pp observation language (Dr Trehub), caused by the complex layering of the brain (apparent Cognitive Closure, which leads to HOTs).
If you think of representational theories of mind more generally, it’s the TRANSPARENCY of phenomenal experience, along with some commitment to truth-conditional semantics (viz., a particular philosophy of language), that really seems to motivate the accounts. Prima facie at least, representationalism allows them to hold onto a serviceable account of meaning and objectivity and intuitively ‘account’ for phenomenal consciousness, and yet, as you say, the problems pile up as soon as they begin asking questions. But as this connection suggests, the language problem you mention is actually of a piece with the reference-heuristic problem: the informatic neglect of transductive and neurofunctional causal information (transparency) works very well when dealing with systems that are largely functionally independent of cognition, but as soon as you turn it on functionally interdependent systems, the heuristic literally neglects the very thing it is attempting to cognize.
I’ll be posting on this very topic very soon…
Scott: “You have all these apparently immediate intuitions inveighing against your explanation, not the least of which is the fact that when it comes to things like the phi phenomenon, there’s not a neuron to be found.”
The situation is not quite so bleak. Here is just one example: Larsen et al (2006), employing fMRI, showed that when subjects experienced illusory motion between two spots of sequentially flashed light, there was a *path of brain activation in the visual system corresponding to the path of illusory motion*!
Scott, for me it is a truism that *all* evolutionary adaptations are biological heuristics. I have even proposed the *heuristic self-locus* as a neuronal explanation of selective attention. So I have no disagreement with you on this point.
You say: “On the BBT account, all cognition is heuristic or special purpose. Many of these heuristics have been sussed out by cognitive psychology, and I’m sure this trend will continue. Given the indirect nature of the evidence, the work of sorting what tool does what in our cognitive tool box will likely take some time. Informatic neglect means that we are blind to our own cognitive tools, short of experimentation and learning.”
Here you leave out the critical role of theoretical modeling. What you call informatic neglect is really an information horizon, and it has traditionally been the role of science to extend our information horizon by proposing *theoretical* entities and processes that are supposed to exist beyond our current observations and conceptions. We are blind to these theoretical entities (e.g., the Higgs boson, retinoid space), but we can provisionally accept their reality just in case their observable *effects* correspond to what is predicted on the basis of their theoretical properties. Tackling the problem of consciousness this way is more difficult than theorizing about quarks and such because, as you suggest, it seems self-evident that thinking about the activity of neurons in one’s brain cannot be the same as the activity of neurons in one’s brain. I read McGinn’s paper only after you called it to my attention, and I was struck by McGinn’s initial formulation of the mind-body problem:
“How is it possible for conscious states to depend upon brain states?”
It seems to me that putting the question this way leads one to think of conscious states as *arising* from brain states — that there are two different states, a brain state and a conscious state that depends on the brain state. I would put the question differently:
“How is it possible for brain states to *be* conscious states?”
Scott: “A ‘bridging principle’ suggests that some kind of ‘local fix’ is needed, when BBT suggests that a radically new paradigm is required – a whole new frame of reference.”
The bridging principle that I have proposed is what I think is needed to formulate and test explicit brain models of consciousness. I’m not a philosopher so here I’m on shaky ground, but is your “new frame of reference” a novel metaphysical stance? Why don’t you propose it on the PhilPapers Forum and have it discussed by other philosophers?
I think it’s a truism as well, Arnold. All I’m really doing is working a figure-field switch on the ways we can reason through the implications of that truism. ‘Information horizon’ is actually my preferred term (it’s the one I use throughout “The Last Magic Show” for instance), but ‘informatic neglect’ seems more apropos of discussing their role with respect to our problem-solving capacities. McGinn and the ‘cognitive closure’ thesis are actually a great way to illustrate the versatility this provides. I agree with you that his expressive choices guide him down problematic paths, but I’m not convinced that your alternate formulation doesn’t suffer similar problems – and I say this as someone who uses the same formula.
To see how this could be so, just apply the logic of the heuristic truism to your own nest of assumptions: as fundamental as it is, our notion of being has to be heuristic, which is to say, a cognitive artifact of some kind of informatic neglect. As such, it possesses a scope of application – it is not universal. This is what I take to be McGinn’s mistake: what he’s really making is a scope-of-applicability argument for a particular heuristic that he then universalizes. The psychophysical link lies outside the purview of our epistemic heuristic, therefore we will never be able to know it. The assumption he’s making (because he doesn’t recognize the heuristic nature of his target) is that the epistemic heuristic is ALL WE HAVE, and here he’s dead wrong. This is what I meant when I said earlier that science outruns epistemology.
So for me, whenever I mention mind/brain identity, I always try to remind myself that this is actually a heuristic shortcut, a way to make sense of consciousness, that we can either hold onto or dispense with depending on which way our questions take us.
Regarding the PhilPapers forum: the plan, for now, is to refine my position via interactions like these, with help and criticism from people like you (and other philosophers I correspond with) and, most importantly, to build an arsenal of very difficult-to-answer questions before taking the next step. You know better than most that philosophers often tend to be the most close-minded souls anyone can encounter! I take human theoretical incompetence (TI) outside the sciences to be a matter of empirical fact – the evidence is nothing short of mountainous. This means that pretty much ANY theoretical position can be rationalized, given time, skills, and ingenuity – things which philosophers have in abundance (back in my day only physicists outscored them on the GREs). This is what makes science the accomplishment it is, something that needs to be learned. The power of questions lies in the way they make ignorance visible, how they can pluck what engineers call ‘unk-unks’ (unknown unknowns) out of oblivion.
So I’ll branch out when the time is right. In the meantime, I get an extraordinary amount of traffic considering the incredibly dense and esoteric nature of my posts. To tell the truth, the reason I LOVE this way of pursuing the problems is that it feels more honestly philosophical, peripatetic in the ancient Greek sense, only with finger-taps instead of feet! The one reason I despair of doing philosophy the traditional way, papers and monographs, is that I seem constitutionally unable to stand still on any one topic. I’ve always been obsessional and neurotic, interest-driven to a fault given the way my interests refuse to stand still! Whether or not I’m onto something that science will ultimately vindicate, I have no idea. What I do know is that this interpretative angle I’ve stumbled upon is more than worth exploring, and at least as theoretically interesting as outlooks like HOT.
Given the nature of the problem, I figure any novel outlook possessing systematic interpretative consequences is worth its weight in gold.
“You know better than most that philosophers often tend to be the most close-minded souls anyone can encounter! I take human theoretical incompetence (TI) outside the sciences to be a matter of empirical fact – the evidence is nothing short of mountainous.”
See: nearly every blustering near-dead brain in a tenure jar who busted a non-comprehending gasket to After Finitude.
Hilarious. After Finitude is tenure-juice.
Scott: “I agree with you that his expressive choices guide him down problematic paths, but I’m not convinced that your alternate formulation doesn’t suffer similar problems – and I say this as someone who uses the same formula. To see how this could be so, just apply the logic of the heuristic truism to your own nest of assumptions: as fundamental as it is, our notion of being has to be heuristic, which is to say, a cognitive artifact of some kind of informatic neglect. As such, it possesses a scope of application – it is not universal.”
I thought that I made it clear that I believe all of our formulations are heuristic. So we don’t disagree on this point. A proposal with a universal application could only be expressed by an omniscient source. Science is not omniscient; it is a pragmatic enterprise. In science, formulations are judged by their success in explaining/predicting relevant empirical events and are always provisional. So even though all formulations about consciousness are heuristic, some formulations are more useful heuristics than others, within scientific norms. A relevant question would be “What competing theoretical model explains consciousness better than the retinoid model?” We are all ignorant of the underlying reality and we do the best that we can.
We’re pretty much on the same page, Arnold: the only place where we depart, as far as I can see, is on how best to characterize the explanandum, and to what degree we should allow intentional concepts into our explanans. I don’t think there’s such a thing as ‘representations’ that can do mechanistic explanatory work or ‘egocentric perspectives’ that require positive explanations. But I appreciate how far out on a limb I am with these claims.
I also think phrases like ‘ignorant of the underlying reality’ express a limited heuristic approach. These days I look at everything in terms of granularity and effective reach: how far a theory will take us across domains, and how deep it will take us within any one domain. ‘Reality’ is simply a way to simplify, to posit a pretend informatic omniscience to provide a ground floor to brace our explanations.
Scott: “I don’t think there’s such a thing as ‘representations’ that can do mechanistic explanatory work or ‘egocentric perspectives’ that require positive explanations.”
I assume you believe that there is a real physical world of which you have a conscious experience. Am I wrong? I assume that you believe that our experience of this real world is indirect rather than direct (i.e., you are not a naive realist). Then isn’t it proper to think of your experience of the world as a personal *representation* of the real world around you rather than the real thing? If not, why not?
If there are such personal representations, the evidence is overwhelming that they exist as dynamic patterns of neuronal activity in the brain and, as such, these representations constitute the essential source of our internal egocentric/perspectival stimulation that enables us to observe and learn about the world we live in. So, in this heuristic, representations really do mechanistic explanatory work. For an example of the kind of work they do, see *The Cognitive Brain*, Ch. 12, “Self-Directed Learning In a Complex Environment”, here:
My latest post happens to be on this very topic, Arnold. By all means, send the link to your colleagues: https://rsbakker.wordpress.com/2012/10/22/v-is-for-defeat-the-total-and-utter-annihilation-of-representational-theories-of-mind/
BBT takes you beyond the realism/idealism dichotomy. Since you agree that human cognition is both plural and heuristic, you agree that belief in ‘a real physical world of which you have a conscious experience’ is the product of heuristic intuition. This means you will also agree that, as a heuristic, it’s good only as far as it goes. On my view, the nub of the difficulty posed by consciousness hangs upon this, the question of the scope of applicability of the heuristic(s) responsible for discursive environmental cognition – for science, in effect. This is literally why I think science outruns realism: the latter is the expression of an evolutionary cognitive adaptation that possesses a more restricted scope of applicability. All this time we’ve been trying to make science answer to realism, when the issue (on the BBT account, at least) is precisely the inverse! In a sense all realism is naive: all that varies is the location of the naivete.
So I think we are products of a physical universe because science treats us as products of a physical universe. To make this answer to some philosophical notion of the ‘real’ or the ‘ideal’ is literally to get the cognitive pecking order backward, to plop science within a heuristic frame it quite clearly outruns.
What’s doing the ‘work’ in the contexts you cite are homomorphisms, not representations. Homomorphisms structurally recapitulate various environmental features according to broader neurofunctional and biological demands. ‘Representational content’ is best thought of as a cognitive illusion secondary to informatic constraints placed on metacognition. The post goes into more intimate detail.
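For a minimal illustration of the homomorphism point (all names and values invented for the purpose): an encoding that structurally recapitulates one environmental feature, the ordering of stimulus intensities, while neglecting everything else, and that earns its explanatory keep purely through that structural correspondence, with no ‘content’ required.

```python
# Toy contrast between 'representation' and homomorphism: an encoding that
# merely preserves the ORDERING of stimulus intensities while neglecting
# everything else. All names and values are hypothetical.

def encode(intensity: float) -> int:
    """Monotone, lossy map from stimulus intensity to a coarse spike count."""
    return min(int(intensity // 10), 7)   # 3-bit code; most detail neglected

stimuli = [3.0, 25.0, 47.0, 90.0]   # invented environmental magnitudes
codes = [encode(s) for s in stimuli]

# The 'work' done is structural: order out there is recapitulated as order
# in here, with no further 'aboutness' required of the code.
assert all(a <= b for a, b in zip(codes, codes[1:]))
print(codes)   # [0, 2, 4, 7]
```

Nothing in the code ‘stands for’ anything; the mapping just preserves enough structure to be exploited downstream, which is all the mechanistic story needs.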